Apr 21 10:21:33.992980 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026
Apr 21 10:21:33.993007 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:21:33.993015 kernel: BIOS-provided physical RAM map:
Apr 21 10:21:33.993022 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Apr 21 10:21:33.993028 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Apr 21 10:21:33.993036 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 21 10:21:33.993043 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Apr 21 10:21:33.993050 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Apr 21 10:21:33.993056 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 21 10:21:33.993061 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 21 10:21:33.993068 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 21 10:21:33.993078 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 21 10:21:33.993085 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Apr 21 10:21:33.993094 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 21 10:21:33.993101 kernel: NX (Execute Disable) protection: active
Apr 21 10:21:33.993110 kernel: APIC: Static calls initialized
Apr 21 10:21:33.993118 kernel: SMBIOS 2.8 present.
Apr 21 10:21:33.993125 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Apr 21 10:21:33.993131 kernel: Hypervisor detected: KVM
Apr 21 10:21:33.993145 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 21 10:21:33.993155 kernel: kvm-clock: using sched offset of 5875010040 cycles
Apr 21 10:21:33.993166 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 21 10:21:33.993174 kernel: tsc: Detected 2000.000 MHz processor
Apr 21 10:21:33.993181 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 10:21:33.993187 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 10:21:33.993194 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Apr 21 10:21:33.993201 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 21 10:21:33.993231 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 10:21:33.993245 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Apr 21 10:21:33.993252 kernel: Using GB pages for direct mapping
Apr 21 10:21:33.993262 kernel: ACPI: Early table checksum verification disabled
Apr 21 10:21:33.993269 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Apr 21 10:21:33.993276 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:21:33.993283 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:21:33.993289 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:21:33.993295 kernel: ACPI: FACS 0x000000007FFE0000 000040
Apr 21 10:21:33.993302 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:21:33.993311 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:21:33.993319 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:21:33.993330 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:21:33.993346 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Apr 21 10:21:33.993353 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Apr 21 10:21:33.993360 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Apr 21 10:21:33.993370 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Apr 21 10:21:33.993377 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Apr 21 10:21:33.993383 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Apr 21 10:21:33.993390 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Apr 21 10:21:33.993397 kernel: No NUMA configuration found
Apr 21 10:21:33.993404 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Apr 21 10:21:33.993410 kernel: NODE_DATA(0) allocated [mem 0x17fff8000-0x17fffdfff]
Apr 21 10:21:33.993417 kernel: Zone ranges:
Apr 21 10:21:33.993427 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 10:21:33.993434 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 21 10:21:33.993444 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Apr 21 10:21:33.993454 kernel: Movable zone start for each node
Apr 21 10:21:33.993461 kernel: Early memory node ranges
Apr 21 10:21:33.993472 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 21 10:21:33.993478 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Apr 21 10:21:33.993485 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Apr 21 10:21:33.993492 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Apr 21 10:21:33.993502 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 10:21:33.993512 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 21 10:21:33.993518 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Apr 21 10:21:33.993525 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 21 10:21:33.993532 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 21 10:21:33.993538 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 21 10:21:33.993545 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 21 10:21:33.993552 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 21 10:21:33.993559 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 10:21:33.993565 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 21 10:21:33.993575 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 21 10:21:33.993582 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 10:21:33.993593 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 21 10:21:33.993604 kernel: TSC deadline timer available
Apr 21 10:21:33.993615 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 21 10:21:33.993623 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 21 10:21:33.993630 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 21 10:21:33.993637 kernel: kvm-guest: setup PV sched yield
Apr 21 10:21:33.993643 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 21 10:21:33.993658 kernel: Booting paravirtualized kernel on KVM
Apr 21 10:21:33.993669 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 10:21:33.993677 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 21 10:21:33.993684 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 21 10:21:33.993690 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 21 10:21:33.993697 kernel: pcpu-alloc: [0] 0 1
Apr 21 10:21:33.993706 kernel: kvm-guest: PV spinlocks enabled
Apr 21 10:21:33.993714 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 21 10:21:33.993724 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:21:33.993741 kernel: random: crng init done
Apr 21 10:21:33.993751 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 10:21:33.993763 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 10:21:33.993804 kernel: Fallback order for Node 0: 0
Apr 21 10:21:33.993815 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Apr 21 10:21:33.993822 kernel: Policy zone: Normal
Apr 21 10:21:33.993833 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 10:21:33.993844 kernel: software IO TLB: area num 2.
Apr 21 10:21:33.993860 kernel: Memory: 3966212K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 227300K reserved, 0K cma-reserved)
Apr 21 10:21:33.993872 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 21 10:21:33.993881 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 21 10:21:33.993891 kernel: ftrace: allocated 149 pages with 4 groups
Apr 21 10:21:33.993902 kernel: Dynamic Preempt: voluntary
Apr 21 10:21:33.993914 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 10:21:33.993922 kernel: rcu: RCU event tracing is enabled.
Apr 21 10:21:33.993930 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 21 10:21:33.993940 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 10:21:33.993950 kernel: Rude variant of Tasks RCU enabled.
Apr 21 10:21:33.993957 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 10:21:33.993963 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 10:21:33.993973 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 21 10:21:33.993982 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 21 10:21:33.993994 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 10:21:33.994004 kernel: Console: colour VGA+ 80x25
Apr 21 10:21:33.994010 kernel: printk: console [tty0] enabled
Apr 21 10:21:33.994017 kernel: printk: console [ttyS0] enabled
Apr 21 10:21:33.994028 kernel: ACPI: Core revision 20230628
Apr 21 10:21:33.994039 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 21 10:21:33.994047 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 10:21:33.994058 kernel: x2apic enabled
Apr 21 10:21:33.994075 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 21 10:21:33.994085 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 21 10:21:33.994092 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 21 10:21:33.994098 kernel: kvm-guest: setup PV IPIs
Apr 21 10:21:33.994110 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 21 10:21:33.994121 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 21 10:21:33.994132 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Apr 21 10:21:33.994144 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 21 10:21:33.994155 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 21 10:21:33.994162 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 21 10:21:33.994172 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 10:21:33.994183 kernel: Spectre V2 : Mitigation: Retpolines
Apr 21 10:21:33.994195 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 21 10:21:33.994776 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 21 10:21:33.994788 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 21 10:21:33.994796 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 21 10:21:33.994803 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Apr 21 10:21:33.994811 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Apr 21 10:21:33.994818 kernel: active return thunk: srso_alias_return_thunk
Apr 21 10:21:33.994825 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Apr 21 10:21:33.994831 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 21 10:21:33.994847 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:21:33.994859 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 10:21:33.994871 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 10:21:33.994882 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 10:21:33.994889 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 21 10:21:33.994895 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 10:21:33.994907 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Apr 21 10:21:33.994914 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Apr 21 10:21:33.994921 kernel: Freeing SMP alternatives memory: 32K
Apr 21 10:21:33.994931 kernel: pid_max: default: 32768 minimum: 301
Apr 21 10:21:33.994938 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 10:21:33.994944 kernel: landlock: Up and running.
Apr 21 10:21:33.994951 kernel: SELinux: Initializing.
Apr 21 10:21:33.994962 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:21:33.994972 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:21:33.994979 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Apr 21 10:21:33.994986 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:21:33.994993 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:21:33.995002 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:21:33.995021 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 21 10:21:33.995034 kernel: ... version: 0
Apr 21 10:21:33.995042 kernel: ... bit width: 48
Apr 21 10:21:33.995081 kernel: ... generic registers: 6
Apr 21 10:21:33.995088 kernel: ... value mask: 0000ffffffffffff
Apr 21 10:21:33.995094 kernel: ... max period: 00007fffffffffff
Apr 21 10:21:33.995105 kernel: ... fixed-purpose events: 0
Apr 21 10:21:33.995117 kernel: ... event mask: 000000000000003f
Apr 21 10:21:33.995134 kernel: signal: max sigframe size: 3376
Apr 21 10:21:33.995146 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 10:21:33.995155 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 10:21:33.995163 kernel: smp: Bringing up secondary CPUs ...
Apr 21 10:21:33.995169 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 10:21:33.995176 kernel: .... node #0, CPUs: #1
Apr 21 10:21:33.995183 kernel: smp: Brought up 1 node, 2 CPUs
Apr 21 10:21:33.995190 kernel: smpboot: Max logical packages: 1
Apr 21 10:21:33.995196 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Apr 21 10:21:33.995230 kernel: devtmpfs: initialized
Apr 21 10:21:33.995238 kernel: x86/mm: Memory block size: 128MB
Apr 21 10:21:33.995245 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 10:21:33.995251 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 21 10:21:33.995258 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 10:21:33.995265 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 10:21:33.995275 kernel: audit: initializing netlink subsys (disabled)
Apr 21 10:21:33.995287 kernel: audit: type=2000 audit(1776766893.317:1): state=initialized audit_enabled=0 res=1
Apr 21 10:21:33.995297 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 10:21:33.995307 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 10:21:33.995314 kernel: cpuidle: using governor menu
Apr 21 10:21:33.995321 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 10:21:33.995327 kernel: dca service started, version 1.12.1
Apr 21 10:21:33.995338 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 21 10:21:33.995350 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 21 10:21:33.995361 kernel: PCI: Using configuration type 1 for base access
Apr 21 10:21:33.995368 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 10:21:33.995375 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 10:21:33.995385 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 10:21:33.995391 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 10:21:33.995398 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 10:21:33.995405 kernel: ACPI: Added _OSI(Module Device)
Apr 21 10:21:33.995415 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 10:21:33.995427 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 10:21:33.995438 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 10:21:33.995450 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 21 10:21:33.995461 kernel: ACPI: Interpreter enabled
Apr 21 10:21:33.995472 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 21 10:21:33.995479 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 10:21:33.995488 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 10:21:33.995496 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 10:21:33.995503 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 21 10:21:33.995509 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 10:21:33.995786 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 10:21:33.995965 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 21 10:21:33.996145 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 21 10:21:33.996163 kernel: PCI host bridge to bus 0000:00
Apr 21 10:21:33.996355 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 10:21:33.996518 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 10:21:33.996666 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 10:21:33.996814 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Apr 21 10:21:33.996997 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 21 10:21:33.997158 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Apr 21 10:21:33.997336 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 10:21:33.997556 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 21 10:21:33.997748 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 21 10:21:33.997951 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 21 10:21:33.998285 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 21 10:21:33.998463 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 21 10:21:33.998627 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 10:21:33.998799 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Apr 21 10:21:33.998934 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Apr 21 10:21:33.999073 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 21 10:21:33.999386 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 21 10:21:34.000695 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Apr 21 10:21:34.000887 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Apr 21 10:21:34.001039 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 21 10:21:34.001175 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 21 10:21:34.002631 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 21 10:21:34.002785 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 21 10:21:34.002919 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 21 10:21:34.003067 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 21 10:21:34.006781 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Apr 21 10:21:34.006928 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Apr 21 10:21:34.007071 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 21 10:21:34.007199 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 21 10:21:34.007232 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 10:21:34.007241 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 10:21:34.007249 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 10:21:34.007262 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 10:21:34.007270 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 21 10:21:34.007278 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 21 10:21:34.007285 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 21 10:21:34.007293 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 21 10:21:34.007301 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 21 10:21:34.007309 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 21 10:21:34.007316 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 21 10:21:34.007324 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 21 10:21:34.007334 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 21 10:21:34.007342 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 21 10:21:34.007350 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 21 10:21:34.007357 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 21 10:21:34.007365 kernel: iommu: Default domain type: Translated
Apr 21 10:21:34.007373 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 10:21:34.007380 kernel: PCI: Using ACPI for IRQ routing
Apr 21 10:21:34.007388 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 21 10:21:34.007397 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Apr 21 10:21:34.007408 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Apr 21 10:21:34.007549 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 21 10:21:34.007679 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 21 10:21:34.007807 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 21 10:21:34.007818 kernel: vgaarb: loaded
Apr 21 10:21:34.007826 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 21 10:21:34.007834 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 21 10:21:34.007841 kernel: clocksource: Switched to clocksource kvm-clock
Apr 21 10:21:34.007849 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 10:21:34.007862 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 10:21:34.007869 kernel: pnp: PnP ACPI init
Apr 21 10:21:34.008014 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 21 10:21:34.008027 kernel: pnp: PnP ACPI: found 5 devices
Apr 21 10:21:34.008035 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 10:21:34.008043 kernel: NET: Registered PF_INET protocol family
Apr 21 10:21:34.008050 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 10:21:34.008058 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 10:21:34.008070 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 10:21:34.008078 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 10:21:34.008085 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 21 10:21:34.008093 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 21 10:21:34.008101 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:21:34.008109 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:21:34.008117 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 10:21:34.008124 kernel: NET: Registered PF_XDP protocol family
Apr 21 10:21:34.008343 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 21 10:21:34.008466 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 21 10:21:34.008582 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 21 10:21:34.008696 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Apr 21 10:21:34.008809 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 21 10:21:34.008925 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Apr 21 10:21:34.008934 kernel: PCI: CLS 0 bytes, default 64
Apr 21 10:21:34.008942 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 21 10:21:34.008950 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Apr 21 10:21:34.008963 kernel: Initialise system trusted keyrings
Apr 21 10:21:34.008970 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 21 10:21:34.008977 kernel: Key type asymmetric registered
Apr 21 10:21:34.008984 kernel: Asymmetric key parser 'x509' registered
Apr 21 10:21:34.008991 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 21 10:21:34.008998 kernel: io scheduler mq-deadline registered
Apr 21 10:21:34.009006 kernel: io scheduler kyber registered
Apr 21 10:21:34.009014 kernel: io scheduler bfq registered
Apr 21 10:21:34.009021 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 21 10:21:34.009032 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 21 10:21:34.009039 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 21 10:21:34.009047 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 10:21:34.009055 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 21 10:21:34.009062 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 21 10:21:34.009069 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 21 10:21:34.009077 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 21 10:21:34.009238 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 21 10:21:34.009256 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 21 10:21:34.009378 kernel: rtc_cmos 00:03: registered as rtc0
Apr 21 10:21:34.009496 kernel: rtc_cmos 00:03: setting system clock to 2026-04-21T10:21:33 UTC (1776766893)
Apr 21 10:21:34.009614 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 21 10:21:34.009624 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 21 10:21:34.009632 kernel: NET: Registered PF_INET6 protocol family
Apr 21 10:21:34.009640 kernel: Segment Routing with IPv6
Apr 21 10:21:34.009647 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 10:21:34.009655 kernel: NET: Registered PF_PACKET protocol family
Apr 21 10:21:34.009666 kernel: Key type dns_resolver registered
Apr 21 10:21:34.009674 kernel: IPI shorthand broadcast: enabled
Apr 21 10:21:34.009681 kernel: sched_clock: Marking stable (915003330, 353846040)->(1417921490, -149072120)
Apr 21 10:21:34.009688 kernel: registered taskstats version 1
Apr 21 10:21:34.009696 kernel: Loading compiled-in X.509 certificates
Apr 21 10:21:34.009703 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b'
Apr 21 10:21:34.009711 kernel: Key type .fscrypt registered
Apr 21 10:21:34.009718 kernel: Key type fscrypt-provisioning registered
Apr 21 10:21:34.009726 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 21 10:21:34.009736 kernel: ima: Allocated hash algorithm: sha1
Apr 21 10:21:34.009743 kernel: ima: No architecture policies found
Apr 21 10:21:34.009751 kernel: clk: Disabling unused clocks
Apr 21 10:21:34.009758 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 21 10:21:34.009765 kernel: Write protecting the kernel read-only data: 36864k
Apr 21 10:21:34.009772 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 21 10:21:34.009780 kernel: Run /init as init process
Apr 21 10:21:34.009787 kernel: with arguments:
Apr 21 10:21:34.009795 kernel: /init
Apr 21 10:21:34.009805 kernel: with environment:
Apr 21 10:21:34.009812 kernel: HOME=/
Apr 21 10:21:34.009820 kernel: TERM=linux
Apr 21 10:21:34.009830 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:21:34.009841 systemd[1]: Detected virtualization kvm.
Apr 21 10:21:34.009849 systemd[1]: Detected architecture x86-64.
Apr 21 10:21:34.009857 systemd[1]: Running in initrd.
Apr 21 10:21:34.009867 systemd[1]: No hostname configured, using default hostname.
Apr 21 10:21:34.009874 systemd[1]: Hostname set to .
Apr 21 10:21:34.009883 systemd[1]: Initializing machine ID from random generator.
Apr 21 10:21:34.009891 systemd[1]: Queued start job for default target initrd.target.
Apr 21 10:21:34.009899 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:21:34.009923 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:21:34.009937 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 21 10:21:34.009945 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:21:34.009953 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 21 10:21:34.009961 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 21 10:21:34.009971 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 21 10:21:34.009979 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 21 10:21:34.009988 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:21:34.009998 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:21:34.010006 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:21:34.010014 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:21:34.010022 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:21:34.010031 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:21:34.010038 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:21:34.010046 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:21:34.010054 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 10:21:34.010065 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 21 10:21:34.010073 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 21 10:21:34.010081 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 21 10:21:34.010089 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 10:21:34.010097 systemd[1]: Reached target sockets.target - Socket Units. Apr 21 10:21:34.010105 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 21 10:21:34.010113 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 10:21:34.010121 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 21 10:21:34.010130 systemd[1]: Starting systemd-fsck-usr.service... Apr 21 10:21:34.010140 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 21 10:21:34.010149 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 21 10:21:34.010156 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:21:34.010191 systemd-journald[178]: Collecting audit messages is disabled. Apr 21 10:21:34.010432 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 21 10:21:34.010444 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:21:34.010454 systemd-journald[178]: Journal started Apr 21 10:21:34.010476 systemd-journald[178]: Runtime Journal (/run/log/journal/bdb52d22208f4558a14044af10cf8ca4) is 8.0M, max 78.3M, 70.3M free. Apr 21 10:21:34.001255 systemd-modules-load[179]: Inserted module 'overlay' Apr 21 10:21:34.016233 systemd[1]: Started systemd-journald.service - Journal Service. Apr 21 10:21:34.020428 systemd[1]: Finished systemd-fsck-usr.service. Apr 21 10:21:34.035232 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Apr 21 10:21:34.036984 systemd-modules-load[179]: Inserted module 'br_netfilter' Apr 21 10:21:34.123728 kernel: Bridge firewalling registered Apr 21 10:21:34.037385 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 21 10:21:34.134360 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 10:21:34.138782 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 21 10:21:34.152523 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:21:34.155106 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 21 10:21:34.164475 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:21:34.167508 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 10:21:34.181868 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 10:21:34.196310 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:21:34.199314 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:21:34.208658 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 21 10:21:34.212234 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:21:34.214409 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 10:21:34.224846 dracut-cmdline[207]: dracut-dracut-053 Apr 21 10:21:34.226384 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 21 10:21:34.230834 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a Apr 21 10:21:34.262446 systemd-resolved[216]: Positive Trust Anchors: Apr 21 10:21:34.262466 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 10:21:34.262497 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 10:21:34.267099 systemd-resolved[216]: Defaulting to hostname 'linux'. Apr 21 10:21:34.270925 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 10:21:34.272285 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:21:34.331263 kernel: SCSI subsystem initialized Apr 21 10:21:34.341248 kernel: Loading iSCSI transport class v2.0-870. Apr 21 10:21:34.354246 kernel: iscsi: registered transport (tcp) Apr 21 10:21:34.378350 kernel: iscsi: registered transport (qla4xxx) Apr 21 10:21:34.378410 kernel: QLogic iSCSI HBA Driver Apr 21 10:21:34.426864 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Apr 21 10:21:34.432355 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 21 10:21:34.473106 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 21 10:21:34.473193 kernel: device-mapper: uevent: version 1.0.3 Apr 21 10:21:34.473234 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 21 10:21:34.519245 kernel: raid6: avx2x4 gen() 30641 MB/s Apr 21 10:21:34.537299 kernel: raid6: avx2x2 gen() 29021 MB/s Apr 21 10:21:34.555360 kernel: raid6: avx2x1 gen() 23516 MB/s Apr 21 10:21:34.555446 kernel: raid6: using algorithm avx2x4 gen() 30641 MB/s Apr 21 10:21:34.575559 kernel: raid6: .... xor() 4628 MB/s, rmw enabled Apr 21 10:21:34.575634 kernel: raid6: using avx2x2 recovery algorithm Apr 21 10:21:34.599248 kernel: xor: automatically using best checksumming function avx Apr 21 10:21:34.733252 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 21 10:21:34.747696 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 21 10:21:34.754531 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:21:34.771279 systemd-udevd[395]: Using default interface naming scheme 'v255'. Apr 21 10:21:34.776276 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:21:34.783554 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 21 10:21:34.804514 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation Apr 21 10:21:34.836758 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 10:21:34.842468 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 10:21:34.918392 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:21:34.928367 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 21 10:21:34.943138 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 21 10:21:34.948652 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 10:21:34.950437 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 10:21:34.952487 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 10:21:34.959647 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 21 10:21:34.974132 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 21 10:21:34.997327 kernel: cryptd: max_cpu_qlen set to 1000 Apr 21 10:21:35.024238 kernel: scsi host0: Virtio SCSI HBA Apr 21 10:21:35.029896 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 10:21:35.030612 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:21:35.036995 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 21 10:21:35.035678 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:21:35.040338 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:21:35.040659 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:21:35.279721 kernel: libata version 3.00 loaded. Apr 21 10:21:35.279753 kernel: AVX2 version of gcm_enc/dec engaged. Apr 21 10:21:35.279764 kernel: AES CTR mode by8 optimization enabled Apr 21 10:21:35.043119 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:21:35.237033 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 21 10:21:35.323232 kernel: ahci 0000:00:1f.2: version 3.0 Apr 21 10:21:35.323498 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 21 10:21:35.325221 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 21 10:21:35.325413 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 21 10:21:35.330255 kernel: scsi host1: ahci Apr 21 10:21:35.332517 kernel: scsi host2: ahci Apr 21 10:21:35.332695 kernel: scsi host3: ahci Apr 21 10:21:35.332858 kernel: scsi host4: ahci Apr 21 10:21:35.334512 kernel: scsi host5: ahci Apr 21 10:21:35.335838 kernel: scsi host6: ahci Apr 21 10:21:35.336023 kernel: sd 0:0:0:0: Power-on or device reset occurred Apr 21 10:21:35.336234 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 Apr 21 10:21:35.336247 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 Apr 21 10:21:35.336257 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 Apr 21 10:21:35.336267 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 Apr 21 10:21:35.336277 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 Apr 21 10:21:35.336287 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 Apr 21 10:21:35.336297 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Apr 21 10:21:35.336523 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 21 10:21:35.336704 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Apr 21 10:21:35.336860 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 21 10:21:35.342185 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 21 10:21:35.342232 kernel: GPT:9289727 != 167739391 Apr 21 10:21:35.342244 kernel: GPT:Alternate GPT header not at the end of the disk. 
Apr 21 10:21:35.342255 kernel: GPT:9289727 != 167739391 Apr 21 10:21:35.342265 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 21 10:21:35.342275 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:21:35.342285 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 21 10:21:35.463701 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:21:35.481585 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:21:35.505939 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:21:35.652245 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 21 10:21:35.652369 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 21 10:21:35.652383 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 21 10:21:35.652394 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 21 10:21:35.654226 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 21 10:21:35.659238 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 21 10:21:35.708285 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (454) Apr 21 10:21:35.712239 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (446) Apr 21 10:21:35.719976 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 21 10:21:35.726257 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 21 10:21:35.732423 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 21 10:21:35.737945 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 21 10:21:35.738973 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. 
Apr 21 10:21:35.746388 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 21 10:21:35.766363 disk-uuid[569]: Primary Header is updated. Apr 21 10:21:35.766363 disk-uuid[569]: Secondary Entries is updated. Apr 21 10:21:35.766363 disk-uuid[569]: Secondary Header is updated. Apr 21 10:21:35.772227 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:21:35.781221 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:21:36.782274 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:21:36.783224 disk-uuid[570]: The operation has completed successfully. Apr 21 10:21:36.836525 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 21 10:21:36.836665 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 21 10:21:36.852374 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 21 10:21:36.856664 sh[584]: Success Apr 21 10:21:36.874237 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 21 10:21:36.918723 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 21 10:21:36.935311 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 21 10:21:36.937512 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 21 10:21:36.958460 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539 Apr 21 10:21:36.958494 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:21:36.961959 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 21 10:21:36.967509 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 21 10:21:36.967530 kernel: BTRFS info (device dm-0): using free space tree Apr 21 10:21:36.979245 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 21 10:21:36.982492 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Apr 21 10:21:36.984493 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 21 10:21:36.991555 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 21 10:21:37.007515 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 21 10:21:37.030229 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:21:37.030288 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:21:37.034714 kernel: BTRFS info (device sda6): using free space tree Apr 21 10:21:37.042415 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 21 10:21:37.042445 kernel: BTRFS info (device sda6): auto enabling async discard Apr 21 10:21:37.057735 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 21 10:21:37.062867 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:21:37.070473 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 21 10:21:37.078533 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 21 10:21:37.134613 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 10:21:37.144378 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 21 10:21:37.174472 ignition[703]: Ignition 2.19.0 Apr 21 10:21:37.174489 ignition[703]: Stage: fetch-offline Apr 21 10:21:37.174534 ignition[703]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:21:37.174546 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 21 10:21:37.174645 ignition[703]: parsed url from cmdline: "" Apr 21 10:21:37.179345 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 21 10:21:37.174650 ignition[703]: no config URL provided Apr 21 10:21:37.179671 systemd-networkd[766]: lo: Link UP Apr 21 10:21:37.174656 ignition[703]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 10:21:37.179676 systemd-networkd[766]: lo: Gained carrier Apr 21 10:21:37.174667 ignition[703]: no config at "/usr/lib/ignition/user.ign" Apr 21 10:21:37.181845 systemd-networkd[766]: Enumeration completed Apr 21 10:21:37.174672 ignition[703]: failed to fetch config: resource requires networking Apr 21 10:21:37.182769 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:21:37.174848 ignition[703]: Ignition finished successfully Apr 21 10:21:37.182774 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 10:21:37.184004 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 10:21:37.185053 systemd-networkd[766]: eth0: Link UP Apr 21 10:21:37.185058 systemd-networkd[766]: eth0: Gained carrier Apr 21 10:21:37.185066 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:21:37.185451 systemd[1]: Reached target network.target - Network. Apr 21 10:21:37.192463 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 21 10:21:37.210277 ignition[773]: Ignition 2.19.0 Apr 21 10:21:37.210292 ignition[773]: Stage: fetch Apr 21 10:21:37.210592 ignition[773]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:21:37.210608 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 21 10:21:37.210750 ignition[773]: parsed url from cmdline: "" Apr 21 10:21:37.210756 ignition[773]: no config URL provided Apr 21 10:21:37.210763 ignition[773]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 10:21:37.210775 ignition[773]: no config at "/usr/lib/ignition/user.ign" Apr 21 10:21:37.210803 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #1 Apr 21 10:21:37.211035 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 21 10:21:37.411260 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #2 Apr 21 10:21:37.411443 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 21 10:21:37.811854 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #3 Apr 21 10:21:37.812276 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 21 10:21:37.914282 systemd-networkd[766]: eth0: DHCPv4 address 172.234.196.117/24, gateway 172.234.196.1 acquired from 23.213.15.244 Apr 21 10:21:38.612455 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #4 Apr 21 10:21:38.709034 ignition[773]: PUT result: OK Apr 21 10:21:38.709747 ignition[773]: GET http://169.254.169.254/v1/user-data: attempt #1 Apr 21 10:21:38.818266 ignition[773]: GET result: OK Apr 21 10:21:38.818386 ignition[773]: parsing config with SHA512: ed25d5f9842617a76b51dbdc625804f2e529de8d467d45d7f6f10b1c3048dd23c4667dc3d54850a2495e518c80db4862fe6f82e08a8d1ef6b914fea3109e15f6 Apr 21 10:21:38.821689 unknown[773]: fetched base config from "system" Apr 21 10:21:38.821707 
unknown[773]: fetched base config from "system" Apr 21 10:21:38.822026 ignition[773]: fetch: fetch complete Apr 21 10:21:38.821714 unknown[773]: fetched user config from "akamai" Apr 21 10:21:38.822032 ignition[773]: fetch: fetch passed Apr 21 10:21:38.824529 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 21 10:21:38.822072 ignition[773]: Ignition finished successfully Apr 21 10:21:38.830395 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 21 10:21:38.845129 ignition[781]: Ignition 2.19.0 Apr 21 10:21:38.845143 ignition[781]: Stage: kargs Apr 21 10:21:38.845506 ignition[781]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:21:38.845518 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 21 10:21:38.848947 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 21 10:21:38.846389 ignition[781]: kargs: kargs passed Apr 21 10:21:38.846440 ignition[781]: Ignition finished successfully Apr 21 10:21:38.864372 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 21 10:21:38.878834 ignition[787]: Ignition 2.19.0 Apr 21 10:21:38.878848 ignition[787]: Stage: disks Apr 21 10:21:38.879000 ignition[787]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:21:38.879015 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 21 10:21:38.881148 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 21 10:21:38.879688 ignition[787]: disks: disks passed Apr 21 10:21:38.905160 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 21 10:21:38.879729 ignition[787]: Ignition finished successfully Apr 21 10:21:38.906230 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 21 10:21:38.907692 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 21 10:21:38.909046 systemd[1]: Reached target sysinit.target - System Initialization. 
Apr 21 10:21:38.910655 systemd[1]: Reached target basic.target - Basic System. Apr 21 10:21:38.917384 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 21 10:21:38.934518 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 21 10:21:38.938415 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 21 10:21:38.945518 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 21 10:21:39.036344 kernel: EXT4-fs (sda9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none. Apr 21 10:21:39.036951 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 21 10:21:39.038785 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 21 10:21:39.044264 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 10:21:39.047312 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 21 10:21:39.050144 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 21 10:21:39.050187 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 21 10:21:39.050981 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 21 10:21:39.061229 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (804) Apr 21 10:21:39.065422 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:21:39.065446 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:21:39.066586 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Apr 21 10:21:39.070610 kernel: BTRFS info (device sda6): using free space tree Apr 21 10:21:39.077650 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 21 10:21:39.077683 kernel: BTRFS info (device sda6): auto enabling async discard Apr 21 10:21:39.079337 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 21 10:21:39.081698 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 21 10:21:39.129556 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory Apr 21 10:21:39.135359 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory Apr 21 10:21:39.140891 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory Apr 21 10:21:39.147624 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory Apr 21 10:21:39.158611 systemd-networkd[766]: eth0: Gained IPv6LL Apr 21 10:21:39.245916 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 21 10:21:39.255445 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 21 10:21:39.258399 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 21 10:21:39.267051 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 21 10:21:39.270934 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:21:39.290496 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 21 10:21:39.297831 ignition[921]: INFO : Ignition 2.19.0 Apr 21 10:21:39.297831 ignition[921]: INFO : Stage: mount Apr 21 10:21:39.301291 ignition[921]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:21:39.301291 ignition[921]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 21 10:21:39.301291 ignition[921]: INFO : mount: mount passed Apr 21 10:21:39.301291 ignition[921]: INFO : Ignition finished successfully Apr 21 10:21:39.302052 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Apr 21 10:21:39.310336 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 21 10:21:40.050399 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 10:21:40.064272 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (933) Apr 21 10:21:40.069445 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:21:40.069479 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:21:40.072523 kernel: BTRFS info (device sda6): using free space tree Apr 21 10:21:40.079784 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 21 10:21:40.079836 kernel: BTRFS info (device sda6): auto enabling async discard Apr 21 10:21:40.085588 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 21 10:21:40.111936 ignition[950]: INFO : Ignition 2.19.0 Apr 21 10:21:40.113024 ignition[950]: INFO : Stage: files Apr 21 10:21:40.113918 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:21:40.114853 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 21 10:21:40.117252 ignition[950]: DEBUG : files: compiled without relabeling support, skipping Apr 21 10:21:40.119545 ignition[950]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 21 10:21:40.119545 ignition[950]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 21 10:21:40.122957 ignition[950]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 21 10:21:40.124410 ignition[950]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 21 10:21:40.125956 ignition[950]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 21 10:21:40.125117 unknown[950]: wrote ssh authorized keys file for user: core Apr 21 10:21:40.128446 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): 
[started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 21 10:21:40.129816 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 21 10:21:40.359777 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 21 10:21:40.395540 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 21 10:21:40.395540 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] 
writing file "/sysroot/etc/flatcar/update.conf" Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Apr 21 10:21:40.905583 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 21 10:21:41.161665 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 21 10:21:41.161665 ignition[950]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 21 10:21:41.166429 ignition[950]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 21 10:21:41.166429 ignition[950]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 21 10:21:41.166429 ignition[950]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 21 10:21:41.166429 ignition[950]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 21 10:21:41.166429 ignition[950]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at 
"/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 21 10:21:41.166429 ignition[950]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 21 10:21:41.166429 ignition[950]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 21 10:21:41.166429 ignition[950]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Apr 21 10:21:41.166429 ignition[950]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Apr 21 10:21:41.166429 ignition[950]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 21 10:21:41.166429 ignition[950]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 21 10:21:41.166429 ignition[950]: INFO : files: files passed Apr 21 10:21:41.166429 ignition[950]: INFO : Ignition finished successfully Apr 21 10:21:41.166710 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 21 10:21:41.200358 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 21 10:21:41.203319 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 21 10:21:41.205529 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 21 10:21:41.205641 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Apr 21 10:21:41.220314 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 21 10:21:41.222018 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 21 10:21:41.223732 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 21 10:21:41.225195 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 21 10:21:41.226473 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 21 10:21:41.239329 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 21 10:21:41.269082 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 21 10:21:41.269252 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 21 10:21:41.271028 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 21 10:21:41.272924 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 21 10:21:41.274620 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 21 10:21:41.280339 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 21 10:21:41.294719 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 21 10:21:41.300398 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 21 10:21:41.311569 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:21:41.312529 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 10:21:41.314428 systemd[1]: Stopped target timers.target - Timer Units. Apr 21 10:21:41.315988 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Apr 21 10:21:41.316115 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 21 10:21:41.319013 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 21 10:21:41.320076 systemd[1]: Stopped target basic.target - Basic System. Apr 21 10:21:41.321698 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 21 10:21:41.323124 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 21 10:21:41.324846 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 21 10:21:41.326498 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 21 10:21:41.328229 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 10:21:41.330156 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 21 10:21:41.331861 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 21 10:21:41.333715 systemd[1]: Stopped target swap.target - Swaps. Apr 21 10:21:41.335149 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 21 10:21:41.335388 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 21 10:21:41.336997 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 21 10:21:41.338181 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 10:21:41.339847 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 21 10:21:41.339973 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 10:21:41.341567 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 21 10:21:41.341681 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 21 10:21:41.343814 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Apr 21 10:21:41.343935 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 21 10:21:41.344926 systemd[1]: ignition-files.service: Deactivated successfully. Apr 21 10:21:41.345245 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 21 10:21:41.356391 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 21 10:21:41.357242 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 21 10:21:41.357395 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:21:41.364401 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 21 10:21:41.365900 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 21 10:21:41.366993 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:21:41.367873 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 21 10:21:41.373313 ignition[1002]: INFO : Ignition 2.19.0 Apr 21 10:21:41.373313 ignition[1002]: INFO : Stage: umount Apr 21 10:21:41.373313 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:21:41.373313 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 21 10:21:41.373313 ignition[1002]: INFO : umount: umount passed Apr 21 10:21:41.373313 ignition[1002]: INFO : Ignition finished successfully Apr 21 10:21:41.367976 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 10:21:41.373730 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 21 10:21:41.373842 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 21 10:21:41.381413 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 21 10:21:41.381547 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 21 10:21:41.384837 systemd[1]: ignition-disks.service: Deactivated successfully. 
Apr 21 10:21:41.384891 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 21 10:21:41.385718 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 21 10:21:41.385770 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 21 10:21:41.387741 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 21 10:21:41.387794 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 21 10:21:41.391290 systemd[1]: Stopped target network.target - Network. Apr 21 10:21:41.392335 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 21 10:21:41.392393 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 10:21:41.395595 systemd[1]: Stopped target paths.target - Path Units. Apr 21 10:21:41.398198 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 21 10:21:41.424475 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 10:21:41.425597 systemd[1]: Stopped target slices.target - Slice Units. Apr 21 10:21:41.427261 systemd[1]: Stopped target sockets.target - Socket Units. Apr 21 10:21:41.428975 systemd[1]: iscsid.socket: Deactivated successfully. Apr 21 10:21:41.429031 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 21 10:21:41.430823 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 21 10:21:41.430877 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 21 10:21:41.432453 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 21 10:21:41.432509 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 21 10:21:41.434143 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 21 10:21:41.434194 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 21 10:21:41.436308 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Apr 21 10:21:41.438380 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 21 10:21:41.441774 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 21 10:21:41.442670 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 21 10:21:41.442779 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 21 10:21:41.443423 systemd-networkd[766]: eth0: DHCPv6 lease lost Apr 21 10:21:41.445519 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 21 10:21:41.445607 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 21 10:21:41.447694 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 21 10:21:41.447833 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 21 10:21:41.451756 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 21 10:21:41.451917 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 21 10:21:41.454973 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 21 10:21:41.455018 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 21 10:21:41.462368 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 21 10:21:41.463590 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 21 10:21:41.463647 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 10:21:41.465274 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 21 10:21:41.465325 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:21:41.466963 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 21 10:21:41.467016 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 21 10:21:41.467776 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Apr 21 10:21:41.467825 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:21:41.469593 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:21:41.485729 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 21 10:21:41.485908 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:21:41.488460 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 21 10:21:41.488569 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 21 10:21:41.490188 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 21 10:21:41.490282 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 21 10:21:41.491452 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 21 10:21:41.491496 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 10:21:41.492866 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 21 10:21:41.492922 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 21 10:21:41.494992 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 21 10:21:41.495043 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 21 10:21:41.496467 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 10:21:41.496518 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:21:41.506465 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 21 10:21:41.507321 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 21 10:21:41.507379 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 10:21:41.511570 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Apr 21 10:21:41.511627 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:21:41.512941 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 21 10:21:41.513053 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 21 10:21:41.514877 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 21 10:21:41.523530 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 21 10:21:41.531370 systemd[1]: Switching root. Apr 21 10:21:41.563677 systemd-journald[178]: Journal stopped Apr 21 10:21:33.992980 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026 Apr 21 10:21:33.993007 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a Apr 21 10:21:33.993015 kernel: BIOS-provided physical RAM map: Apr 21 10:21:33.993022 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable Apr 21 10:21:33.993028 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved Apr 21 10:21:33.993036 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Apr 21 10:21:33.993043 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Apr 21 10:21:33.993050 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Apr 21 10:21:33.993056 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 21 10:21:33.993061 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Apr 21 10:21:33.993068 kernel: 
BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Apr 21 10:21:33.993078 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Apr 21 10:21:33.993085 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Apr 21 10:21:33.993094 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Apr 21 10:21:33.993101 kernel: NX (Execute Disable) protection: active Apr 21 10:21:33.993110 kernel: APIC: Static calls initialized Apr 21 10:21:33.993118 kernel: SMBIOS 2.8 present. Apr 21 10:21:33.993125 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified Apr 21 10:21:33.993131 kernel: Hypervisor detected: KVM Apr 21 10:21:33.993145 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 21 10:21:33.993155 kernel: kvm-clock: using sched offset of 5875010040 cycles Apr 21 10:21:33.993166 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 21 10:21:33.993174 kernel: tsc: Detected 2000.000 MHz processor Apr 21 10:21:33.993181 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 21 10:21:33.993187 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 21 10:21:33.993194 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 Apr 21 10:21:33.993201 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Apr 21 10:21:33.993231 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 21 10:21:33.993245 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Apr 21 10:21:33.993252 kernel: Using GB pages for direct mapping Apr 21 10:21:33.993262 kernel: ACPI: Early table checksum verification disabled Apr 21 10:21:33.993269 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS ) Apr 21 10:21:33.993276 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 21 10:21:33.993283 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS 
BXPC 00000001 BXPC 00000001) Apr 21 10:21:33.993289 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 21 10:21:33.993295 kernel: ACPI: FACS 0x000000007FFE0000 000040 Apr 21 10:21:33.993302 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 21 10:21:33.993311 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 21 10:21:33.993319 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 21 10:21:33.993330 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 21 10:21:33.993346 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] Apr 21 10:21:33.993353 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] Apr 21 10:21:33.993360 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Apr 21 10:21:33.993370 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] Apr 21 10:21:33.993377 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] Apr 21 10:21:33.993383 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] Apr 21 10:21:33.993390 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] Apr 21 10:21:33.993397 kernel: No NUMA configuration found Apr 21 10:21:33.993404 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Apr 21 10:21:33.993410 kernel: NODE_DATA(0) allocated [mem 0x17fff8000-0x17fffdfff] Apr 21 10:21:33.993417 kernel: Zone ranges: Apr 21 10:21:33.993427 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 21 10:21:33.993434 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 21 10:21:33.993444 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Apr 21 10:21:33.993454 kernel: Movable zone start for each node Apr 21 10:21:33.993461 kernel: Early memory node ranges Apr 21 10:21:33.993472 kernel: node 0: [mem 
0x0000000000001000-0x000000000009efff] Apr 21 10:21:33.993478 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Apr 21 10:21:33.993485 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Apr 21 10:21:33.993492 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Apr 21 10:21:33.993502 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 21 10:21:33.993512 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 21 10:21:33.993518 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Apr 21 10:21:33.993525 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 21 10:21:33.993532 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 21 10:21:33.993538 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Apr 21 10:21:33.993545 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 21 10:21:33.993552 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 21 10:21:33.993559 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 21 10:21:33.993565 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 21 10:21:33.993575 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 21 10:21:33.993582 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 21 10:21:33.993593 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 21 10:21:33.993604 kernel: TSC deadline timer available Apr 21 10:21:33.993615 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 21 10:21:33.993623 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 21 10:21:33.993630 kernel: kvm-guest: KVM setup pv remote TLB flush Apr 21 10:21:33.993637 kernel: kvm-guest: setup PV sched yield Apr 21 10:21:33.993643 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Apr 21 10:21:33.993658 kernel: Booting paravirtualized kernel on KVM Apr 21 10:21:33.993669 kernel: clocksource: refined-jiffies: mask: 
0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 21 10:21:33.993677 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 21 10:21:33.993684 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Apr 21 10:21:33.993690 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Apr 21 10:21:33.993697 kernel: pcpu-alloc: [0] 0 1 Apr 21 10:21:33.993706 kernel: kvm-guest: PV spinlocks enabled Apr 21 10:21:33.993714 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 21 10:21:33.993724 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a Apr 21 10:21:33.993741 kernel: random: crng init done Apr 21 10:21:33.993751 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 21 10:21:33.993763 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 21 10:21:33.993804 kernel: Fallback order for Node 0: 0 Apr 21 10:21:33.993815 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Apr 21 10:21:33.993822 kernel: Policy zone: Normal Apr 21 10:21:33.993833 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 21 10:21:33.993844 kernel: software IO TLB: area num 2. 
Apr 21 10:21:33.993860 kernel: Memory: 3966212K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 227300K reserved, 0K cma-reserved) Apr 21 10:21:33.993872 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 21 10:21:33.993881 kernel: ftrace: allocating 37996 entries in 149 pages Apr 21 10:21:33.993891 kernel: ftrace: allocated 149 pages with 4 groups Apr 21 10:21:33.993902 kernel: Dynamic Preempt: voluntary Apr 21 10:21:33.993914 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 21 10:21:33.993922 kernel: rcu: RCU event tracing is enabled. Apr 21 10:21:33.993930 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 21 10:21:33.993940 kernel: Trampoline variant of Tasks RCU enabled. Apr 21 10:21:33.993950 kernel: Rude variant of Tasks RCU enabled. Apr 21 10:21:33.993957 kernel: Tracing variant of Tasks RCU enabled. Apr 21 10:21:33.993963 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 21 10:21:33.993973 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 21 10:21:33.993982 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Apr 21 10:21:33.993994 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 21 10:21:33.994004 kernel: Console: colour VGA+ 80x25 Apr 21 10:21:33.994010 kernel: printk: console [tty0] enabled Apr 21 10:21:33.994017 kernel: printk: console [ttyS0] enabled Apr 21 10:21:33.994028 kernel: ACPI: Core revision 20230628 Apr 21 10:21:33.994039 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Apr 21 10:21:33.994047 kernel: APIC: Switch to symmetric I/O mode setup Apr 21 10:21:33.994058 kernel: x2apic enabled Apr 21 10:21:33.994075 kernel: APIC: Switched APIC routing to: physical x2apic Apr 21 10:21:33.994085 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Apr 21 10:21:33.994092 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Apr 21 10:21:33.994098 kernel: kvm-guest: setup PV IPIs Apr 21 10:21:33.994110 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 21 10:21:33.994121 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Apr 21 10:21:33.994132 kernel: Calibrating delay loop (skipped) preset value.. 
4000.00 BogoMIPS (lpj=2000000) Apr 21 10:21:33.994144 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 21 10:21:33.994155 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Apr 21 10:21:33.994162 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Apr 21 10:21:33.994172 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 21 10:21:33.994183 kernel: Spectre V2 : Mitigation: Retpolines Apr 21 10:21:33.994195 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 21 10:21:33.994776 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Apr 21 10:21:33.994788 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 21 10:21:33.994796 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Apr 21 10:21:33.994803 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Apr 21 10:21:33.994811 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Apr 21 10:21:33.994818 kernel: active return thunk: srso_alias_return_thunk Apr 21 10:21:33.994825 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Apr 21 10:21:33.994831 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Apr 21 10:21:33.994847 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Apr 21 10:21:33.994859 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 21 10:21:33.994871 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 21 10:21:33.994882 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 21 10:21:33.994889 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Apr 21 10:21:33.994895 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 21 10:21:33.994907 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Apr 21 10:21:33.994914 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. Apr 21 10:21:33.994921 kernel: Freeing SMP alternatives memory: 32K Apr 21 10:21:33.994931 kernel: pid_max: default: 32768 minimum: 301 Apr 21 10:21:33.994938 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 21 10:21:33.994944 kernel: landlock: Up and running. Apr 21 10:21:33.994951 kernel: SELinux: Initializing. Apr 21 10:21:33.994962 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 21 10:21:33.994972 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 21 10:21:33.994979 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Apr 21 10:21:33.994986 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 21 10:21:33.994993 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Apr 21 10:21:33.995002 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 21 10:21:33.995021 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Apr 21 10:21:33.995034 kernel: ... version: 0 Apr 21 10:21:33.995042 kernel: ... bit width: 48 Apr 21 10:21:33.995081 kernel: ... generic registers: 6 Apr 21 10:21:33.995088 kernel: ... value mask: 0000ffffffffffff Apr 21 10:21:33.995094 kernel: ... max period: 00007fffffffffff Apr 21 10:21:33.995105 kernel: ... fixed-purpose events: 0 Apr 21 10:21:33.995117 kernel: ... event mask: 000000000000003f Apr 21 10:21:33.995134 kernel: signal: max sigframe size: 3376 Apr 21 10:21:33.995146 kernel: rcu: Hierarchical SRCU implementation. Apr 21 10:21:33.995155 kernel: rcu: Max phase no-delay instances is 400. Apr 21 10:21:33.995163 kernel: smp: Bringing up secondary CPUs ... Apr 21 10:21:33.995169 kernel: smpboot: x86: Booting SMP configuration: Apr 21 10:21:33.995176 kernel: .... node #0, CPUs: #1 Apr 21 10:21:33.995183 kernel: smp: Brought up 1 node, 2 CPUs Apr 21 10:21:33.995190 kernel: smpboot: Max logical packages: 1 Apr 21 10:21:33.995196 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) Apr 21 10:21:33.995230 kernel: devtmpfs: initialized Apr 21 10:21:33.995238 kernel: x86/mm: Memory block size: 128MB Apr 21 10:21:33.995245 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 21 10:21:33.995251 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 21 10:21:33.995258 kernel: pinctrl core: initialized pinctrl subsystem Apr 21 10:21:33.995265 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 21 10:21:33.995275 kernel: audit: initializing netlink subsys (disabled) Apr 21 10:21:33.995287 kernel: audit: type=2000 audit(1776766893.317:1): state=initialized audit_enabled=0 res=1 Apr 21 10:21:33.995297 kernel: thermal_sys: Registered thermal governor 'step_wise' 
Apr 21 10:21:33.995307 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 10:21:33.995314 kernel: cpuidle: using governor menu
Apr 21 10:21:33.995321 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 10:21:33.995327 kernel: dca service started, version 1.12.1
Apr 21 10:21:33.995338 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 21 10:21:33.995350 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 21 10:21:33.995361 kernel: PCI: Using configuration type 1 for base access
Apr 21 10:21:33.995368 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 10:21:33.995375 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 10:21:33.995385 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 10:21:33.995391 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 10:21:33.995398 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 10:21:33.995405 kernel: ACPI: Added _OSI(Module Device)
Apr 21 10:21:33.995415 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 10:21:33.995427 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 10:21:33.995438 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 10:21:33.995450 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 21 10:21:33.995461 kernel: ACPI: Interpreter enabled
Apr 21 10:21:33.995472 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 21 10:21:33.995479 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 10:21:33.995488 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 10:21:33.995496 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 10:21:33.995503 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 21 10:21:33.995509 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 10:21:33.995786 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 10:21:33.995965 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 21 10:21:33.996145 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 21 10:21:33.996163 kernel: PCI host bridge to bus 0000:00
Apr 21 10:21:33.996355 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 10:21:33.996518 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 10:21:33.996666 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 10:21:33.996814 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Apr 21 10:21:33.996997 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 21 10:21:33.997158 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Apr 21 10:21:33.997336 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 10:21:33.997556 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 21 10:21:33.997748 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 21 10:21:33.997951 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 21 10:21:33.998285 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 21 10:21:33.998463 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 21 10:21:33.998627 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 10:21:33.998799 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Apr 21 10:21:33.998934 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Apr 21 10:21:33.999073 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 21 10:21:33.999386 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 21 10:21:34.000695 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Apr 21 10:21:34.000887 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Apr 21 10:21:34.001039 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 21 10:21:34.001175 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 21 10:21:34.002631 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 21 10:21:34.002785 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 21 10:21:34.002919 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 21 10:21:34.003067 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 21 10:21:34.006781 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Apr 21 10:21:34.006928 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Apr 21 10:21:34.007071 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 21 10:21:34.007199 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 21 10:21:34.007232 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 10:21:34.007241 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 10:21:34.007249 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 10:21:34.007262 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 10:21:34.007270 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 21 10:21:34.007278 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 21 10:21:34.007285 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 21 10:21:34.007293 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 21 10:21:34.007301 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 21 10:21:34.007309 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 21 10:21:34.007316 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 21 10:21:34.007324 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 21 10:21:34.007334 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 21 10:21:34.007342 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 21 10:21:34.007350 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 21 10:21:34.007357 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 21 10:21:34.007365 kernel: iommu: Default domain type: Translated
Apr 21 10:21:34.007373 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 10:21:34.007380 kernel: PCI: Using ACPI for IRQ routing
Apr 21 10:21:34.007388 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 21 10:21:34.007397 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Apr 21 10:21:34.007408 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Apr 21 10:21:34.007549 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 21 10:21:34.007679 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 21 10:21:34.007807 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 21 10:21:34.007818 kernel: vgaarb: loaded
Apr 21 10:21:34.007826 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 21 10:21:34.007834 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 21 10:21:34.007841 kernel: clocksource: Switched to clocksource kvm-clock
Apr 21 10:21:34.007849 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 10:21:34.007862 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 10:21:34.007869 kernel: pnp: PnP ACPI init
Apr 21 10:21:34.008014 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 21 10:21:34.008027 kernel: pnp: PnP ACPI: found 5 devices
Apr 21 10:21:34.008035 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 10:21:34.008043 kernel: NET: Registered PF_INET protocol family
Apr 21 10:21:34.008050 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 10:21:34.008058 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 10:21:34.008070 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 10:21:34.008078 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 10:21:34.008085 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 21 10:21:34.008093 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 21 10:21:34.008101 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:21:34.008109 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:21:34.008117 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 10:21:34.008124 kernel: NET: Registered PF_XDP protocol family
Apr 21 10:21:34.008343 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 21 10:21:34.008466 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 21 10:21:34.008582 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 21 10:21:34.008696 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Apr 21 10:21:34.008809 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 21 10:21:34.008925 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Apr 21 10:21:34.008934 kernel: PCI: CLS 0 bytes, default 64
Apr 21 10:21:34.008942 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 21 10:21:34.008950 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Apr 21 10:21:34.008963 kernel: Initialise system trusted keyrings
Apr 21 10:21:34.008970 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 21 10:21:34.008977 kernel: Key type asymmetric registered
Apr 21 10:21:34.008984 kernel: Asymmetric key parser 'x509' registered
Apr 21 10:21:34.008991 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 21 10:21:34.008998 kernel: io scheduler mq-deadline registered
Apr 21 10:21:34.009006 kernel: io scheduler kyber registered
Apr 21 10:21:34.009014 kernel: io scheduler bfq registered
Apr 21 10:21:34.009021 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 21 10:21:34.009032 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 21 10:21:34.009039 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 21 10:21:34.009047 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 10:21:34.009055 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 21 10:21:34.009062 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 21 10:21:34.009069 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 21 10:21:34.009077 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 21 10:21:34.009238 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 21 10:21:34.009256 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 21 10:21:34.009378 kernel: rtc_cmos 00:03: registered as rtc0
Apr 21 10:21:34.009496 kernel: rtc_cmos 00:03: setting system clock to 2026-04-21T10:21:33 UTC (1776766893)
Apr 21 10:21:34.009614 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 21 10:21:34.009624 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 21 10:21:34.009632 kernel: NET: Registered PF_INET6 protocol family
Apr 21 10:21:34.009640 kernel: Segment Routing with IPv6
Apr 21 10:21:34.009647 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 10:21:34.009655 kernel: NET: Registered PF_PACKET protocol family
Apr 21 10:21:34.009666 kernel: Key type dns_resolver registered
Apr 21 10:21:34.009674 kernel: IPI shorthand broadcast: enabled
Apr 21 10:21:34.009681 kernel: sched_clock: Marking stable (915003330, 353846040)->(1417921490, -149072120)
Apr 21 10:21:34.009688 kernel: registered taskstats version 1
Apr 21 10:21:34.009696 kernel: Loading compiled-in X.509 certificates
Apr 21 10:21:34.009703 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b'
Apr 21 10:21:34.009711 kernel: Key type .fscrypt registered
Apr 21 10:21:34.009718 kernel: Key type fscrypt-provisioning registered
Apr 21 10:21:34.009726 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 21 10:21:34.009736 kernel: ima: Allocated hash algorithm: sha1
Apr 21 10:21:34.009743 kernel: ima: No architecture policies found
Apr 21 10:21:34.009751 kernel: clk: Disabling unused clocks
Apr 21 10:21:34.009758 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 21 10:21:34.009765 kernel: Write protecting the kernel read-only data: 36864k
Apr 21 10:21:34.009772 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 21 10:21:34.009780 kernel: Run /init as init process
Apr 21 10:21:34.009787 kernel: with arguments:
Apr 21 10:21:34.009795 kernel: /init
Apr 21 10:21:34.009805 kernel: with environment:
Apr 21 10:21:34.009812 kernel: HOME=/
Apr 21 10:21:34.009820 kernel: TERM=linux
Apr 21 10:21:34.009830 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:21:34.009841 systemd[1]: Detected virtualization kvm.
Apr 21 10:21:34.009849 systemd[1]: Detected architecture x86-64.
Apr 21 10:21:34.009857 systemd[1]: Running in initrd.
Apr 21 10:21:34.009867 systemd[1]: No hostname configured, using default hostname.
Apr 21 10:21:34.009874 systemd[1]: Hostname set to .
Apr 21 10:21:34.009883 systemd[1]: Initializing machine ID from random generator.
Apr 21 10:21:34.009891 systemd[1]: Queued start job for default target initrd.target.
Apr 21 10:21:34.009899 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:21:34.009923 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:21:34.009937 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 21 10:21:34.009945 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:21:34.009953 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 21 10:21:34.009961 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 21 10:21:34.009971 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 21 10:21:34.009979 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 21 10:21:34.009988 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:21:34.009998 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:21:34.010006 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:21:34.010014 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:21:34.010022 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:21:34.010031 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:21:34.010038 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:21:34.010046 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:21:34.010054 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 10:21:34.010065 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 10:21:34.010073 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:21:34.010081 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:21:34.010089 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:21:34.010097 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:21:34.010105 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 21 10:21:34.010113 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:21:34.010121 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 21 10:21:34.010130 systemd[1]: Starting systemd-fsck-usr.service...
Apr 21 10:21:34.010140 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:21:34.010149 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:21:34.010156 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:21:34.010191 systemd-journald[178]: Collecting audit messages is disabled.
Apr 21 10:21:34.010432 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 21 10:21:34.010444 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:21:34.010454 systemd-journald[178]: Journal started
Apr 21 10:21:34.010476 systemd-journald[178]: Runtime Journal (/run/log/journal/bdb52d22208f4558a14044af10cf8ca4) is 8.0M, max 78.3M, 70.3M free.
Apr 21 10:21:34.001255 systemd-modules-load[179]: Inserted module 'overlay'
Apr 21 10:21:34.016233 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:21:34.020428 systemd[1]: Finished systemd-fsck-usr.service.
Apr 21 10:21:34.035232 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 21 10:21:34.036984 systemd-modules-load[179]: Inserted module 'br_netfilter'
Apr 21 10:21:34.123728 kernel: Bridge firewalling registered
Apr 21 10:21:34.037385 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:21:34.134360 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:21:34.138782 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:21:34.152523 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:21:34.155106 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:21:34.164475 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:21:34.167508 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:21:34.181868 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:21:34.196310 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:21:34.199314 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:21:34.208658 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 21 10:21:34.212234 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:21:34.214409 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:21:34.224846 dracut-cmdline[207]: dracut-dracut-053
Apr 21 10:21:34.226384 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:21:34.230834 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:21:34.262446 systemd-resolved[216]: Positive Trust Anchors:
Apr 21 10:21:34.262466 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:21:34.262497 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:21:34.267099 systemd-resolved[216]: Defaulting to hostname 'linux'.
Apr 21 10:21:34.270925 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:21:34.272285 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:21:34.331263 kernel: SCSI subsystem initialized
Apr 21 10:21:34.341248 kernel: Loading iSCSI transport class v2.0-870.
Apr 21 10:21:34.354246 kernel: iscsi: registered transport (tcp)
Apr 21 10:21:34.378350 kernel: iscsi: registered transport (qla4xxx)
Apr 21 10:21:34.378410 kernel: QLogic iSCSI HBA Driver
Apr 21 10:21:34.426864 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:21:34.432355 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 21 10:21:34.473106 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 21 10:21:34.473193 kernel: device-mapper: uevent: version 1.0.3
Apr 21 10:21:34.473234 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 21 10:21:34.519245 kernel: raid6: avx2x4 gen() 30641 MB/s
Apr 21 10:21:34.537299 kernel: raid6: avx2x2 gen() 29021 MB/s
Apr 21 10:21:34.555360 kernel: raid6: avx2x1 gen() 23516 MB/s
Apr 21 10:21:34.555446 kernel: raid6: using algorithm avx2x4 gen() 30641 MB/s
Apr 21 10:21:34.575559 kernel: raid6: .... xor() 4628 MB/s, rmw enabled
Apr 21 10:21:34.575634 kernel: raid6: using avx2x2 recovery algorithm
Apr 21 10:21:34.599248 kernel: xor: automatically using best checksumming function avx
Apr 21 10:21:34.733252 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 21 10:21:34.747696 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:21:34.754531 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:21:34.771279 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Apr 21 10:21:34.776276 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:21:34.783554 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 21 10:21:34.804514 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation
Apr 21 10:21:34.836758 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:21:34.842468 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:21:34.918392 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:21:34.928367 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 21 10:21:34.943138 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:21:34.948652 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:21:34.950437 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:21:34.952487 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:21:34.959647 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 21 10:21:34.974132 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:21:34.997327 kernel: cryptd: max_cpu_qlen set to 1000
Apr 21 10:21:35.024238 kernel: scsi host0: Virtio SCSI HBA
Apr 21 10:21:35.029896 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:21:35.030612 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:21:35.036995 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 21 10:21:35.035678 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:21:35.040338 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:21:35.040659 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:21:35.279721 kernel: libata version 3.00 loaded.
Apr 21 10:21:35.279753 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 21 10:21:35.279764 kernel: AES CTR mode by8 optimization enabled
Apr 21 10:21:35.043119 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:21:35.237033 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:21:35.323232 kernel: ahci 0000:00:1f.2: version 3.0
Apr 21 10:21:35.323498 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 21 10:21:35.325221 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 21 10:21:35.325413 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 21 10:21:35.330255 kernel: scsi host1: ahci
Apr 21 10:21:35.332517 kernel: scsi host2: ahci
Apr 21 10:21:35.332695 kernel: scsi host3: ahci
Apr 21 10:21:35.332858 kernel: scsi host4: ahci
Apr 21 10:21:35.334512 kernel: scsi host5: ahci
Apr 21 10:21:35.335838 kernel: scsi host6: ahci
Apr 21 10:21:35.336023 kernel: sd 0:0:0:0: Power-on or device reset occurred
Apr 21 10:21:35.336234 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46
Apr 21 10:21:35.336247 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46
Apr 21 10:21:35.336257 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46
Apr 21 10:21:35.336267 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46
Apr 21 10:21:35.336277 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46
Apr 21 10:21:35.336287 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46
Apr 21 10:21:35.336297 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Apr 21 10:21:35.336523 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 21 10:21:35.336704 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Apr 21 10:21:35.336860 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 21 10:21:35.342185 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 21 10:21:35.342232 kernel: GPT:9289727 != 167739391
Apr 21 10:21:35.342244 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 21 10:21:35.342255 kernel: GPT:9289727 != 167739391
Apr 21 10:21:35.342265 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 21 10:21:35.342275 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 21 10:21:35.342285 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 21 10:21:35.463701 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:21:35.481585 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:21:35.505939 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:21:35.652245 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 21 10:21:35.652369 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Apr 21 10:21:35.652383 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 21 10:21:35.652394 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 21 10:21:35.654226 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 21 10:21:35.659238 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 21 10:21:35.708285 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (454)
Apr 21 10:21:35.712239 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (446)
Apr 21 10:21:35.719976 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 21 10:21:35.726257 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 21 10:21:35.732423 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 21 10:21:35.737945 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 21 10:21:35.738973 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 21 10:21:35.746388 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 21 10:21:35.766363 disk-uuid[569]: Primary Header is updated.
Apr 21 10:21:35.766363 disk-uuid[569]: Secondary Entries is updated.
Apr 21 10:21:35.766363 disk-uuid[569]: Secondary Header is updated.
Apr 21 10:21:35.772227 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 21 10:21:35.781221 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 21 10:21:36.782274 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 21 10:21:36.783224 disk-uuid[570]: The operation has completed successfully.
Apr 21 10:21:36.836525 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 21 10:21:36.836665 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 21 10:21:36.852374 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 21 10:21:36.856664 sh[584]: Success
Apr 21 10:21:36.874237 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 21 10:21:36.918723 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 21 10:21:36.935311 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 21 10:21:36.937512 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 21 10:21:36.958460 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539
Apr 21 10:21:36.958494 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:21:36.961959 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 21 10:21:36.967509 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 21 10:21:36.967530 kernel: BTRFS info (device dm-0): using free space tree
Apr 21 10:21:36.979245 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 21 10:21:36.982492 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 21 10:21:36.984493 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 21 10:21:36.991555 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 21 10:21:37.007515 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 21 10:21:37.030229 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:21:37.030288 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:21:37.034714 kernel: BTRFS info (device sda6): using free space tree
Apr 21 10:21:37.042415 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 21 10:21:37.042445 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 10:21:37.057735 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 21 10:21:37.062867 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:21:37.070473 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 21 10:21:37.078533 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 21 10:21:37.134613 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:21:37.144378 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:21:37.174472 ignition[703]: Ignition 2.19.0
Apr 21 10:21:37.174489 ignition[703]: Stage: fetch-offline
Apr 21 10:21:37.174534 ignition[703]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:21:37.174546 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:21:37.174645 ignition[703]: parsed url from cmdline: ""
Apr 21 10:21:37.179345 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:21:37.174650 ignition[703]: no config URL provided
Apr 21 10:21:37.179671 systemd-networkd[766]: lo: Link UP
Apr 21 10:21:37.174656 ignition[703]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 10:21:37.179676 systemd-networkd[766]: lo: Gained carrier
Apr 21 10:21:37.174667 ignition[703]: no config at "/usr/lib/ignition/user.ign"
Apr 21 10:21:37.181845 systemd-networkd[766]: Enumeration completed
Apr 21 10:21:37.174672 ignition[703]: failed to fetch config: resource requires networking
Apr 21 10:21:37.182769 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:21:37.174848 ignition[703]: Ignition finished successfully
Apr 21 10:21:37.182774 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:21:37.184004 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:21:37.185053 systemd-networkd[766]: eth0: Link UP
Apr 21 10:21:37.185058 systemd-networkd[766]: eth0: Gained carrier
Apr 21 10:21:37.185066 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:21:37.185451 systemd[1]: Reached target network.target - Network.
Apr 21 10:21:37.192463 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 21 10:21:37.210277 ignition[773]: Ignition 2.19.0
Apr 21 10:21:37.210292 ignition[773]: Stage: fetch
Apr 21 10:21:37.210592 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:21:37.210608 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:21:37.210750 ignition[773]: parsed url from cmdline: ""
Apr 21 10:21:37.210756 ignition[773]: no config URL provided
Apr 21 10:21:37.210763 ignition[773]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 10:21:37.210775 ignition[773]: no config at "/usr/lib/ignition/user.ign"
Apr 21 10:21:37.210803 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #1
Apr 21 10:21:37.211035 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 21 10:21:37.411260 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #2
Apr 21 10:21:37.411443 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 21 10:21:37.811854 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #3
Apr 21 10:21:37.812276 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 21 10:21:37.914282 systemd-networkd[766]: eth0: DHCPv4 address 172.234.196.117/24, gateway 172.234.196.1 acquired from 23.213.15.244
Apr 21 10:21:38.612455 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #4
Apr 21 10:21:38.709034 ignition[773]: PUT result: OK
Apr 21 10:21:38.709747 ignition[773]: GET http://169.254.169.254/v1/user-data: attempt #1
Apr 21 10:21:38.818266 ignition[773]: GET result: OK
Apr 21 10:21:38.818386 ignition[773]: parsing config with SHA512: ed25d5f9842617a76b51dbdc625804f2e529de8d467d45d7f6f10b1c3048dd23c4667dc3d54850a2495e518c80db4862fe6f82e08a8d1ef6b914fea3109e15f6
Apr 21 10:21:38.821689 unknown[773]: fetched base config from "system"
Apr 21 10:21:38.821707 unknown[773]: fetched base config from "system"
Apr 21 10:21:38.822026 ignition[773]: fetch: fetch complete
Apr 21 10:21:38.821714 unknown[773]: fetched user config from "akamai"
Apr 21 10:21:38.822032 ignition[773]: fetch: fetch passed
Apr 21 10:21:38.824529 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 21 10:21:38.822072 ignition[773]: Ignition finished successfully
Apr 21 10:21:38.830395 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 21 10:21:38.845129 ignition[781]: Ignition 2.19.0
Apr 21 10:21:38.845143 ignition[781]: Stage: kargs
Apr 21 10:21:38.845506 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:21:38.845518 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:21:38.848947 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 21 10:21:38.846389 ignition[781]: kargs: kargs passed
Apr 21 10:21:38.846440 ignition[781]: Ignition finished successfully
Apr 21 10:21:38.864372 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 21 10:21:38.878834 ignition[787]: Ignition 2.19.0
Apr 21 10:21:38.878848 ignition[787]: Stage: disks
Apr 21 10:21:38.879000 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:21:38.879015 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:21:38.881148 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 21 10:21:38.879688 ignition[787]: disks: disks passed
Apr 21 10:21:38.905160 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 21 10:21:38.879729 ignition[787]: Ignition finished successfully
Apr 21 10:21:38.906230 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 10:21:38.907692 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:21:38.909046 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:21:38.910655 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:21:38.917384 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 21 10:21:38.934518 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 21 10:21:38.938415 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 21 10:21:38.945518 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 21 10:21:39.036344 kernel: EXT4-fs (sda9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none.
Apr 21 10:21:39.036951 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 21 10:21:39.038785 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:21:39.044264 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:21:39.047312 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 21 10:21:39.050144 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 21 10:21:39.050187 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 21 10:21:39.050981 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:21:39.061229 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (804)
Apr 21 10:21:39.065422 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:21:39.065446 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:21:39.066586 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 21 10:21:39.070610 kernel: BTRFS info (device sda6): using free space tree
Apr 21 10:21:39.077650 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 21 10:21:39.077683 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 10:21:39.079337 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 21 10:21:39.081698 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:21:39.129556 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory
Apr 21 10:21:39.135359 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory
Apr 21 10:21:39.140891 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory
Apr 21 10:21:39.147624 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 21 10:21:39.158611 systemd-networkd[766]: eth0: Gained IPv6LL
Apr 21 10:21:39.245916 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 21 10:21:39.255445 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 21 10:21:39.258399 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 21 10:21:39.267051 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 21 10:21:39.270934 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:21:39.290496 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 21 10:21:39.297831 ignition[921]: INFO : Ignition 2.19.0
Apr 21 10:21:39.297831 ignition[921]: INFO : Stage: mount
Apr 21 10:21:39.301291 ignition[921]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:21:39.301291 ignition[921]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:21:39.301291 ignition[921]: INFO : mount: mount passed
Apr 21 10:21:39.301291 ignition[921]: INFO : Ignition finished successfully
Apr 21 10:21:39.302052 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 21 10:21:39.310336 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 21 10:21:40.050399 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:21:40.064272 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (933)
Apr 21 10:21:40.069445 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:21:40.069479 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:21:40.072523 kernel: BTRFS info (device sda6): using free space tree
Apr 21 10:21:40.079784 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 21 10:21:40.079836 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 10:21:40.085588 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:21:40.111936 ignition[950]: INFO : Ignition 2.19.0
Apr 21 10:21:40.113024 ignition[950]: INFO : Stage: files
Apr 21 10:21:40.113918 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:21:40.114853 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:21:40.117252 ignition[950]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 10:21:40.119545 ignition[950]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 10:21:40.119545 ignition[950]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 10:21:40.122957 ignition[950]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 10:21:40.124410 ignition[950]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 10:21:40.125956 ignition[950]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 10:21:40.125117 unknown[950]: wrote ssh authorized keys file for user: core
Apr 21 10:21:40.128446 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:21:40.129816 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 21 10:21:40.359777 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 21 10:21:40.395540 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:21:40.395540 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 21 10:21:40.398282 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 21 10:21:40.905583 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 21 10:21:41.161665 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 21 10:21:41.161665 ignition[950]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 21 10:21:41.166429 ignition[950]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:21:41.166429 ignition[950]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:21:41.166429 ignition[950]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 21 10:21:41.166429 ignition[950]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 21 10:21:41.166429 ignition[950]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 21 10:21:41.166429 ignition[950]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 21 10:21:41.166429 ignition[950]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 21 10:21:41.166429 ignition[950]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 10:21:41.166429 ignition[950]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 10:21:41.166429 ignition[950]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:21:41.166429 ignition[950]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:21:41.166429 ignition[950]: INFO : files: files passed
Apr 21 10:21:41.166429 ignition[950]: INFO : Ignition finished successfully
Apr 21 10:21:41.166710 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 10:21:41.200358 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 10:21:41.203319 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 10:21:41.205529 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 10:21:41.205641 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 10:21:41.220314 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:21:41.222018 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:21:41.223732 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:21:41.225195 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:21:41.226473 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 10:21:41.239329 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 10:21:41.269082 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 10:21:41.269252 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 10:21:41.271028 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 10:21:41.272924 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 10:21:41.274620 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 10:21:41.280339 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 10:21:41.294719 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:21:41.300398 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 10:21:41.311569 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:21:41.312529 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:21:41.314428 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 10:21:41.315988 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 10:21:41.316115 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:21:41.319013 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 10:21:41.320076 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 10:21:41.321698 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 10:21:41.323124 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:21:41.324846 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 10:21:41.326498 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 10:21:41.328229 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:21:41.330156 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 10:21:41.331861 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 10:21:41.333715 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 10:21:41.335149 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 10:21:41.335388 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:21:41.336997 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:21:41.338181 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:21:41.339847 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 10:21:41.339973 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:21:41.341567 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 10:21:41.341681 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:21:41.343814 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 10:21:41.343935 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:21:41.344926 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 10:21:41.345245 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 10:21:41.356391 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 10:21:41.357242 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 10:21:41.357395 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:21:41.364401 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 10:21:41.365900 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 10:21:41.366993 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:21:41.367873 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 10:21:41.373313 ignition[1002]: INFO : Ignition 2.19.0
Apr 21 10:21:41.373313 ignition[1002]: INFO : Stage: umount
Apr 21 10:21:41.373313 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:21:41.373313 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:21:41.373313 ignition[1002]: INFO : umount: umount passed
Apr 21 10:21:41.373313 ignition[1002]: INFO : Ignition finished successfully
Apr 21 10:21:41.367976 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:21:41.373730 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 10:21:41.373842 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 10:21:41.381413 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 10:21:41.381547 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 10:21:41.384837 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 10:21:41.384891 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 10:21:41.385718 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 10:21:41.385770 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 10:21:41.387741 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 21 10:21:41.387794 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 21 10:21:41.391290 systemd[1]: Stopped target network.target - Network.
Apr 21 10:21:41.392335 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 10:21:41.392393 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:21:41.395595 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 10:21:41.398198 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 10:21:41.424475 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:21:41.425597 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 10:21:41.427261 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 10:21:41.428975 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 10:21:41.429031 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:21:41.430823 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 10:21:41.430877 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:21:41.432453 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 10:21:41.432509 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 10:21:41.434143 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 10:21:41.434194 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 10:21:41.436308 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 10:21:41.438380 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 10:21:41.441774 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 10:21:41.442670 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 10:21:41.442779 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 10:21:41.443423 systemd-networkd[766]: eth0: DHCPv6 lease lost
Apr 21 10:21:41.445519 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 10:21:41.445607 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 10:21:41.447694 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 10:21:41.447833 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 10:21:41.451756 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 10:21:41.451917 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 10:21:41.454973 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 10:21:41.455018 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:21:41.462368 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 10:21:41.463590 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 10:21:41.463647 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:21:41.465274 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 10:21:41.465325 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:21:41.466963 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 10:21:41.467016 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:21:41.467776 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 10:21:41.467825 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:21:41.469593 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:21:41.485729 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 10:21:41.485908 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:21:41.488460 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 10:21:41.488569 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 10:21:41.490188 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 10:21:41.490282 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:21:41.491452 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 10:21:41.491496 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:21:41.492866 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 10:21:41.492922 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:21:41.494992 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 10:21:41.495043 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:21:41.496467 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:21:41.496518 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:21:41.506465 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 10:21:41.507321 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 10:21:41.507379 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:21:41.511570 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:21:41.511627 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:21:41.512941 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 10:21:41.513053 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 10:21:41.514877 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 10:21:41.523530 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 10:21:41.531370 systemd[1]: Switching root.
Apr 21 10:21:41.563677 systemd-journald[178]: Journal stopped
Apr 21 10:21:42.850719 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Apr 21 10:21:42.850758 kernel: SELinux: policy capability network_peer_controls=1
Apr 21 10:21:42.850778 kernel: SELinux: policy capability open_perms=1
Apr 21 10:21:42.850792 kernel: SELinux: policy capability extended_socket_class=1
Apr 21 10:21:42.850813 kernel: SELinux: policy capability always_check_network=0
Apr 21 10:21:42.850828 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 21 10:21:42.850844 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 21 10:21:42.850858 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 21 10:21:42.850873 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 21 10:21:42.850888 kernel: audit: type=1403 audit(1776766901.762:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 21 10:21:42.850905 systemd[1]: Successfully loaded SELinux policy in 54.224ms.
Apr 21 10:21:42.850928 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.075ms.
Apr 21 10:21:42.850945 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:21:42.850961 systemd[1]: Detected virtualization kvm.
Apr 21 10:21:42.850978 systemd[1]: Detected architecture x86-64.
Apr 21 10:21:42.850994 systemd[1]: Detected first boot.
Apr 21 10:21:42.851016 systemd[1]: Initializing machine ID from random generator.
Apr 21 10:21:42.851033 zram_generator::config[1045]: No configuration found.
Apr 21 10:21:42.851051 systemd[1]: Populated /etc with preset unit settings.
Apr 21 10:21:42.851069 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 21 10:21:42.851087 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 21 10:21:42.851103 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:21:42.851123 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 10:21:42.851146 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 10:21:42.851162 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 10:21:42.851180 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 10:21:42.851196 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 10:21:42.851255 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 10:21:42.851275 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 10:21:42.851292 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 21 10:21:42.851315 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:21:42.851334 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:21:42.851351 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 21 10:21:42.851368 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 21 10:21:42.851385 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 21 10:21:42.851402 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:21:42.851418 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 21 10:21:42.851434 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:21:42.851456 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 21 10:21:42.851472 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 21 10:21:42.851495 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:21:42.851512 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 21 10:21:42.851553 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:21:42.851569 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:21:42.851585 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:21:42.851603 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:21:42.851625 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 21 10:21:42.851643 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 21 10:21:42.851659 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:21:42.851677 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:21:42.851694 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:21:42.851716 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 21 10:21:42.851733 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 21 10:21:42.851750 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 21 10:21:42.851767 systemd[1]: Mounting media.mount - External Media Directory...
Apr 21 10:21:42.851784 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:21:42.851802 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 21 10:21:42.851819 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 21 10:21:42.851836 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 21 10:21:42.851857 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 21 10:21:42.851874 systemd[1]: Reached target machines.target - Containers.
Apr 21 10:21:42.851891 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 21 10:21:42.851907 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:21:42.851924 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:21:42.851942 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 21 10:21:42.851959 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:21:42.851977 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:21:42.852001 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:21:42.852019 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 10:21:42.852035 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:21:42.852054 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 10:21:42.852071 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 21 10:21:42.852089 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 21 10:21:42.852107 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 21 10:21:42.852125 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 21 10:21:42.852149 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:21:42.852164 kernel: fuse: init (API version 7.39)
Apr 21 10:21:42.852174 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:21:42.852185 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 10:21:42.852195 kernel: ACPI: bus type drm_connector registered
Apr 21 10:21:42.852230 kernel: loop: module loaded
Apr 21 10:21:42.852250 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 10:21:42.852268 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:21:42.852286 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 21 10:21:42.852311 systemd[1]: Stopped verity-setup.service.
Apr 21 10:21:42.852336 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:21:42.852354 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 10:21:42.852370 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 10:21:42.852416 systemd-journald[1128]: Collecting audit messages is disabled.
Apr 21 10:21:42.852462 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 10:21:42.852481 systemd-journald[1128]: Journal started
Apr 21 10:21:42.852512 systemd-journald[1128]: Runtime Journal (/run/log/journal/45ad16e749084106b3d3c55206c1ee00) is 8.0M, max 78.3M, 70.3M free.
Apr 21 10:21:42.404976 systemd[1]: Queued start job for default target multi-user.target.
Apr 21 10:21:42.424705 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 21 10:21:42.425180 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 21 10:21:42.856280 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:21:42.856701 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 10:21:42.857766 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 10:21:42.858755 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 10:21:42.859883 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 10:21:42.861044 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:21:42.862327 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 10:21:42.862589 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 10:21:42.863755 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:21:42.864005 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:21:42.865354 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:21:42.865606 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:21:42.866897 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:21:42.867079 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:21:42.868496 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 10:21:42.868748 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 10:21:42.870017 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:21:42.870356 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:21:42.871577 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:21:42.872732 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 10:21:42.874019 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 10:21:42.891771 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 10:21:42.903420 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 10:21:42.913048 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 10:21:42.935947 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 10:21:42.936078 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:21:42.938479 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 21 10:21:42.947568 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 21 10:21:42.956844 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 21 10:21:42.958513 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:21:42.962537 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 10:21:42.971736 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 10:21:42.973166 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:21:42.980861 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 10:21:42.982061 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:21:42.993397 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:21:43.003304 systemd-journald[1128]: Time spent on flushing to /var/log/journal/45ad16e749084106b3d3c55206c1ee00 is 74.561ms for 970 entries.
Apr 21 10:21:43.003304 systemd-journald[1128]: System Journal (/var/log/journal/45ad16e749084106b3d3c55206c1ee00) is 8.0M, max 195.6M, 187.6M free.
Apr 21 10:21:43.111382 systemd-journald[1128]: Received client request to flush runtime journal.
Apr 21 10:21:43.111424 kernel: loop0: detected capacity change from 0 to 8
Apr 21 10:21:43.111449 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 21 10:21:42.999361 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 21 10:21:43.009361 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 10:21:43.012568 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:21:43.013650 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 10:21:43.015707 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 10:21:43.026222 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 21 10:21:43.054563 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 21 10:21:43.058444 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 10:21:43.062424 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 10:21:43.074530 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 21 10:21:43.114938 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 10:21:43.124726 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 21 10:21:43.134275 kernel: loop1: detected capacity change from 0 to 219192
Apr 21 10:21:43.127491 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 21 10:21:43.133127 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:21:43.141685 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 21 10:21:43.166807 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 10:21:43.176425 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:21:43.203011 kernel: loop2: detected capacity change from 0 to 142488
Apr 21 10:21:43.215471 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Apr 21 10:21:43.217552 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Apr 21 10:21:43.239801 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:21:43.257243 kernel: loop3: detected capacity change from 0 to 140768
Apr 21 10:21:43.301244 kernel: loop4: detected capacity change from 0 to 8
Apr 21 10:21:43.304271 kernel: loop5: detected capacity change from 0 to 219192
Apr 21 10:21:43.323294 kernel: loop6: detected capacity change from 0 to 142488
Apr 21 10:21:43.343241 kernel: loop7: detected capacity change from 0 to 140768
Apr 21 10:21:43.363360 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Apr 21 10:21:43.365502 (sd-merge)[1191]: Merged extensions into '/usr'.
Apr 21 10:21:43.369637 systemd[1]: Reloading requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 21 10:21:43.369727 systemd[1]: Reloading...
Apr 21 10:21:43.475478 zram_generator::config[1217]: No configuration found.
Apr 21 10:21:43.559086 ldconfig[1160]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 21 10:21:43.634714 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:21:43.681625 systemd[1]: Reloading finished in 311 ms.
Apr 21 10:21:43.722444 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 21 10:21:43.723901 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 21 10:21:43.725012 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 21 10:21:43.735485 systemd[1]: Starting ensure-sysext.service...
Apr 21 10:21:43.739365 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:21:43.749371 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:21:43.756360 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)...
Apr 21 10:21:43.756381 systemd[1]: Reloading...
Apr 21 10:21:43.783810 systemd-udevd[1263]: Using default interface naming scheme 'v255'.
Apr 21 10:21:43.793456 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 21 10:21:43.793814 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 21 10:21:43.794990 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 21 10:21:43.796776 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Apr 21 10:21:43.796871 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Apr 21 10:21:43.803535 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:21:43.803549 systemd-tmpfiles[1262]: Skipping /boot
Apr 21 10:21:43.816571 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:21:43.816589 systemd-tmpfiles[1262]: Skipping /boot
Apr 21 10:21:43.861261 zram_generator::config[1290]: No configuration found.
Apr 21 10:21:44.004380 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:21:44.054264 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1351)
Apr 21 10:21:44.076487 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 21 10:21:44.076990 systemd[1]: Reloading finished in 320 ms.
Apr 21 10:21:44.097010 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:21:44.100233 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 21 10:21:44.116646 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:21:44.168576 kernel: EDAC MC: Ver: 3.0.0
Apr 21 10:21:44.168652 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 21 10:21:44.174356 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 21 10:21:44.174647 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 21 10:21:44.180246 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 21 10:21:44.186230 kernel: ACPI: button: Power Button [PWRF]
Apr 21 10:21:44.197072 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 21 10:21:44.202740 systemd[1]: Finished ensure-sysext.service.
Apr 21 10:21:44.220780 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:21:44.227794 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:21:44.233592 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 21 10:21:44.234857 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:21:44.236779 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:21:44.243441 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:21:44.249193 kernel: mousedev: PS/2 mouse device common for all mice
Apr 21 10:21:44.248440 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:21:44.253475 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:21:44.254671 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:21:44.258414 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 21 10:21:44.263460 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 21 10:21:44.271449 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:21:44.281447 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:21:44.305040 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 21 10:21:44.309125 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 21 10:21:44.310137 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:21:44.312728 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:21:44.313644 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:21:44.316151 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:21:44.316733 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:21:44.318719 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:21:44.319450 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:21:44.323865 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:21:44.324038 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:21:44.335552 augenrules[1395]: No rules
Apr 21 10:21:44.338646 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:21:44.351883 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:21:44.352035 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:21:44.364411 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 21 10:21:44.368411 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:21:44.371301 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 21 10:21:44.372980 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 21 10:21:44.375685 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 21 10:21:44.376952 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 21 10:21:44.394376 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 21 10:21:44.400362 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 21 10:21:44.417335 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:21:44.434493 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 21 10:21:44.437067 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 10:21:44.443513 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 21 10:21:44.445748 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 21 10:21:44.448002 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 21 10:21:44.451076 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:21:44.469888 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 21 10:21:44.488237 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:21:44.515892 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 21 10:21:44.613642 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 21 10:21:44.616613 systemd-networkd[1382]: lo: Link UP
Apr 21 10:21:44.616960 systemd-networkd[1382]: lo: Gained carrier
Apr 21 10:21:44.620325 systemd-timesyncd[1384]: No network connectivity, watching for changes.
Apr 21 10:21:44.621514 systemd-networkd[1382]: Enumeration completed
Apr 21 10:21:44.622045 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:21:44.622050 systemd-networkd[1382]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:21:44.623126 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:21:44.624702 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:21:44.625508 systemd-networkd[1382]: eth0: Link UP
Apr 21 10:21:44.625518 systemd-networkd[1382]: eth0: Gained carrier
Apr 21 10:21:44.625531 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:21:44.626640 systemd-resolved[1383]: Positive Trust Anchors:
Apr 21 10:21:44.626770 systemd[1]: Reached target time-set.target - System Time Set.
Apr 21 10:21:44.626933 systemd-resolved[1383]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:21:44.627005 systemd-resolved[1383]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:21:44.632348 systemd-resolved[1383]: Defaulting to hostname 'linux'.
Apr 21 10:21:44.634612 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 21 10:21:44.635670 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:21:44.636727 systemd[1]: Reached target network.target - Network.
Apr 21 10:21:44.637591 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:21:44.638388 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:21:44.639468 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 21 10:21:44.640544 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 21 10:21:44.641693 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 21 10:21:44.642621 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 21 10:21:44.643525 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 21 10:21:44.644488 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 21 10:21:44.644524 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:21:44.645432 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:21:44.647096 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 21 10:21:44.649555 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 21 10:21:44.656016 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 21 10:21:44.657532 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 21 10:21:44.658378 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:21:44.659255 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:21:44.660105 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:21:44.660141 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:21:44.661496 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 21 10:21:44.667376 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 21 10:21:44.676422 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 21 10:21:44.679331 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 21 10:21:44.692622 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 21 10:21:44.693924 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 21 10:21:44.700216 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 21 10:21:44.704362 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 21 10:21:44.707384 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 21 10:21:44.710329 jq[1439]: false
Apr 21 10:21:44.714840 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 21 10:21:44.726640 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 21 10:21:44.728060 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 21 10:21:44.729688 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 21 10:21:44.732550 systemd[1]: Starting update-engine.service - Update Engine...
Apr 21 10:21:44.737474 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 21 10:21:44.742023 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 21 10:21:44.742858 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 21 10:21:44.743276 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 21 10:21:44.744500 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 21 10:21:44.764860 extend-filesystems[1440]: Found loop4
Apr 21 10:21:44.765944 extend-filesystems[1440]: Found loop5
Apr 21 10:21:44.767144 extend-filesystems[1440]: Found loop6
Apr 21 10:21:44.767144 extend-filesystems[1440]: Found loop7
Apr 21 10:21:44.767144 extend-filesystems[1440]: Found sda
Apr 21 10:21:44.767144 extend-filesystems[1440]: Found sda1
Apr 21 10:21:44.767144 extend-filesystems[1440]: Found sda2
Apr 21 10:21:44.767144 extend-filesystems[1440]: Found sda3
Apr 21 10:21:44.767144 extend-filesystems[1440]: Found usr
Apr 21 10:21:44.767144 extend-filesystems[1440]: Found sda4
Apr 21 10:21:44.767144 extend-filesystems[1440]: Found sda6
Apr 21 10:21:44.767144 extend-filesystems[1440]: Found sda7
Apr 21 10:21:44.767144 extend-filesystems[1440]: Found sda9
Apr 21 10:21:44.767144 extend-filesystems[1440]: Checking size of /dev/sda9
Apr 21 10:21:44.794545 (ntainerd)[1457]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 21 10:21:44.800804 dbus-daemon[1438]: [system] SELinux support is enabled
Apr 21 10:21:44.802390 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 21 10:21:44.808993 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 21 10:21:44.810013 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 21 10:21:44.812304 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 21 10:21:44.812335 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 21 10:21:44.816489 coreos-metadata[1437]: Apr 21 10:21:44.816 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Apr 21 10:21:44.816727 update_engine[1449]: I20260421 10:21:44.816157  1449 main.cc:92] Flatcar Update Engine starting
Apr 21 10:21:44.826243 jq[1450]: true
Apr 21 10:21:44.835376 update_engine[1449]: I20260421 10:21:44.834117  1449 update_check_scheduler.cc:74] Next update check in 5m39s
Apr 21 10:21:44.836428 systemd[1]: Started update-engine.service - Update Engine.
Apr 21 10:21:44.842782 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 21 10:21:44.845638 extend-filesystems[1440]: Resized partition /dev/sda9
Apr 21 10:21:44.854656 extend-filesystems[1477]: resize2fs 1.47.1 (20-May-2024)
Apr 21 10:21:44.868285 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Apr 21 10:21:44.883545 jq[1471]: true
Apr 21 10:21:44.886973 tar[1462]: linux-amd64/LICENSE
Apr 21 10:21:44.886973 tar[1462]: linux-amd64/helm
Apr 21 10:21:44.901292 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1293)
Apr 21 10:21:44.904366 systemd[1]: motdgen.service: Deactivated successfully.
Apr 21 10:21:44.904652 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 21 10:21:45.052788 systemd-logind[1447]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 21 10:21:45.052821 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 21 10:21:45.055825 systemd-logind[1447]: New seat seat0.
Apr 21 10:21:45.061036 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 21 10:21:45.070431 bash[1501]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:21:45.069935 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 21 10:21:45.090677 systemd[1]: Starting sshkeys.service...
Apr 21 10:21:45.116970 containerd[1457]: time="2026-04-21T10:21:45.116699520Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 21 10:21:45.131625 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 21 10:21:45.141481 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 21 10:21:45.155215 containerd[1457]: time="2026-04-21T10:21:45.155170600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:21:45.155873 sshd_keygen[1476]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 21 10:21:45.163000 containerd[1457]: time="2026-04-21T10:21:45.161581000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:21:45.163000 containerd[1457]: time="2026-04-21T10:21:45.161620610Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 21 10:21:45.163000 containerd[1457]: time="2026-04-21T10:21:45.161638670Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 21 10:21:45.163000 containerd[1457]: time="2026-04-21T10:21:45.162388960Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 21 10:21:45.163000 containerd[1457]: time="2026-04-21T10:21:45.162407520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 21 10:21:45.163000 containerd[1457]: time="2026-04-21T10:21:45.162764430Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:21:45.163000 containerd[1457]: time="2026-04-21T10:21:45.162778590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:21:45.167690 containerd[1457]: time="2026-04-21T10:21:45.167432960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:21:45.167690 containerd[1457]: time="2026-04-21T10:21:45.167464740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 21 10:21:45.167690 containerd[1457]: time="2026-04-21T10:21:45.167485010Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:21:45.167690 containerd[1457]: time="2026-04-21T10:21:45.167498220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 21 10:21:45.167690 containerd[1457]: time="2026-04-21T10:21:45.167627860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:21:45.167919 containerd[1457]: time="2026-04-21T10:21:45.167897350Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:21:45.168054 containerd[1457]: time="2026-04-21T10:21:45.168033970Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:21:45.168087 containerd[1457]: time="2026-04-21T10:21:45.168052860Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 21 10:21:45.168165 containerd[1457]: time="2026-04-21T10:21:45.168145730Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 21 10:21:45.171434 containerd[1457]: time="2026-04-21T10:21:45.171310380Z" level=info msg="metadata content store policy set" policy=shared
Apr 21 10:21:45.171365 locksmithd[1474]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 21 10:21:45.187239 containerd[1457]: time="2026-04-21T10:21:45.186930020Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 21 10:21:45.187239 containerd[1457]: time="2026-04-21T10:21:45.186978040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 21 10:21:45.187239 containerd[1457]: time="2026-04-21T10:21:45.186993680Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 21 10:21:45.187239 containerd[1457]: time="2026-04-21T10:21:45.187007500Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 21 10:21:45.187239 containerd[1457]: time="2026-04-21T10:21:45.187065750Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 21 10:21:45.187393 containerd[1457]: time="2026-04-21T10:21:45.187311770Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..."
type=io.containerd.monitor.v1 Apr 21 10:21:45.187538 containerd[1457]: time="2026-04-21T10:21:45.187508720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 21 10:21:45.187656 containerd[1457]: time="2026-04-21T10:21:45.187627320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 21 10:21:45.187656 containerd[1457]: time="2026-04-21T10:21:45.187651980Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 21 10:21:45.187714 containerd[1457]: time="2026-04-21T10:21:45.187665790Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 21 10:21:45.187714 containerd[1457]: time="2026-04-21T10:21:45.187678050Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 21 10:21:45.187714 containerd[1457]: time="2026-04-21T10:21:45.187691280Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 21 10:21:45.187714 containerd[1457]: time="2026-04-21T10:21:45.187701680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 21 10:21:45.187714 containerd[1457]: time="2026-04-21T10:21:45.187713980Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 21 10:21:45.187829 containerd[1457]: time="2026-04-21T10:21:45.187727190Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 21 10:21:45.187829 containerd[1457]: time="2026-04-21T10:21:45.187740340Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Apr 21 10:21:45.187829 containerd[1457]: time="2026-04-21T10:21:45.187752080Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 21 10:21:45.187829 containerd[1457]: time="2026-04-21T10:21:45.187761930Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 21 10:21:45.189239 containerd[1457]: time="2026-04-21T10:21:45.187964100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 21 10:21:45.189239 containerd[1457]: time="2026-04-21T10:21:45.187993330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 21 10:21:45.189239 containerd[1457]: time="2026-04-21T10:21:45.188005120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 21 10:21:45.189239 containerd[1457]: time="2026-04-21T10:21:45.188016530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 21 10:21:45.189239 containerd[1457]: time="2026-04-21T10:21:45.188028010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 21 10:21:45.189239 containerd[1457]: time="2026-04-21T10:21:45.188039110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 21 10:21:45.189239 containerd[1457]: time="2026-04-21T10:21:45.188050440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 21 10:21:45.189239 containerd[1457]: time="2026-04-21T10:21:45.188061420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 21 10:21:45.189239 containerd[1457]: time="2026-04-21T10:21:45.188071830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Apr 21 10:21:45.189239 containerd[1457]: time="2026-04-21T10:21:45.188084700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 21 10:21:45.189239 containerd[1457]: time="2026-04-21T10:21:45.188094940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 21 10:21:45.189239 containerd[1457]: time="2026-04-21T10:21:45.188105550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 21 10:21:45.189239 containerd[1457]: time="2026-04-21T10:21:45.188116090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 21 10:21:45.189239 containerd[1457]: time="2026-04-21T10:21:45.188129190Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 21 10:21:45.189239 containerd[1457]: time="2026-04-21T10:21:45.188147260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 21 10:21:45.189590 containerd[1457]: time="2026-04-21T10:21:45.188157960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 21 10:21:45.189590 containerd[1457]: time="2026-04-21T10:21:45.188168040Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 21 10:21:45.189590 containerd[1457]: time="2026-04-21T10:21:45.188242330Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 21 10:21:45.189590 containerd[1457]: time="2026-04-21T10:21:45.188259260Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 21 10:21:45.189590 containerd[1457]: time="2026-04-21T10:21:45.188270200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 21 10:21:45.189590 containerd[1457]: time="2026-04-21T10:21:45.188280640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 21 10:21:45.189590 containerd[1457]: time="2026-04-21T10:21:45.188289670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 21 10:21:45.189590 containerd[1457]: time="2026-04-21T10:21:45.188300030Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 21 10:21:45.189590 containerd[1457]: time="2026-04-21T10:21:45.188313830Z" level=info msg="NRI interface is disabled by configuration." Apr 21 10:21:45.189590 containerd[1457]: time="2026-04-21T10:21:45.188325890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 21 10:21:45.189829 containerd[1457]: time="2026-04-21T10:21:45.188542690Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 21 10:21:45.189829 containerd[1457]: time="2026-04-21T10:21:45.188591890Z" level=info msg="Connect containerd service" Apr 21 10:21:45.189829 containerd[1457]: time="2026-04-21T10:21:45.188626180Z" level=info msg="using legacy CRI server" Apr 21 10:21:45.189829 containerd[1457]: time="2026-04-21T10:21:45.188632920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 21 10:21:45.189829 containerd[1457]: time="2026-04-21T10:21:45.188713330Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 21 10:21:45.194155 containerd[1457]: time="2026-04-21T10:21:45.194119960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 10:21:45.194535 containerd[1457]: time="2026-04-21T10:21:45.194462890Z" level=info msg="Start subscribing containerd event" Apr 21 10:21:45.194569 containerd[1457]: time="2026-04-21T10:21:45.194540510Z" level=info msg="Start recovering state" Apr 21 10:21:45.195132 containerd[1457]: time="2026-04-21T10:21:45.194597880Z" level=info msg="Start event monitor" Apr 21 10:21:45.195132 containerd[1457]: time="2026-04-21T10:21:45.195126300Z" level=info msg="Start 
snapshots syncer" Apr 21 10:21:45.195190 containerd[1457]: time="2026-04-21T10:21:45.195139920Z" level=info msg="Start cni network conf syncer for default" Apr 21 10:21:45.195190 containerd[1457]: time="2026-04-21T10:21:45.195147940Z" level=info msg="Start streaming server" Apr 21 10:21:45.196580 containerd[1457]: time="2026-04-21T10:21:45.196551760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 21 10:21:45.197971 containerd[1457]: time="2026-04-21T10:21:45.196828600Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 21 10:21:45.200234 systemd[1]: Started containerd.service - containerd container runtime. Apr 21 10:21:45.201183 containerd[1457]: time="2026-04-21T10:21:45.201148300Z" level=info msg="containerd successfully booted in 0.085587s" Apr 21 10:21:45.202970 coreos-metadata[1505]: Apr 21 10:21:45.202 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Apr 21 10:21:45.210337 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 21 10:21:45.219574 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 21 10:21:45.227894 systemd[1]: issuegen.service: Deactivated successfully. Apr 21 10:21:45.228339 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 21 10:21:45.240982 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 21 10:21:45.250855 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 21 10:21:45.262783 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 21 10:21:45.272663 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 21 10:21:45.275431 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Apr 21 10:21:45.274566 systemd[1]: Reached target getty.target - Login Prompts. 
Apr 21 10:21:45.288036 extend-filesystems[1477]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 21 10:21:45.288036 extend-filesystems[1477]: old_desc_blocks = 1, new_desc_blocks = 10 Apr 21 10:21:45.288036 extend-filesystems[1477]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Apr 21 10:21:45.290944 extend-filesystems[1440]: Resized filesystem in /dev/sda9 Apr 21 10:21:45.291662 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 21 10:21:45.291992 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 21 10:21:45.371474 systemd-networkd[1382]: eth0: DHCPv4 address 172.234.196.117/24, gateway 172.234.196.1 acquired from 23.213.15.244 Apr 21 10:21:45.371598 dbus-daemon[1438]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1382 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 21 10:21:45.373974 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection. Apr 21 10:21:45.382511 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 21 10:21:45.456321 dbus-daemon[1438]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 21 10:21:45.456561 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 21 10:21:45.457718 dbus-daemon[1438]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1534 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 21 10:21:45.469080 systemd[1]: Starting polkit.service - Authorization Manager... 
Apr 21 10:21:45.479767 polkitd[1535]: Started polkitd version 121 Apr 21 10:21:45.483804 polkitd[1535]: Loading rules from directory /etc/polkit-1/rules.d Apr 21 10:21:45.483861 polkitd[1535]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 21 10:21:45.484526 polkitd[1535]: Finished loading, compiling and executing 2 rules Apr 21 10:21:45.485495 dbus-daemon[1438]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 21 10:21:45.485629 systemd[1]: Started polkit.service - Authorization Manager. Apr 21 10:21:45.509615 polkitd[1535]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 21 10:21:45.523331 systemd-hostnamed[1534]: Hostname set to <172-234-196-117> (transient) Apr 21 10:21:45.523635 systemd-resolved[1383]: System hostname changed to '172-234-196-117'. Apr 21 10:21:46.665651 systemd-resolved[1383]: Clock change detected. Flushing caches. Apr 21 10:21:46.666235 systemd-timesyncd[1384]: Contacted time server 172.104.28.175:123 (3.flatcar.pool.ntp.org). Apr 21 10:21:46.666300 systemd-timesyncd[1384]: Initial clock synchronization to Tue 2026-04-21 10:21:46.665587 UTC. Apr 21 10:21:46.691868 tar[1462]: linux-amd64/README.md Apr 21 10:21:46.705366 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 21 10:21:46.872717 coreos-metadata[1437]: Apr 21 10:21:46.872 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Apr 21 10:21:46.964609 coreos-metadata[1437]: Apr 21 10:21:46.964 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Apr 21 10:21:47.155716 coreos-metadata[1437]: Apr 21 10:21:47.155 INFO Fetch successful Apr 21 10:21:47.155987 coreos-metadata[1437]: Apr 21 10:21:47.155 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Apr 21 10:21:47.180365 systemd-networkd[1382]: eth0: Gained IPv6LL Apr 21 10:21:47.184105 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Apr 21 10:21:47.185647 systemd[1]: Reached target network-online.target - Network is Online. Apr 21 10:21:47.195254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:21:47.198180 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 21 10:21:47.221461 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 21 10:21:47.258979 coreos-metadata[1505]: Apr 21 10:21:47.258 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Apr 21 10:21:47.351624 coreos-metadata[1505]: Apr 21 10:21:47.350 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Apr 21 10:21:47.510376 coreos-metadata[1505]: Apr 21 10:21:47.510 INFO Fetch successful Apr 21 10:21:47.527299 update-ssh-keys[1563]: Updated "/home/core/.ssh/authorized_keys" Apr 21 10:21:47.528394 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 21 10:21:47.531547 systemd[1]: Finished sshkeys.service. Apr 21 10:21:47.535124 coreos-metadata[1437]: Apr 21 10:21:47.535 INFO Fetch successful Apr 21 10:21:47.630246 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 21 10:21:47.632589 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 21 10:21:48.075129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:21:48.076514 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 21 10:21:48.080356 systemd[1]: Startup finished in 1.056s (kernel) + 8.036s (initrd) + 5.326s (userspace) = 14.418s. 
Apr 21 10:21:48.116398 (kubelet)[1590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 10:21:48.584353 kubelet[1590]: E0421 10:21:48.584075 1590 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 10:21:48.588304 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 10:21:48.588575 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 10:21:49.463791 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 21 10:21:49.465399 systemd[1]: Started sshd@0-172.234.196.117:22-50.85.169.122:48168.service - OpenSSH per-connection server daemon (50.85.169.122:48168). Apr 21 10:21:50.095727 sshd[1602]: Accepted publickey for core from 50.85.169.122 port 48168 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:21:50.097340 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:21:50.107815 systemd-logind[1447]: New session 1 of user core. Apr 21 10:21:50.109041 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 21 10:21:50.114254 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 21 10:21:50.129858 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 21 10:21:50.136458 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 21 10:21:50.140905 (systemd)[1606]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 21 10:21:50.234239 systemd[1606]: Queued start job for default target default.target. 
Apr 21 10:21:50.241316 systemd[1606]: Created slice app.slice - User Application Slice. Apr 21 10:21:50.241346 systemd[1606]: Reached target paths.target - Paths. Apr 21 10:21:50.241360 systemd[1606]: Reached target timers.target - Timers. Apr 21 10:21:50.242998 systemd[1606]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 21 10:21:50.255350 systemd[1606]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 21 10:21:50.256005 systemd[1606]: Reached target sockets.target - Sockets. Apr 21 10:21:50.256029 systemd[1606]: Reached target basic.target - Basic System. Apr 21 10:21:50.256095 systemd[1606]: Reached target default.target - Main User Target. Apr 21 10:21:50.256135 systemd[1606]: Startup finished in 108ms. Apr 21 10:21:50.256429 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 21 10:21:50.257926 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 21 10:21:50.719194 systemd[1]: Started sshd@1-172.234.196.117:22-50.85.169.122:57890.service - OpenSSH per-connection server daemon (50.85.169.122:57890). Apr 21 10:21:51.348012 sshd[1617]: Accepted publickey for core from 50.85.169.122 port 57890 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:21:51.349876 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:21:51.354837 systemd-logind[1447]: New session 2 of user core. Apr 21 10:21:51.361413 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 21 10:21:51.798735 sshd[1617]: pam_unix(sshd:session): session closed for user core Apr 21 10:21:51.804565 systemd-logind[1447]: Session 2 logged out. Waiting for processes to exit. Apr 21 10:21:51.805819 systemd[1]: sshd@1-172.234.196.117:22-50.85.169.122:57890.service: Deactivated successfully. Apr 21 10:21:51.809005 systemd[1]: session-2.scope: Deactivated successfully. Apr 21 10:21:51.810194 systemd-logind[1447]: Removed session 2. 
Apr 21 10:21:51.912659 systemd[1]: Started sshd@2-172.234.196.117:22-50.85.169.122:57892.service - OpenSSH per-connection server daemon (50.85.169.122:57892). Apr 21 10:21:52.540209 sshd[1624]: Accepted publickey for core from 50.85.169.122 port 57892 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:21:52.542501 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:21:52.547027 systemd-logind[1447]: New session 3 of user core. Apr 21 10:21:52.558252 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 21 10:21:52.981809 sshd[1624]: pam_unix(sshd:session): session closed for user core Apr 21 10:21:52.985550 systemd[1]: sshd@2-172.234.196.117:22-50.85.169.122:57892.service: Deactivated successfully. Apr 21 10:21:52.987644 systemd[1]: session-3.scope: Deactivated successfully. Apr 21 10:21:52.988238 systemd-logind[1447]: Session 3 logged out. Waiting for processes to exit. Apr 21 10:21:52.989278 systemd-logind[1447]: Removed session 3. Apr 21 10:21:53.096343 systemd[1]: Started sshd@3-172.234.196.117:22-50.85.169.122:57896.service - OpenSSH per-connection server daemon (50.85.169.122:57896). Apr 21 10:21:53.718710 sshd[1631]: Accepted publickey for core from 50.85.169.122 port 57896 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:21:53.719523 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:21:53.725299 systemd-logind[1447]: New session 4 of user core. Apr 21 10:21:53.731227 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 21 10:21:54.166523 sshd[1631]: pam_unix(sshd:session): session closed for user core Apr 21 10:21:54.171445 systemd-logind[1447]: Session 4 logged out. Waiting for processes to exit. Apr 21 10:21:54.172134 systemd[1]: sshd@3-172.234.196.117:22-50.85.169.122:57896.service: Deactivated successfully. Apr 21 10:21:54.174192 systemd[1]: session-4.scope: Deactivated successfully. 
Apr 21 10:21:54.175145 systemd-logind[1447]: Removed session 4. Apr 21 10:21:54.284360 systemd[1]: Started sshd@4-172.234.196.117:22-50.85.169.122:57904.service - OpenSSH per-connection server daemon (50.85.169.122:57904). Apr 21 10:21:54.912429 sshd[1638]: Accepted publickey for core from 50.85.169.122 port 57904 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:21:54.915365 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:21:54.921903 systemd-logind[1447]: New session 5 of user core. Apr 21 10:21:54.929360 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 21 10:21:55.275761 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 21 10:21:55.276301 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:21:55.296515 sudo[1641]: pam_unix(sudo:session): session closed for user root Apr 21 10:21:55.404480 sshd[1638]: pam_unix(sshd:session): session closed for user core Apr 21 10:21:55.414210 systemd[1]: sshd@4-172.234.196.117:22-50.85.169.122:57904.service: Deactivated successfully. Apr 21 10:21:55.415658 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit. Apr 21 10:21:55.418926 systemd[1]: session-5.scope: Deactivated successfully. Apr 21 10:21:55.421518 systemd-logind[1447]: Removed session 5. Apr 21 10:21:55.517280 systemd[1]: Started sshd@5-172.234.196.117:22-50.85.169.122:57918.service - OpenSSH per-connection server daemon (50.85.169.122:57918). Apr 21 10:21:56.119722 sshd[1646]: Accepted publickey for core from 50.85.169.122 port 57918 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:21:56.121485 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:21:56.125999 systemd-logind[1447]: New session 6 of user core. Apr 21 10:21:56.132172 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 21 10:21:56.455174 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 21 10:21:56.455521 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:21:56.459158 sudo[1650]: pam_unix(sudo:session): session closed for user root Apr 21 10:21:56.465472 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 21 10:21:56.465807 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:21:56.478291 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 21 10:21:56.481810 auditctl[1653]: No rules Apr 21 10:21:56.482336 systemd[1]: audit-rules.service: Deactivated successfully. Apr 21 10:21:56.482567 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 21 10:21:56.491543 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 21 10:21:56.516468 augenrules[1671]: No rules Apr 21 10:21:56.518703 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 21 10:21:56.520586 sudo[1649]: pam_unix(sudo:session): session closed for user root Apr 21 10:21:56.617738 sshd[1646]: pam_unix(sshd:session): session closed for user core Apr 21 10:21:56.621599 systemd[1]: sshd@5-172.234.196.117:22-50.85.169.122:57918.service: Deactivated successfully. Apr 21 10:21:56.623372 systemd[1]: session-6.scope: Deactivated successfully. Apr 21 10:21:56.623950 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit. Apr 21 10:21:56.624786 systemd-logind[1447]: Removed session 6. Apr 21 10:21:56.726873 systemd[1]: Started sshd@6-172.234.196.117:22-50.85.169.122:57934.service - OpenSSH per-connection server daemon (50.85.169.122:57934). 
Apr 21 10:21:57.357889 sshd[1679]: Accepted publickey for core from 50.85.169.122 port 57934 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:21:57.358568 sshd[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:21:57.363388 systemd-logind[1447]: New session 7 of user core. Apr 21 10:21:57.367190 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 21 10:21:57.703400 sudo[1682]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 21 10:21:57.703753 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:21:57.980484 (dockerd)[1698]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 21 10:21:57.980550 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 21 10:21:58.235972 dockerd[1698]: time="2026-04-21T10:21:58.235420508Z" level=info msg="Starting up" Apr 21 10:21:58.302300 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2197246453-merged.mount: Deactivated successfully. Apr 21 10:21:58.327667 dockerd[1698]: time="2026-04-21T10:21:58.327624638Z" level=info msg="Loading containers: start." Apr 21 10:21:58.442079 kernel: Initializing XFRM netlink socket Apr 21 10:21:58.529844 systemd-networkd[1382]: docker0: Link UP Apr 21 10:21:58.542458 dockerd[1698]: time="2026-04-21T10:21:58.542405018Z" level=info msg="Loading containers: done." Apr 21 10:21:58.555801 dockerd[1698]: time="2026-04-21T10:21:58.555721848Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 21 10:21:58.555749 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck444213774-merged.mount: Deactivated successfully. 
Apr 21 10:21:58.556263 dockerd[1698]: time="2026-04-21T10:21:58.555802098Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 21 10:21:58.556263 dockerd[1698]: time="2026-04-21T10:21:58.555901168Z" level=info msg="Daemon has completed initialization"
Apr 21 10:21:58.581231 dockerd[1698]: time="2026-04-21T10:21:58.581177398Z" level=info msg="API listen on /run/docker.sock"
Apr 21 10:21:58.581348 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 21 10:21:58.838721 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:21:58.845236 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:21:59.017096 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:21:59.030601 (kubelet)[1846]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:21:59.086974 containerd[1457]: time="2026-04-21T10:21:59.086453278Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\""
Apr 21 10:21:59.087515 kubelet[1846]: E0421 10:21:59.086900 1846 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:21:59.094549 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:21:59.094763 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:21:59.677965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4244477183.mount: Deactivated successfully.
Apr 21 10:22:00.754466 containerd[1457]: time="2026-04-21T10:22:00.754378368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:22:00.755731 containerd[1457]: time="2026-04-21T10:22:00.755570888Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27100520"
Apr 21 10:22:00.756420 containerd[1457]: time="2026-04-21T10:22:00.756344618Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:22:00.760874 containerd[1457]: time="2026-04-21T10:22:00.759244818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:22:00.760874 containerd[1457]: time="2026-04-21T10:22:00.760637128Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 1.67413667s"
Apr 21 10:22:00.760874 containerd[1457]: time="2026-04-21T10:22:00.760700158Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\""
Apr 21 10:22:00.763631 containerd[1457]: time="2026-04-21T10:22:00.763570438Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\""
Apr 21 10:22:02.032230 containerd[1457]: time="2026-04-21T10:22:02.032150848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:22:02.033476 containerd[1457]: time="2026-04-21T10:22:02.033285278Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252744"
Apr 21 10:22:02.034365 containerd[1457]: time="2026-04-21T10:22:02.034030638Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:22:02.042225 containerd[1457]: time="2026-04-21T10:22:02.042161028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:22:02.045121 containerd[1457]: time="2026-04-21T10:22:02.045076388Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 1.28143465s"
Apr 21 10:22:02.045246 containerd[1457]: time="2026-04-21T10:22:02.045220928Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\""
Apr 21 10:22:02.046172 containerd[1457]: time="2026-04-21T10:22:02.046135598Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\""
Apr 21 10:22:03.131617 containerd[1457]: time="2026-04-21T10:22:03.130410108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:22:03.131617 containerd[1457]: time="2026-04-21T10:22:03.131281288Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810897"
Apr 21 10:22:03.131617 containerd[1457]: time="2026-04-21T10:22:03.131568558Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:22:03.137120 containerd[1457]: time="2026-04-21T10:22:03.137086998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:22:03.139348 containerd[1457]: time="2026-04-21T10:22:03.139301698Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 1.09302384s"
Apr 21 10:22:03.139348 containerd[1457]: time="2026-04-21T10:22:03.139337938Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\""
Apr 21 10:22:03.140364 containerd[1457]: time="2026-04-21T10:22:03.140003498Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\""
Apr 21 10:22:04.148133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3492568706.mount: Deactivated successfully.
Apr 21 10:22:04.398088 containerd[1457]: time="2026-04-21T10:22:04.397019738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:22:04.399237 containerd[1457]: time="2026-04-21T10:22:04.398857568Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972960"
Apr 21 10:22:04.400849 containerd[1457]: time="2026-04-21T10:22:04.399801488Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:22:04.402031 containerd[1457]: time="2026-04-21T10:22:04.402009308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:22:04.403193 containerd[1457]: time="2026-04-21T10:22:04.403170938Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 1.26313464s"
Apr 21 10:22:04.403517 containerd[1457]: time="2026-04-21T10:22:04.403486238Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\""
Apr 21 10:22:04.404331 containerd[1457]: time="2026-04-21T10:22:04.404190578Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Apr 21 10:22:04.951527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount801204815.mount: Deactivated successfully.
Apr 21 10:22:05.704668 containerd[1457]: time="2026-04-21T10:22:05.704615888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:22:05.705867 containerd[1457]: time="2026-04-21T10:22:05.705835148Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388013"
Apr 21 10:22:05.706503 containerd[1457]: time="2026-04-21T10:22:05.706454468Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:22:05.709977 containerd[1457]: time="2026-04-21T10:22:05.709025988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:22:05.710788 containerd[1457]: time="2026-04-21T10:22:05.710760668Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.30654509s"
Apr 21 10:22:05.710836 containerd[1457]: time="2026-04-21T10:22:05.710791448Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Apr 21 10:22:05.711563 containerd[1457]: time="2026-04-21T10:22:05.711534728Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 21 10:22:06.229925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1530108715.mount: Deactivated successfully.
Apr 21 10:22:06.234560 containerd[1457]: time="2026-04-21T10:22:06.234480908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:22:06.235197 containerd[1457]: time="2026-04-21T10:22:06.235166078Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321224"
Apr 21 10:22:06.235898 containerd[1457]: time="2026-04-21T10:22:06.235625738Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:22:06.237494 containerd[1457]: time="2026-04-21T10:22:06.237471108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:22:06.238225 containerd[1457]: time="2026-04-21T10:22:06.238195458Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 526.6328ms"
Apr 21 10:22:06.238280 containerd[1457]: time="2026-04-21T10:22:06.238227918Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 21 10:22:06.238822 containerd[1457]: time="2026-04-21T10:22:06.238800368Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Apr 21 10:22:06.771216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1103369994.mount: Deactivated successfully.
Apr 21 10:22:07.497186 containerd[1457]: time="2026-04-21T10:22:07.497115168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:22:07.498711 containerd[1457]: time="2026-04-21T10:22:07.498514838Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874823"
Apr 21 10:22:07.499770 containerd[1457]: time="2026-04-21T10:22:07.499428438Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:22:07.502815 containerd[1457]: time="2026-04-21T10:22:07.502161258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:22:07.503624 containerd[1457]: time="2026-04-21T10:22:07.503592138Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.2647647s"
Apr 21 10:22:07.503659 containerd[1457]: time="2026-04-21T10:22:07.503625538Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Apr 21 10:22:09.345241 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 21 10:22:09.353232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:22:09.529238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:22:09.544353 (kubelet)[2074]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:22:09.629847 kubelet[2074]: E0421 10:22:09.629668 2074 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:22:09.634361 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:22:09.634579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:22:11.472611 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:22:11.477529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:22:11.519787 systemd[1]: Reloading requested from client PID 2089 ('systemctl') (unit session-7.scope)...
Apr 21 10:22:11.519969 systemd[1]: Reloading...
Apr 21 10:22:11.666072 zram_generator::config[2132]: No configuration found.
Apr 21 10:22:11.766529 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:22:11.836703 systemd[1]: Reloading finished in 315 ms.
Apr 21 10:22:11.895271 systemd[1]: kubelet.service: Deactivated successfully.
Apr 21 10:22:11.895502 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:22:11.903311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:22:12.047921 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:22:12.056411 (kubelet)[2185]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 21 10:22:12.092303 kubelet[2185]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 21 10:22:12.094271 kubelet[2185]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 10:22:12.094271 kubelet[2185]: I0421 10:22:12.092553 2185 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 21 10:22:12.462211 kubelet[2185]: I0421 10:22:12.462181 2185 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 21 10:22:12.464113 kubelet[2185]: I0421 10:22:12.462317 2185 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 21 10:22:12.464113 kubelet[2185]: I0421 10:22:12.462348 2185 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 21 10:22:12.464113 kubelet[2185]: I0421 10:22:12.462359 2185 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 21 10:22:12.464113 kubelet[2185]: I0421 10:22:12.462545 2185 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 21 10:22:12.466917 kubelet[2185]: E0421 10:22:12.466888 2185 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.234.196.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.234.196.117:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 21 10:22:12.467273 kubelet[2185]: I0421 10:22:12.467247 2185 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 21 10:22:12.473667 kubelet[2185]: E0421 10:22:12.473507 2185 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 21 10:22:12.473732 kubelet[2185]: I0421 10:22:12.473697 2185 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 21 10:22:12.476884 kubelet[2185]: I0421 10:22:12.476870 2185 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 21 10:22:12.478122 kubelet[2185]: I0421 10:22:12.478088 2185 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 21 10:22:12.478251 kubelet[2185]: I0421 10:22:12.478116 2185 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-196-117","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 21 10:22:12.478251 kubelet[2185]: I0421 10:22:12.478244 2185 topology_manager.go:138] "Creating topology manager with none policy"
Apr 21 10:22:12.478356 kubelet[2185]: I0421 10:22:12.478254 2185 container_manager_linux.go:306] "Creating device plugin manager"
Apr 21 10:22:12.478356 kubelet[2185]: I0421 10:22:12.478340 2185 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 21 10:22:12.479695 kubelet[2185]: I0421 10:22:12.479674 2185 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:22:12.479836 kubelet[2185]: I0421 10:22:12.479820 2185 kubelet.go:475] "Attempting to sync node with API server"
Apr 21 10:22:12.479836 kubelet[2185]: I0421 10:22:12.479835 2185 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 21 10:22:12.479895 kubelet[2185]: I0421 10:22:12.479857 2185 kubelet.go:387] "Adding apiserver pod source"
Apr 21 10:22:12.479895 kubelet[2185]: I0421 10:22:12.479871 2185 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 21 10:22:12.482224 kubelet[2185]: I0421 10:22:12.481713 2185 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 21 10:22:12.482224 kubelet[2185]: I0421 10:22:12.482182 2185 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 21 10:22:12.482224 kubelet[2185]: I0421 10:22:12.482206 2185 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 21 10:22:12.482319 kubelet[2185]: W0421 10:22:12.482295 2185 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 21 10:22:12.485775 kubelet[2185]: I0421 10:22:12.485269 2185 server.go:1262] "Started kubelet"
Apr 21 10:22:12.485775 kubelet[2185]: E0421 10:22:12.485407 2185 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.234.196.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.196.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 21 10:22:12.485775 kubelet[2185]: E0421 10:22:12.485482 2185 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.234.196.117:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-196-117&limit=500&resourceVersion=0\": dial tcp 172.234.196.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 21 10:22:12.486559 kubelet[2185]: I0421 10:22:12.486455 2185 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 21 10:22:12.487843 kubelet[2185]: I0421 10:22:12.487820 2185 server.go:310] "Adding debug handlers to kubelet server"
Apr 21 10:22:12.488999 kubelet[2185]: I0421 10:22:12.488504 2185 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 21 10:22:12.488999 kubelet[2185]: I0421 10:22:12.488550 2185 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 21 10:22:12.488999 kubelet[2185]: I0421 10:22:12.488792 2185 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 21 10:22:12.490623 kubelet[2185]: E0421 10:22:12.488938 2185 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.196.117:6443/api/v1/namespaces/default/events\": dial tcp 172.234.196.117:6443: connect: connection refused" event="&Event{ObjectMeta:{172-234-196-117.18a8581c8371a04c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-196-117,UID:172-234-196-117,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-196-117,},FirstTimestamp:2026-04-21 10:22:12.485251148 +0000 UTC m=+0.425049301,LastTimestamp:2026-04-21 10:22:12.485251148 +0000 UTC m=+0.425049301,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-196-117,}"
Apr 21 10:22:12.492253 kubelet[2185]: I0421 10:22:12.492119 2185 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 21 10:22:12.492290 kubelet[2185]: I0421 10:22:12.492279 2185 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 21 10:22:12.495166 kubelet[2185]: E0421 10:22:12.495140 2185 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 21 10:22:12.495608 kubelet[2185]: E0421 10:22:12.495586 2185 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-234-196-117\" not found"
Apr 21 10:22:12.495652 kubelet[2185]: I0421 10:22:12.495613 2185 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 21 10:22:12.495771 kubelet[2185]: I0421 10:22:12.495750 2185 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 21 10:22:12.495812 kubelet[2185]: I0421 10:22:12.495797 2185 reconciler.go:29] "Reconciler: start to sync state"
Apr 21 10:22:12.496359 kubelet[2185]: I0421 10:22:12.496336 2185 factory.go:223] Registration of the systemd container factory successfully
Apr 21 10:22:12.496421 kubelet[2185]: I0421 10:22:12.496400 2185 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 21 10:22:12.496837 kubelet[2185]: E0421 10:22:12.496811 2185 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.234.196.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.196.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 21 10:22:12.497699 kubelet[2185]: I0421 10:22:12.497675 2185 factory.go:223] Registration of the containerd container factory successfully
Apr 21 10:22:12.506220 kubelet[2185]: E0421 10:22:12.506186 2185 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.196.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-196-117?timeout=10s\": dial tcp 172.234.196.117:6443: connect: connection refused" interval="200ms"
Apr 21 10:22:12.525449 kubelet[2185]: I0421 10:22:12.524919 2185 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 21 10:22:12.526980 kubelet[2185]: I0421 10:22:12.526966 2185 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 21 10:22:12.527062 kubelet[2185]: I0421 10:22:12.527036 2185 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 21 10:22:12.527136 kubelet[2185]: I0421 10:22:12.527125 2185 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 21 10:22:12.527238 kubelet[2185]: E0421 10:22:12.527221 2185 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 10:22:12.535661 kubelet[2185]: E0421 10:22:12.535639 2185 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.234.196.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.196.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 21 10:22:12.542821 kubelet[2185]: I0421 10:22:12.542805 2185 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 21 10:22:12.543105 kubelet[2185]: I0421 10:22:12.543092 2185 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 21 10:22:12.543188 kubelet[2185]: I0421 10:22:12.543179 2185 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:22:12.544817 kubelet[2185]: I0421 10:22:12.544800 2185 policy_none.go:49] "None policy: Start"
Apr 21 10:22:12.544978 kubelet[2185]: I0421 10:22:12.544909 2185 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 21 10:22:12.544978 kubelet[2185]: I0421 10:22:12.544931 2185 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 21 10:22:12.546650 kubelet[2185]: I0421 10:22:12.546122 2185 policy_none.go:47] "Start"
Apr 21 10:22:12.550474 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 21 10:22:12.559666 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 21 10:22:12.575764 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 21 10:22:12.577245 kubelet[2185]: E0421 10:22:12.577227 2185 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 21 10:22:12.577675 kubelet[2185]: I0421 10:22:12.577395 2185 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 21 10:22:12.577675 kubelet[2185]: I0421 10:22:12.577413 2185 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 21 10:22:12.577743 kubelet[2185]: I0421 10:22:12.577717 2185 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 21 10:22:12.578897 kubelet[2185]: E0421 10:22:12.578872 2185 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 21 10:22:12.578951 kubelet[2185]: E0421 10:22:12.578907 2185 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-234-196-117\" not found"
Apr 21 10:22:12.637103 systemd[1]: Created slice kubepods-burstable-pode3c4c82de191bc7931f1feec7ecd45a6.slice - libcontainer container kubepods-burstable-pode3c4c82de191bc7931f1feec7ecd45a6.slice.
Apr 21 10:22:12.650666 kubelet[2185]: E0421 10:22:12.650624 2185 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-196-117\" not found" node="172-234-196-117"
Apr 21 10:22:12.653734 systemd[1]: Created slice kubepods-burstable-pod3d9e9f3ab42df55a653b0068e651519d.slice - libcontainer container kubepods-burstable-pod3d9e9f3ab42df55a653b0068e651519d.slice.
Apr 21 10:22:12.661233 kubelet[2185]: E0421 10:22:12.661207 2185 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-196-117\" not found" node="172-234-196-117" Apr 21 10:22:12.663773 systemd[1]: Created slice kubepods-burstable-pod29b35efc9cb6bd5f6cd2a0d9aa359a73.slice - libcontainer container kubepods-burstable-pod29b35efc9cb6bd5f6cd2a0d9aa359a73.slice. Apr 21 10:22:12.665341 kubelet[2185]: E0421 10:22:12.665326 2185 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-196-117\" not found" node="172-234-196-117" Apr 21 10:22:12.679460 kubelet[2185]: I0421 10:22:12.679444 2185 kubelet_node_status.go:75] "Attempting to register node" node="172-234-196-117" Apr 21 10:22:12.679759 kubelet[2185]: E0421 10:22:12.679729 2185 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.196.117:6443/api/v1/nodes\": dial tcp 172.234.196.117:6443: connect: connection refused" node="172-234-196-117" Apr 21 10:22:12.710501 kubelet[2185]: E0421 10:22:12.710472 2185 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.196.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-196-117?timeout=10s\": dial tcp 172.234.196.117:6443: connect: connection refused" interval="400ms" Apr 21 10:22:12.796856 kubelet[2185]: I0421 10:22:12.796725 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3d9e9f3ab42df55a653b0068e651519d-flexvolume-dir\") pod \"kube-controller-manager-172-234-196-117\" (UID: \"3d9e9f3ab42df55a653b0068e651519d\") " pod="kube-system/kube-controller-manager-172-234-196-117" Apr 21 10:22:12.796856 kubelet[2185]: I0421 10:22:12.796772 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3d9e9f3ab42df55a653b0068e651519d-k8s-certs\") pod \"kube-controller-manager-172-234-196-117\" (UID: \"3d9e9f3ab42df55a653b0068e651519d\") " pod="kube-system/kube-controller-manager-172-234-196-117" Apr 21 10:22:12.796856 kubelet[2185]: I0421 10:22:12.796793 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3d9e9f3ab42df55a653b0068e651519d-kubeconfig\") pod \"kube-controller-manager-172-234-196-117\" (UID: \"3d9e9f3ab42df55a653b0068e651519d\") " pod="kube-system/kube-controller-manager-172-234-196-117" Apr 21 10:22:12.796856 kubelet[2185]: I0421 10:22:12.796807 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/29b35efc9cb6bd5f6cd2a0d9aa359a73-kubeconfig\") pod \"kube-scheduler-172-234-196-117\" (UID: \"29b35efc9cb6bd5f6cd2a0d9aa359a73\") " pod="kube-system/kube-scheduler-172-234-196-117" Apr 21 10:22:12.796856 kubelet[2185]: I0421 10:22:12.796829 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e3c4c82de191bc7931f1feec7ecd45a6-ca-certs\") pod \"kube-apiserver-172-234-196-117\" (UID: \"e3c4c82de191bc7931f1feec7ecd45a6\") " pod="kube-system/kube-apiserver-172-234-196-117" Apr 21 10:22:12.797045 kubelet[2185]: I0421 10:22:12.796844 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3c4c82de191bc7931f1feec7ecd45a6-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-196-117\" (UID: \"e3c4c82de191bc7931f1feec7ecd45a6\") " pod="kube-system/kube-apiserver-172-234-196-117" Apr 21 10:22:12.797045 kubelet[2185]: I0421 10:22:12.796859 2185 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3d9e9f3ab42df55a653b0068e651519d-ca-certs\") pod \"kube-controller-manager-172-234-196-117\" (UID: \"3d9e9f3ab42df55a653b0068e651519d\") " pod="kube-system/kube-controller-manager-172-234-196-117" Apr 21 10:22:12.797045 kubelet[2185]: I0421 10:22:12.796872 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3d9e9f3ab42df55a653b0068e651519d-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-196-117\" (UID: \"3d9e9f3ab42df55a653b0068e651519d\") " pod="kube-system/kube-controller-manager-172-234-196-117" Apr 21 10:22:12.797045 kubelet[2185]: I0421 10:22:12.796887 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3c4c82de191bc7931f1feec7ecd45a6-k8s-certs\") pod \"kube-apiserver-172-234-196-117\" (UID: \"e3c4c82de191bc7931f1feec7ecd45a6\") " pod="kube-system/kube-apiserver-172-234-196-117" Apr 21 10:22:12.881745 kubelet[2185]: I0421 10:22:12.881718 2185 kubelet_node_status.go:75] "Attempting to register node" node="172-234-196-117" Apr 21 10:22:12.882084 kubelet[2185]: E0421 10:22:12.882043 2185 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.196.117:6443/api/v1/nodes\": dial tcp 172.234.196.117:6443: connect: connection refused" node="172-234-196-117" Apr 21 10:22:12.953984 kubelet[2185]: E0421 10:22:12.953448 2185 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:12.954219 containerd[1457]: time="2026-04-21T10:22:12.954175488Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-172-234-196-117,Uid:e3c4c82de191bc7931f1feec7ecd45a6,Namespace:kube-system,Attempt:0,}" Apr 21 10:22:12.962819 kubelet[2185]: E0421 10:22:12.962640 2185 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:12.963010 containerd[1457]: time="2026-04-21T10:22:12.962982998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-196-117,Uid:3d9e9f3ab42df55a653b0068e651519d,Namespace:kube-system,Attempt:0,}" Apr 21 10:22:12.968016 kubelet[2185]: E0421 10:22:12.967973 2185 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:12.968368 containerd[1457]: time="2026-04-21T10:22:12.968318398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-196-117,Uid:29b35efc9cb6bd5f6cd2a0d9aa359a73,Namespace:kube-system,Attempt:0,}" Apr 21 10:22:13.111840 kubelet[2185]: E0421 10:22:13.111714 2185 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.196.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-196-117?timeout=10s\": dial tcp 172.234.196.117:6443: connect: connection refused" interval="800ms" Apr 21 10:22:13.284619 kubelet[2185]: I0421 10:22:13.284579 2185 kubelet_node_status.go:75] "Attempting to register node" node="172-234-196-117" Apr 21 10:22:13.284949 kubelet[2185]: E0421 10:22:13.284917 2185 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.196.117:6443/api/v1/nodes\": dial tcp 172.234.196.117:6443: connect: connection refused" node="172-234-196-117" Apr 21 10:22:13.350089 kubelet[2185]: E0421 10:22:13.350022 2185 reflector.go:205] "Failed to watch" err="failed to 
list *v1.CSIDriver: Get \"https://172.234.196.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.196.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 21 10:22:13.468912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2008865031.mount: Deactivated successfully. Apr 21 10:22:13.475044 containerd[1457]: time="2026-04-21T10:22:13.475002808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:22:13.476022 containerd[1457]: time="2026-04-21T10:22:13.475979198Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:22:13.477238 containerd[1457]: time="2026-04-21T10:22:13.477036298Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312062" Apr 21 10:22:13.477238 containerd[1457]: time="2026-04-21T10:22:13.477211908Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:22:13.477887 containerd[1457]: time="2026-04-21T10:22:13.477846298Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:22:13.479195 containerd[1457]: time="2026-04-21T10:22:13.478981508Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:22:13.479195 containerd[1457]: time="2026-04-21T10:22:13.479161828Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:22:13.481881 containerd[1457]: time="2026-04-21T10:22:13.481858758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:22:13.483568 containerd[1457]: time="2026-04-21T10:22:13.483537068Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 515.16706ms" Apr 21 10:22:13.486916 containerd[1457]: time="2026-04-21T10:22:13.486892498Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 523.85742ms" Apr 21 10:22:13.496080 containerd[1457]: time="2026-04-21T10:22:13.496017298Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 541.77495ms" Apr 21 10:22:13.586676 kubelet[2185]: E0421 10:22:13.586372 2185 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.234.196.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.196.117:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 21 10:22:13.590249 containerd[1457]: time="2026-04-21T10:22:13.590001718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:13.590249 containerd[1457]: time="2026-04-21T10:22:13.590212518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:13.590414 containerd[1457]: time="2026-04-21T10:22:13.590240168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:13.590414 containerd[1457]: time="2026-04-21T10:22:13.590340768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:13.594179 containerd[1457]: time="2026-04-21T10:22:13.593873428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:13.594179 containerd[1457]: time="2026-04-21T10:22:13.593947318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:13.594179 containerd[1457]: time="2026-04-21T10:22:13.593958438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:13.594179 containerd[1457]: time="2026-04-21T10:22:13.594043368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:13.604500 containerd[1457]: time="2026-04-21T10:22:13.604260608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:13.606461 containerd[1457]: time="2026-04-21T10:22:13.605833238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:13.606638 containerd[1457]: time="2026-04-21T10:22:13.606604498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:13.606907 containerd[1457]: time="2026-04-21T10:22:13.606873798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:13.625763 systemd[1]: Started cri-containerd-11e4dfc6a36c00da16e730fb04d931250d53a51c4c58306952bcc62331ee1315.scope - libcontainer container 11e4dfc6a36c00da16e730fb04d931250d53a51c4c58306952bcc62331ee1315. Apr 21 10:22:13.632255 systemd[1]: Started cri-containerd-1f48b449551bd361eb7ffc5d69c8142bf9ca6becb6acf1839143434c8e03450a.scope - libcontainer container 1f48b449551bd361eb7ffc5d69c8142bf9ca6becb6acf1839143434c8e03450a. Apr 21 10:22:13.651200 systemd[1]: Started cri-containerd-f6f60e6c11eaf60d58349182fde2f9989a549936de3a82f8ec65662d82e7b23f.scope - libcontainer container f6f60e6c11eaf60d58349182fde2f9989a549936de3a82f8ec65662d82e7b23f. 
Apr 21 10:22:13.710631 containerd[1457]: time="2026-04-21T10:22:13.710426088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-196-117,Uid:3d9e9f3ab42df55a653b0068e651519d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f48b449551bd361eb7ffc5d69c8142bf9ca6becb6acf1839143434c8e03450a\"" Apr 21 10:22:13.712267 kubelet[2185]: E0421 10:22:13.711827 2185 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:13.715123 containerd[1457]: time="2026-04-21T10:22:13.715099508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-196-117,Uid:29b35efc9cb6bd5f6cd2a0d9aa359a73,Namespace:kube-system,Attempt:0,} returns sandbox id \"11e4dfc6a36c00da16e730fb04d931250d53a51c4c58306952bcc62331ee1315\"" Apr 21 10:22:13.716977 kubelet[2185]: E0421 10:22:13.716848 2185 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:13.722571 containerd[1457]: time="2026-04-21T10:22:13.722420608Z" level=info msg="CreateContainer within sandbox \"1f48b449551bd361eb7ffc5d69c8142bf9ca6becb6acf1839143434c8e03450a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 21 10:22:13.724290 containerd[1457]: time="2026-04-21T10:22:13.724270108Z" level=info msg="CreateContainer within sandbox \"11e4dfc6a36c00da16e730fb04d931250d53a51c4c58306952bcc62331ee1315\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 21 10:22:13.727120 containerd[1457]: time="2026-04-21T10:22:13.726995598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-196-117,Uid:e3c4c82de191bc7931f1feec7ecd45a6,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"f6f60e6c11eaf60d58349182fde2f9989a549936de3a82f8ec65662d82e7b23f\"" Apr 21 10:22:13.727470 kubelet[2185]: E0421 10:22:13.727420 2185 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:13.730948 containerd[1457]: time="2026-04-21T10:22:13.730926548Z" level=info msg="CreateContainer within sandbox \"f6f60e6c11eaf60d58349182fde2f9989a549936de3a82f8ec65662d82e7b23f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 21 10:22:13.735539 containerd[1457]: time="2026-04-21T10:22:13.735456698Z" level=info msg="CreateContainer within sandbox \"1f48b449551bd361eb7ffc5d69c8142bf9ca6becb6acf1839143434c8e03450a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c991797ba4bfe98de9356c53e79f6f6d49b0f38215b1b1b988daf4fb409e1f9d\"" Apr 21 10:22:13.736275 containerd[1457]: time="2026-04-21T10:22:13.736246818Z" level=info msg="StartContainer for \"c991797ba4bfe98de9356c53e79f6f6d49b0f38215b1b1b988daf4fb409e1f9d\"" Apr 21 10:22:13.743367 containerd[1457]: time="2026-04-21T10:22:13.743296558Z" level=info msg="CreateContainer within sandbox \"11e4dfc6a36c00da16e730fb04d931250d53a51c4c58306952bcc62331ee1315\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4342d99d802d57eb73c86b069d084e2b1bdb291e7f3e6c04eb47f754edf8dc9e\"" Apr 21 10:22:13.743787 containerd[1457]: time="2026-04-21T10:22:13.743769188Z" level=info msg="StartContainer for \"4342d99d802d57eb73c86b069d084e2b1bdb291e7f3e6c04eb47f754edf8dc9e\"" Apr 21 10:22:13.752231 containerd[1457]: time="2026-04-21T10:22:13.752183298Z" level=info msg="CreateContainer within sandbox \"f6f60e6c11eaf60d58349182fde2f9989a549936de3a82f8ec65662d82e7b23f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4d034b6dfb3c77bfbe5f7228af8372a022cc02da8e55e282699f33c3232b08c5\"" Apr 21 
10:22:13.753689 containerd[1457]: time="2026-04-21T10:22:13.752753828Z" level=info msg="StartContainer for \"4d034b6dfb3c77bfbe5f7228af8372a022cc02da8e55e282699f33c3232b08c5\"" Apr 21 10:22:13.778389 systemd[1]: Started cri-containerd-c991797ba4bfe98de9356c53e79f6f6d49b0f38215b1b1b988daf4fb409e1f9d.scope - libcontainer container c991797ba4bfe98de9356c53e79f6f6d49b0f38215b1b1b988daf4fb409e1f9d. Apr 21 10:22:13.792195 systemd[1]: Started cri-containerd-4342d99d802d57eb73c86b069d084e2b1bdb291e7f3e6c04eb47f754edf8dc9e.scope - libcontainer container 4342d99d802d57eb73c86b069d084e2b1bdb291e7f3e6c04eb47f754edf8dc9e. Apr 21 10:22:13.800599 systemd[1]: Started cri-containerd-4d034b6dfb3c77bfbe5f7228af8372a022cc02da8e55e282699f33c3232b08c5.scope - libcontainer container 4d034b6dfb3c77bfbe5f7228af8372a022cc02da8e55e282699f33c3232b08c5. Apr 21 10:22:13.855642 containerd[1457]: time="2026-04-21T10:22:13.855544908Z" level=info msg="StartContainer for \"c991797ba4bfe98de9356c53e79f6f6d49b0f38215b1b1b988daf4fb409e1f9d\" returns successfully" Apr 21 10:22:13.860637 containerd[1457]: time="2026-04-21T10:22:13.860551488Z" level=info msg="StartContainer for \"4d034b6dfb3c77bfbe5f7228af8372a022cc02da8e55e282699f33c3232b08c5\" returns successfully" Apr 21 10:22:13.911580 kubelet[2185]: E0421 10:22:13.911525 2185 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.234.196.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.196.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 10:22:13.912365 containerd[1457]: time="2026-04-21T10:22:13.912271068Z" level=info msg="StartContainer for \"4342d99d802d57eb73c86b069d084e2b1bdb291e7f3e6c04eb47f754edf8dc9e\" returns successfully" Apr 21 10:22:13.913481 kubelet[2185]: E0421 10:22:13.913453 2185 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://172.234.196.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-196-117?timeout=10s\": dial tcp 172.234.196.117:6443: connect: connection refused" interval="1.6s" Apr 21 10:22:14.088432 kubelet[2185]: I0421 10:22:14.088312 2185 kubelet_node_status.go:75] "Attempting to register node" node="172-234-196-117" Apr 21 10:22:14.551514 kubelet[2185]: E0421 10:22:14.551462 2185 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-196-117\" not found" node="172-234-196-117" Apr 21 10:22:14.552077 kubelet[2185]: E0421 10:22:14.551607 2185 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:14.552077 kubelet[2185]: E0421 10:22:14.551865 2185 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-196-117\" not found" node="172-234-196-117" Apr 21 10:22:14.552077 kubelet[2185]: E0421 10:22:14.551947 2185 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:14.555857 kubelet[2185]: E0421 10:22:14.555825 2185 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-196-117\" not found" node="172-234-196-117" Apr 21 10:22:14.555958 kubelet[2185]: E0421 10:22:14.555930 2185 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:15.267856 kubelet[2185]: I0421 10:22:15.266647 2185 kubelet_node_status.go:78] "Successfully registered node" node="172-234-196-117" Apr 21 10:22:15.304334 kubelet[2185]: I0421 10:22:15.304278 
2185 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-196-117" Apr 21 10:22:15.328077 kubelet[2185]: E0421 10:22:15.325757 2185 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-234-196-117\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-234-196-117" Apr 21 10:22:15.328077 kubelet[2185]: I0421 10:22:15.325814 2185 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-196-117" Apr 21 10:22:15.331437 kubelet[2185]: E0421 10:22:15.331194 2185 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-196-117\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-234-196-117" Apr 21 10:22:15.331437 kubelet[2185]: I0421 10:22:15.331432 2185 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-196-117" Apr 21 10:22:15.335381 kubelet[2185]: E0421 10:22:15.335339 2185 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-196-117\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-234-196-117" Apr 21 10:22:15.482737 kubelet[2185]: I0421 10:22:15.482703 2185 apiserver.go:52] "Watching apiserver" Apr 21 10:22:15.496653 kubelet[2185]: I0421 10:22:15.496619 2185 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 21 10:22:15.556296 kubelet[2185]: I0421 10:22:15.555733 2185 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-196-117" Apr 21 10:22:15.556296 kubelet[2185]: I0421 10:22:15.555932 2185 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-196-117" Apr 21 10:22:15.559166 kubelet[2185]: E0421 10:22:15.558904 2185 
kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-196-117\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-234-196-117" Apr 21 10:22:15.559166 kubelet[2185]: E0421 10:22:15.559075 2185 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:15.559698 kubelet[2185]: E0421 10:22:15.559653 2185 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-196-117\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-234-196-117" Apr 21 10:22:15.559926 kubelet[2185]: E0421 10:22:15.559789 2185 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:16.602908 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 21 10:22:17.018306 systemd[1]: Reloading requested from client PID 2474 ('systemctl') (unit session-7.scope)... Apr 21 10:22:17.018323 systemd[1]: Reloading... Apr 21 10:22:17.140143 zram_generator::config[2520]: No configuration found. Apr 21 10:22:17.246733 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:22:17.332693 systemd[1]: Reloading finished in 313 ms. Apr 21 10:22:17.380777 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:22:17.394140 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 10:22:17.394399 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 21 10:22:17.400542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:22:17.552618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:22:17.563626 (kubelet)[2564]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:22:17.604416 kubelet[2564]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 21 10:22:17.604416 kubelet[2564]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:22:17.604416 kubelet[2564]: I0421 10:22:17.604251 2564 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 10:22:17.613706 kubelet[2564]: I0421 10:22:17.613674 2564 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 21 10:22:17.613706 kubelet[2564]: I0421 10:22:17.613696 2564 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:22:17.613797 kubelet[2564]: I0421 10:22:17.613720 2564 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 21 10:22:17.613797 kubelet[2564]: I0421 10:22:17.613732 2564 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 21 10:22:17.613891 kubelet[2564]: I0421 10:22:17.613866 2564 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 10:22:17.615025 kubelet[2564]: I0421 10:22:17.614999 2564 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 21 10:22:17.618912 kubelet[2564]: I0421 10:22:17.618884 2564 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:22:17.620101 kubelet[2564]: E0421 10:22:17.620066 2564 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 10:22:17.620146 kubelet[2564]: I0421 10:22:17.620107 2564 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 21 10:22:17.624194 kubelet[2564]: I0421 10:22:17.624144 2564 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 21 10:22:17.624460 kubelet[2564]: I0421 10:22:17.624422 2564 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:22:17.624583 kubelet[2564]: I0421 10:22:17.624449 2564 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-196-117","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 10:22:17.624583 kubelet[2564]: I0421 10:22:17.624573 2564 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 
10:22:17.624583 kubelet[2564]: I0421 10:22:17.624581 2564 container_manager_linux.go:306] "Creating device plugin manager" Apr 21 10:22:17.624756 kubelet[2564]: I0421 10:22:17.624601 2564 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 21 10:22:17.624809 kubelet[2564]: I0421 10:22:17.624774 2564 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:22:17.628066 kubelet[2564]: I0421 10:22:17.625002 2564 kubelet.go:475] "Attempting to sync node with API server" Apr 21 10:22:17.628066 kubelet[2564]: I0421 10:22:17.625019 2564 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:22:17.628066 kubelet[2564]: I0421 10:22:17.625069 2564 kubelet.go:387] "Adding apiserver pod source" Apr 21 10:22:17.628066 kubelet[2564]: I0421 10:22:17.625080 2564 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:22:17.641079 kubelet[2564]: I0421 10:22:17.640249 2564 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:22:17.641079 kubelet[2564]: I0421 10:22:17.640885 2564 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:22:17.641079 kubelet[2564]: I0421 10:22:17.640910 2564 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 21 10:22:17.646977 kubelet[2564]: I0421 10:22:17.646879 2564 server.go:1262] "Started kubelet" Apr 21 10:22:17.651598 kubelet[2564]: I0421 10:22:17.650728 2564 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 10:22:17.651598 kubelet[2564]: I0421 10:22:17.650771 2564 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 21 10:22:17.651598 kubelet[2564]: I0421 10:22:17.651016 2564 
server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 10:22:17.651598 kubelet[2564]: I0421 10:22:17.651110 2564 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 10:22:17.652070 kubelet[2564]: I0421 10:22:17.652033 2564 server.go:310] "Adding debug handlers to kubelet server" Apr 21 10:22:17.653733 kubelet[2564]: I0421 10:22:17.653619 2564 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 21 10:22:17.657882 kubelet[2564]: E0421 10:22:17.657841 2564 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 10:22:17.659565 kubelet[2564]: I0421 10:22:17.659536 2564 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 10:22:17.663154 kubelet[2564]: I0421 10:22:17.663129 2564 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 21 10:22:17.663227 kubelet[2564]: I0421 10:22:17.663207 2564 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 21 10:22:17.663354 kubelet[2564]: I0421 10:22:17.663332 2564 reconciler.go:29] "Reconciler: start to sync state" Apr 21 10:22:17.666210 kubelet[2564]: I0421 10:22:17.666168 2564 factory.go:223] Registration of the systemd container factory successfully Apr 21 10:22:17.666640 kubelet[2564]: I0421 10:22:17.666597 2564 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 10:22:17.669927 kubelet[2564]: I0421 10:22:17.669887 2564 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:22:17.674583 kubelet[2564]: I0421 10:22:17.674556 2564 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 21 10:22:17.675991 kubelet[2564]: I0421 10:22:17.675963 2564 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 21 10:22:17.676073 kubelet[2564]: I0421 10:22:17.676063 2564 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 21 10:22:17.676141 kubelet[2564]: I0421 10:22:17.676132 2564 kubelet.go:2428] "Starting kubelet main sync loop" Apr 21 10:22:17.676457 kubelet[2564]: E0421 10:22:17.676440 2564 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:22:17.715439 kubelet[2564]: I0421 10:22:17.715411 2564 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 10:22:17.715595 kubelet[2564]: I0421 10:22:17.715582 2564 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 10:22:17.715679 kubelet[2564]: I0421 10:22:17.715669 2564 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:22:17.715840 kubelet[2564]: I0421 10:22:17.715827 2564 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 21 10:22:17.715905 kubelet[2564]: I0421 10:22:17.715885 2564 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 21 10:22:17.715949 kubelet[2564]: I0421 10:22:17.715941 2564 policy_none.go:49] "None policy: Start" Apr 21 10:22:17.715994 kubelet[2564]: I0421 10:22:17.715985 2564 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 21 10:22:17.716040 kubelet[2564]: I0421 10:22:17.716031 2564 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 21 10:22:17.716198 kubelet[2564]: I0421 10:22:17.716185 2564 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 21 10:22:17.716261 kubelet[2564]: I0421 10:22:17.716252 2564 policy_none.go:47] "Start" Apr 21 10:22:17.720540 kubelet[2564]: E0421 10:22:17.720340 2564 manager.go:513] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:22:17.720753 kubelet[2564]: I0421 10:22:17.720740 2564 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 10:22:17.720826 kubelet[2564]: I0421 10:22:17.720802 2564 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:22:17.722982 kubelet[2564]: I0421 10:22:17.722968 2564 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 10:22:17.724175 kubelet[2564]: E0421 10:22:17.724160 2564 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:22:17.777680 kubelet[2564]: I0421 10:22:17.777653 2564 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-196-117" Apr 21 10:22:17.777846 kubelet[2564]: I0421 10:22:17.777816 2564 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-196-117" Apr 21 10:22:17.778794 kubelet[2564]: I0421 10:22:17.777742 2564 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-196-117" Apr 21 10:22:17.827561 kubelet[2564]: I0421 10:22:17.827535 2564 kubelet_node_status.go:75] "Attempting to register node" node="172-234-196-117" Apr 21 10:22:17.835520 kubelet[2564]: I0421 10:22:17.835492 2564 kubelet_node_status.go:124] "Node was previously registered" node="172-234-196-117" Apr 21 10:22:17.835689 kubelet[2564]: I0421 10:22:17.835653 2564 kubelet_node_status.go:78] "Successfully registered node" node="172-234-196-117" Apr 21 10:22:17.865087 kubelet[2564]: I0421 10:22:17.864863 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3d9e9f3ab42df55a653b0068e651519d-ca-certs\") pod \"kube-controller-manager-172-234-196-117\" (UID: 
\"3d9e9f3ab42df55a653b0068e651519d\") " pod="kube-system/kube-controller-manager-172-234-196-117" Apr 21 10:22:17.865087 kubelet[2564]: I0421 10:22:17.864903 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3d9e9f3ab42df55a653b0068e651519d-flexvolume-dir\") pod \"kube-controller-manager-172-234-196-117\" (UID: \"3d9e9f3ab42df55a653b0068e651519d\") " pod="kube-system/kube-controller-manager-172-234-196-117" Apr 21 10:22:17.865087 kubelet[2564]: I0421 10:22:17.864920 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/29b35efc9cb6bd5f6cd2a0d9aa359a73-kubeconfig\") pod \"kube-scheduler-172-234-196-117\" (UID: \"29b35efc9cb6bd5f6cd2a0d9aa359a73\") " pod="kube-system/kube-scheduler-172-234-196-117" Apr 21 10:22:17.865087 kubelet[2564]: I0421 10:22:17.864937 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3c4c82de191bc7931f1feec7ecd45a6-k8s-certs\") pod \"kube-apiserver-172-234-196-117\" (UID: \"e3c4c82de191bc7931f1feec7ecd45a6\") " pod="kube-system/kube-apiserver-172-234-196-117" Apr 21 10:22:17.865087 kubelet[2564]: I0421 10:22:17.864988 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3c4c82de191bc7931f1feec7ecd45a6-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-196-117\" (UID: \"e3c4c82de191bc7931f1feec7ecd45a6\") " pod="kube-system/kube-apiserver-172-234-196-117" Apr 21 10:22:17.865303 kubelet[2564]: I0421 10:22:17.865018 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3d9e9f3ab42df55a653b0068e651519d-k8s-certs\") pod 
\"kube-controller-manager-172-234-196-117\" (UID: \"3d9e9f3ab42df55a653b0068e651519d\") " pod="kube-system/kube-controller-manager-172-234-196-117" Apr 21 10:22:17.865303 kubelet[2564]: I0421 10:22:17.865042 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3d9e9f3ab42df55a653b0068e651519d-kubeconfig\") pod \"kube-controller-manager-172-234-196-117\" (UID: \"3d9e9f3ab42df55a653b0068e651519d\") " pod="kube-system/kube-controller-manager-172-234-196-117" Apr 21 10:22:17.865303 kubelet[2564]: I0421 10:22:17.865093 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3d9e9f3ab42df55a653b0068e651519d-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-196-117\" (UID: \"3d9e9f3ab42df55a653b0068e651519d\") " pod="kube-system/kube-controller-manager-172-234-196-117" Apr 21 10:22:17.865303 kubelet[2564]: I0421 10:22:17.865111 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e3c4c82de191bc7931f1feec7ecd45a6-ca-certs\") pod \"kube-apiserver-172-234-196-117\" (UID: \"e3c4c82de191bc7931f1feec7ecd45a6\") " pod="kube-system/kube-apiserver-172-234-196-117" Apr 21 10:22:18.085102 kubelet[2564]: E0421 10:22:18.084767 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:18.086492 kubelet[2564]: E0421 10:22:18.086449 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:18.086786 kubelet[2564]: E0421 10:22:18.086751 2564 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:18.628150 kubelet[2564]: I0421 10:22:18.627845 2564 apiserver.go:52] "Watching apiserver" Apr 21 10:22:18.664836 kubelet[2564]: I0421 10:22:18.664190 2564 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 21 10:22:18.696262 kubelet[2564]: I0421 10:22:18.696227 2564 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-196-117" Apr 21 10:22:18.696516 kubelet[2564]: I0421 10:22:18.696493 2564 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-196-117" Apr 21 10:22:18.697167 kubelet[2564]: I0421 10:22:18.697142 2564 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-196-117" Apr 21 10:22:18.705465 kubelet[2564]: E0421 10:22:18.705433 2564 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-196-117\" already exists" pod="kube-system/kube-apiserver-172-234-196-117" Apr 21 10:22:18.705649 kubelet[2564]: E0421 10:22:18.705622 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:18.708378 kubelet[2564]: E0421 10:22:18.708165 2564 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-234-196-117\" already exists" pod="kube-system/kube-controller-manager-172-234-196-117" Apr 21 10:22:18.708378 kubelet[2564]: E0421 10:22:18.708265 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:18.708378 kubelet[2564]: E0421 10:22:18.708319 2564 kubelet.go:3222] "Failed 
creating a mirror pod" err="pods \"kube-scheduler-172-234-196-117\" already exists" pod="kube-system/kube-scheduler-172-234-196-117" Apr 21 10:22:18.708483 kubelet[2564]: E0421 10:22:18.708400 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:18.729830 kubelet[2564]: I0421 10:22:18.729751 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-234-196-117" podStartSLOduration=1.729722678 podStartE2EDuration="1.729722678s" podCreationTimestamp="2026-04-21 10:22:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:22:18.728261608 +0000 UTC m=+1.160407351" watchObservedRunningTime="2026-04-21 10:22:18.729722678 +0000 UTC m=+1.161868421" Apr 21 10:22:18.765034 kubelet[2564]: I0421 10:22:18.764973 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-234-196-117" podStartSLOduration=1.764960518 podStartE2EDuration="1.764960518s" podCreationTimestamp="2026-04-21 10:22:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:22:18.751527078 +0000 UTC m=+1.183672831" watchObservedRunningTime="2026-04-21 10:22:18.764960518 +0000 UTC m=+1.197106261" Apr 21 10:22:18.808550 kubelet[2564]: I0421 10:22:18.808480 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-234-196-117" podStartSLOduration=1.808444788 podStartE2EDuration="1.808444788s" podCreationTimestamp="2026-04-21 10:22:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:22:18.765187818 +0000 UTC 
m=+1.197333571" watchObservedRunningTime="2026-04-21 10:22:18.808444788 +0000 UTC m=+1.240590531" Apr 21 10:22:19.699081 kubelet[2564]: E0421 10:22:19.698217 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:19.699081 kubelet[2564]: E0421 10:22:19.698231 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:19.699081 kubelet[2564]: E0421 10:22:19.698710 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:20.960357 kubelet[2564]: E0421 10:22:20.960294 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:22.493038 kubelet[2564]: E0421 10:22:22.492963 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:23.007986 kubelet[2564]: I0421 10:22:23.007912 2564 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 21 10:22:23.008320 containerd[1457]: time="2026-04-21T10:22:23.008286393Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 21 10:22:23.008650 kubelet[2564]: I0421 10:22:23.008453 2564 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 10:22:24.095943 systemd[1]: Created slice kubepods-besteffort-podd3b4d844_5866_4091_9f50_4aa794a0b190.slice - libcontainer container kubepods-besteffort-podd3b4d844_5866_4091_9f50_4aa794a0b190.slice. Apr 21 10:22:24.104283 kubelet[2564]: I0421 10:22:24.104256 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d3b4d844-5866-4091-9f50-4aa794a0b190-kube-proxy\") pod \"kube-proxy-gztkc\" (UID: \"d3b4d844-5866-4091-9f50-4aa794a0b190\") " pod="kube-system/kube-proxy-gztkc" Apr 21 10:22:24.104627 kubelet[2564]: I0421 10:22:24.104293 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3b4d844-5866-4091-9f50-4aa794a0b190-xtables-lock\") pod \"kube-proxy-gztkc\" (UID: \"d3b4d844-5866-4091-9f50-4aa794a0b190\") " pod="kube-system/kube-proxy-gztkc" Apr 21 10:22:24.104627 kubelet[2564]: I0421 10:22:24.104309 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3b4d844-5866-4091-9f50-4aa794a0b190-lib-modules\") pod \"kube-proxy-gztkc\" (UID: \"d3b4d844-5866-4091-9f50-4aa794a0b190\") " pod="kube-system/kube-proxy-gztkc" Apr 21 10:22:24.104627 kubelet[2564]: I0421 10:22:24.104325 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l7c5\" (UniqueName: \"kubernetes.io/projected/d3b4d844-5866-4091-9f50-4aa794a0b190-kube-api-access-2l7c5\") pod \"kube-proxy-gztkc\" (UID: \"d3b4d844-5866-4091-9f50-4aa794a0b190\") " pod="kube-system/kube-proxy-gztkc" Apr 21 10:22:24.223026 systemd[1]: Created slice 
kubepods-besteffort-pod8d26df54_e38b_461d_8ab3_fa608cbf072e.slice - libcontainer container kubepods-besteffort-pod8d26df54_e38b_461d_8ab3_fa608cbf072e.slice. Apr 21 10:22:24.306252 kubelet[2564]: I0421 10:22:24.306188 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl5hl\" (UniqueName: \"kubernetes.io/projected/8d26df54-e38b-461d-8ab3-fa608cbf072e-kube-api-access-hl5hl\") pod \"tigera-operator-5588576f44-shm5w\" (UID: \"8d26df54-e38b-461d-8ab3-fa608cbf072e\") " pod="tigera-operator/tigera-operator-5588576f44-shm5w" Apr 21 10:22:24.306252 kubelet[2564]: I0421 10:22:24.306242 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8d26df54-e38b-461d-8ab3-fa608cbf072e-var-lib-calico\") pod \"tigera-operator-5588576f44-shm5w\" (UID: \"8d26df54-e38b-461d-8ab3-fa608cbf072e\") " pod="tigera-operator/tigera-operator-5588576f44-shm5w" Apr 21 10:22:24.404832 kubelet[2564]: E0421 10:22:24.404799 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:24.405749 containerd[1457]: time="2026-04-21T10:22:24.405411692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gztkc,Uid:d3b4d844-5866-4091-9f50-4aa794a0b190,Namespace:kube-system,Attempt:0,}" Apr 21 10:22:24.431931 containerd[1457]: time="2026-04-21T10:22:24.431819586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:24.431931 containerd[1457]: time="2026-04-21T10:22:24.431878849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:24.431931 containerd[1457]: time="2026-04-21T10:22:24.431892239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:24.432291 containerd[1457]: time="2026-04-21T10:22:24.431969263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:24.450907 systemd[1]: run-containerd-runc-k8s.io-d35141b9c0c38d3f585eae6c322981745ceef91cb2b774f5f5f15a48fb6be978-runc.qtKe2k.mount: Deactivated successfully. Apr 21 10:22:24.463209 systemd[1]: Started cri-containerd-d35141b9c0c38d3f585eae6c322981745ceef91cb2b774f5f5f15a48fb6be978.scope - libcontainer container d35141b9c0c38d3f585eae6c322981745ceef91cb2b774f5f5f15a48fb6be978. Apr 21 10:22:24.487570 containerd[1457]: time="2026-04-21T10:22:24.487472716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gztkc,Uid:d3b4d844-5866-4091-9f50-4aa794a0b190,Namespace:kube-system,Attempt:0,} returns sandbox id \"d35141b9c0c38d3f585eae6c322981745ceef91cb2b774f5f5f15a48fb6be978\"" Apr 21 10:22:24.490758 kubelet[2564]: E0421 10:22:24.490660 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:24.495072 containerd[1457]: time="2026-04-21T10:22:24.494971265Z" level=info msg="CreateContainer within sandbox \"d35141b9c0c38d3f585eae6c322981745ceef91cb2b774f5f5f15a48fb6be978\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 21 10:22:24.507396 containerd[1457]: time="2026-04-21T10:22:24.507305788Z" level=info msg="CreateContainer within sandbox \"d35141b9c0c38d3f585eae6c322981745ceef91cb2b774f5f5f15a48fb6be978\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"871e12b14229ee41ae5de36a7f07ad3b56b4800bb3971462ae194de35cba264c\"" Apr 21 10:22:24.508129 containerd[1457]: time="2026-04-21T10:22:24.507933949Z" level=info msg="StartContainer for \"871e12b14229ee41ae5de36a7f07ad3b56b4800bb3971462ae194de35cba264c\"" Apr 21 10:22:24.528624 containerd[1457]: time="2026-04-21T10:22:24.528557011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-shm5w,Uid:8d26df54-e38b-461d-8ab3-fa608cbf072e,Namespace:tigera-operator,Attempt:0,}" Apr 21 10:22:24.543173 systemd[1]: Started cri-containerd-871e12b14229ee41ae5de36a7f07ad3b56b4800bb3971462ae194de35cba264c.scope - libcontainer container 871e12b14229ee41ae5de36a7f07ad3b56b4800bb3971462ae194de35cba264c. Apr 21 10:22:24.557186 containerd[1457]: time="2026-04-21T10:22:24.556260810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:24.557186 containerd[1457]: time="2026-04-21T10:22:24.557011638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:24.557186 containerd[1457]: time="2026-04-21T10:22:24.557023048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:24.557186 containerd[1457]: time="2026-04-21T10:22:24.557109933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:24.586347 systemd[1]: Started cri-containerd-202f66be91c42bb491aeec997ac269a670521534f7cfb2cce7c6b04e1826c57c.scope - libcontainer container 202f66be91c42bb491aeec997ac269a670521534f7cfb2cce7c6b04e1826c57c. 
Apr 21 10:22:24.593896 containerd[1457]: time="2026-04-21T10:22:24.593863069Z" level=info msg="StartContainer for \"871e12b14229ee41ae5de36a7f07ad3b56b4800bb3971462ae194de35cba264c\" returns successfully" Apr 21 10:22:24.647077 containerd[1457]: time="2026-04-21T10:22:24.646981981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-shm5w,Uid:8d26df54-e38b-461d-8ab3-fa608cbf072e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"202f66be91c42bb491aeec997ac269a670521534f7cfb2cce7c6b04e1826c57c\"" Apr 21 10:22:24.650670 containerd[1457]: time="2026-04-21T10:22:24.650538241Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 21 10:22:24.707927 kubelet[2564]: E0421 10:22:24.707784 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:24.717352 kubelet[2564]: I0421 10:22:24.717277 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gztkc" podStartSLOduration=0.717262691 podStartE2EDuration="717.262691ms" podCreationTimestamp="2026-04-21 10:22:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:22:24.716287321 +0000 UTC m=+7.148433064" watchObservedRunningTime="2026-04-21 10:22:24.717262691 +0000 UTC m=+7.149408434" Apr 21 10:22:25.425552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount42436806.mount: Deactivated successfully. 
Apr 21 10:22:25.989162 kubelet[2564]: E0421 10:22:25.988152 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:26.418955 containerd[1457]: time="2026-04-21T10:22:26.418908784Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:26.420323 containerd[1457]: time="2026-04-21T10:22:26.420096017Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 21 10:22:26.421094 containerd[1457]: time="2026-04-21T10:22:26.420829039Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:26.423038 containerd[1457]: time="2026-04-21T10:22:26.422996646Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:26.424066 containerd[1457]: time="2026-04-21T10:22:26.423712297Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 1.773145995s" Apr 21 10:22:26.424066 containerd[1457]: time="2026-04-21T10:22:26.423928027Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 21 10:22:26.427925 containerd[1457]: time="2026-04-21T10:22:26.427874672Z" level=info msg="CreateContainer within sandbox 
\"202f66be91c42bb491aeec997ac269a670521534f7cfb2cce7c6b04e1826c57c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 21 10:22:26.451343 containerd[1457]: time="2026-04-21T10:22:26.451287531Z" level=info msg="CreateContainer within sandbox \"202f66be91c42bb491aeec997ac269a670521534f7cfb2cce7c6b04e1826c57c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"817204a60c2cb55845da62157a6cd13f1d8dabfbc32cc3d30887c88d0010be30\"" Apr 21 10:22:26.452824 containerd[1457]: time="2026-04-21T10:22:26.452796958Z" level=info msg="StartContainer for \"817204a60c2cb55845da62157a6cd13f1d8dabfbc32cc3d30887c88d0010be30\"" Apr 21 10:22:26.487185 systemd[1]: Started cri-containerd-817204a60c2cb55845da62157a6cd13f1d8dabfbc32cc3d30887c88d0010be30.scope - libcontainer container 817204a60c2cb55845da62157a6cd13f1d8dabfbc32cc3d30887c88d0010be30. Apr 21 10:22:26.515389 containerd[1457]: time="2026-04-21T10:22:26.515261871Z" level=info msg="StartContainer for \"817204a60c2cb55845da62157a6cd13f1d8dabfbc32cc3d30887c88d0010be30\" returns successfully" Apr 21 10:22:26.713547 kubelet[2564]: E0421 10:22:26.713445 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:26.724464 kubelet[2564]: I0421 10:22:26.724214 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-shm5w" podStartSLOduration=0.949033154 podStartE2EDuration="2.724186874s" podCreationTimestamp="2026-04-21 10:22:24 +0000 UTC" firstStartedPulling="2026-04-21 10:22:24.649582903 +0000 UTC m=+7.081728646" lastFinishedPulling="2026-04-21 10:22:26.424736623 +0000 UTC m=+8.856882366" observedRunningTime="2026-04-21 10:22:26.722860175 +0000 UTC m=+9.155005918" watchObservedRunningTime="2026-04-21 10:22:26.724186874 +0000 UTC m=+9.156332617" Apr 21 10:22:30.177483 sudo[1682]: 
pam_unix(sudo:session): session closed for user root Apr 21 10:22:30.281850 sshd[1679]: pam_unix(sshd:session): session closed for user core Apr 21 10:22:30.287504 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit. Apr 21 10:22:30.287844 systemd[1]: sshd@6-172.234.196.117:22-50.85.169.122:57934.service: Deactivated successfully. Apr 21 10:22:30.293349 systemd[1]: session-7.scope: Deactivated successfully. Apr 21 10:22:30.293563 systemd[1]: session-7.scope: Consumed 6.072s CPU time, 159.7M memory peak, 0B memory swap peak. Apr 21 10:22:30.294364 systemd-logind[1447]: Removed session 7. Apr 21 10:22:30.663603 update_engine[1449]: I20260421 10:22:30.662878 1449 update_attempter.cc:509] Updating boot flags... Apr 21 10:22:30.721120 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2962) Apr 21 10:22:30.965998 kubelet[2564]: E0421 10:22:30.965861 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:31.728083 kubelet[2564]: E0421 10:22:31.725979 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:32.498116 kubelet[2564]: E0421 10:22:32.498071 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:33.338862 systemd[1]: Created slice kubepods-besteffort-poda2461b41_a210_454f_adfd_8b2467ffd6b8.slice - libcontainer container kubepods-besteffort-poda2461b41_a210_454f_adfd_8b2467ffd6b8.slice. 
Apr 21 10:22:33.373897 kubelet[2564]: I0421 10:22:33.373866 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a2461b41-a210-454f-adfd-8b2467ffd6b8-typha-certs\") pod \"calico-typha-599dd45b-x7c9w\" (UID: \"a2461b41-a210-454f-adfd-8b2467ffd6b8\") " pod="calico-system/calico-typha-599dd45b-x7c9w" Apr 21 10:22:33.375565 kubelet[2564]: I0421 10:22:33.374026 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdkss\" (UniqueName: \"kubernetes.io/projected/a2461b41-a210-454f-adfd-8b2467ffd6b8-kube-api-access-qdkss\") pod \"calico-typha-599dd45b-x7c9w\" (UID: \"a2461b41-a210-454f-adfd-8b2467ffd6b8\") " pod="calico-system/calico-typha-599dd45b-x7c9w" Apr 21 10:22:33.375565 kubelet[2564]: I0421 10:22:33.374069 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2461b41-a210-454f-adfd-8b2467ffd6b8-tigera-ca-bundle\") pod \"calico-typha-599dd45b-x7c9w\" (UID: \"a2461b41-a210-454f-adfd-8b2467ffd6b8\") " pod="calico-system/calico-typha-599dd45b-x7c9w" Apr 21 10:22:33.471876 systemd[1]: Created slice kubepods-besteffort-pod0d06d640_6197_4c6f_ab12_90d0cf7a9cae.slice - libcontainer container kubepods-besteffort-pod0d06d640_6197_4c6f_ab12_90d0cf7a9cae.slice. 
Apr 21 10:22:33.565394 kubelet[2564]: E0421 10:22:33.565136 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-84zl9" podUID="0efc2a75-1741-47b4-a6a6-a697ca685699"
Apr 21 10:22:33.575625 kubelet[2564]: I0421 10:22:33.575330 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/0d06d640-6197-4c6f-ab12-90d0cf7a9cae-bpffs\") pod \"calico-node-5cd7d\" (UID: \"0d06d640-6197-4c6f-ab12-90d0cf7a9cae\") " pod="calico-system/calico-node-5cd7d"
Apr 21 10:22:33.575625 kubelet[2564]: I0421 10:22:33.575360 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/0d06d640-6197-4c6f-ab12-90d0cf7a9cae-sys-fs\") pod \"calico-node-5cd7d\" (UID: \"0d06d640-6197-4c6f-ab12-90d0cf7a9cae\") " pod="calico-system/calico-node-5cd7d"
Apr 21 10:22:33.575625 kubelet[2564]: I0421 10:22:33.575376 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d06d640-6197-4c6f-ab12-90d0cf7a9cae-xtables-lock\") pod \"calico-node-5cd7d\" (UID: \"0d06d640-6197-4c6f-ab12-90d0cf7a9cae\") " pod="calico-system/calico-node-5cd7d"
Apr 21 10:22:33.575625 kubelet[2564]: I0421 10:22:33.575391 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0d06d640-6197-4c6f-ab12-90d0cf7a9cae-policysync\") pod \"calico-node-5cd7d\" (UID: \"0d06d640-6197-4c6f-ab12-90d0cf7a9cae\") " pod="calico-system/calico-node-5cd7d"
Apr 21 10:22:33.575625 kubelet[2564]: I0421 10:22:33.575404 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d06d640-6197-4c6f-ab12-90d0cf7a9cae-tigera-ca-bundle\") pod \"calico-node-5cd7d\" (UID: \"0d06d640-6197-4c6f-ab12-90d0cf7a9cae\") " pod="calico-system/calico-node-5cd7d"
Apr 21 10:22:33.575804 kubelet[2564]: I0421 10:22:33.575419 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0d06d640-6197-4c6f-ab12-90d0cf7a9cae-flexvol-driver-host\") pod \"calico-node-5cd7d\" (UID: \"0d06d640-6197-4c6f-ab12-90d0cf7a9cae\") " pod="calico-system/calico-node-5cd7d"
Apr 21 10:22:33.575804 kubelet[2564]: I0421 10:22:33.575431 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0d06d640-6197-4c6f-ab12-90d0cf7a9cae-var-lib-calico\") pod \"calico-node-5cd7d\" (UID: \"0d06d640-6197-4c6f-ab12-90d0cf7a9cae\") " pod="calico-system/calico-node-5cd7d"
Apr 21 10:22:33.575804 kubelet[2564]: I0421 10:22:33.575445 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0d06d640-6197-4c6f-ab12-90d0cf7a9cae-cni-bin-dir\") pod \"calico-node-5cd7d\" (UID: \"0d06d640-6197-4c6f-ab12-90d0cf7a9cae\") " pod="calico-system/calico-node-5cd7d"
Apr 21 10:22:33.575804 kubelet[2564]: I0421 10:22:33.575459 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0d06d640-6197-4c6f-ab12-90d0cf7a9cae-cni-log-dir\") pod \"calico-node-5cd7d\" (UID: \"0d06d640-6197-4c6f-ab12-90d0cf7a9cae\") " pod="calico-system/calico-node-5cd7d"
Apr 21 10:22:33.575804 kubelet[2564]: I0421 10:22:33.575473 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/0d06d640-6197-4c6f-ab12-90d0cf7a9cae-nodeproc\") pod \"calico-node-5cd7d\" (UID: \"0d06d640-6197-4c6f-ab12-90d0cf7a9cae\") " pod="calico-system/calico-node-5cd7d"
Apr 21 10:22:33.575903 kubelet[2564]: I0421 10:22:33.575486 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5lwj\" (UniqueName: \"kubernetes.io/projected/0d06d640-6197-4c6f-ab12-90d0cf7a9cae-kube-api-access-k5lwj\") pod \"calico-node-5cd7d\" (UID: \"0d06d640-6197-4c6f-ab12-90d0cf7a9cae\") " pod="calico-system/calico-node-5cd7d"
Apr 21 10:22:33.575903 kubelet[2564]: I0421 10:22:33.575500 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0d06d640-6197-4c6f-ab12-90d0cf7a9cae-cni-net-dir\") pod \"calico-node-5cd7d\" (UID: \"0d06d640-6197-4c6f-ab12-90d0cf7a9cae\") " pod="calico-system/calico-node-5cd7d"
Apr 21 10:22:33.575903 kubelet[2564]: I0421 10:22:33.575512 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d06d640-6197-4c6f-ab12-90d0cf7a9cae-lib-modules\") pod \"calico-node-5cd7d\" (UID: \"0d06d640-6197-4c6f-ab12-90d0cf7a9cae\") " pod="calico-system/calico-node-5cd7d"
Apr 21 10:22:33.575903 kubelet[2564]: I0421 10:22:33.575525 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0d06d640-6197-4c6f-ab12-90d0cf7a9cae-node-certs\") pod \"calico-node-5cd7d\" (UID: \"0d06d640-6197-4c6f-ab12-90d0cf7a9cae\") " pod="calico-system/calico-node-5cd7d"
Apr 21 10:22:33.575903 kubelet[2564]: I0421 10:22:33.575537 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0d06d640-6197-4c6f-ab12-90d0cf7a9cae-var-run-calico\") pod \"calico-node-5cd7d\" (UID: \"0d06d640-6197-4c6f-ab12-90d0cf7a9cae\") " pod="calico-system/calico-node-5cd7d"
Apr 21 10:22:33.647081 kubelet[2564]: E0421 10:22:33.646880 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Apr 21 10:22:33.648210 containerd[1457]: time="2026-04-21T10:22:33.648173087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-599dd45b-x7c9w,Uid:a2461b41-a210-454f-adfd-8b2467ffd6b8,Namespace:calico-system,Attempt:0,}"
Apr 21 10:22:33.669609 containerd[1457]: time="2026-04-21T10:22:33.669495549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:22:33.670407 containerd[1457]: time="2026-04-21T10:22:33.670276251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:22:33.670407 containerd[1457]: time="2026-04-21T10:22:33.670324773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:22:33.671090 containerd[1457]: time="2026-04-21T10:22:33.670610581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:22:33.680037 kubelet[2564]: I0421 10:22:33.677239 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0efc2a75-1741-47b4-a6a6-a697ca685699-kubelet-dir\") pod \"csi-node-driver-84zl9\" (UID: \"0efc2a75-1741-47b4-a6a6-a697ca685699\") " pod="calico-system/csi-node-driver-84zl9"
Apr 21 10:22:33.680037 kubelet[2564]: I0421 10:22:33.677285 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0efc2a75-1741-47b4-a6a6-a697ca685699-registration-dir\") pod \"csi-node-driver-84zl9\" (UID: \"0efc2a75-1741-47b4-a6a6-a697ca685699\") " pod="calico-system/csi-node-driver-84zl9"
Apr 21 10:22:33.680037 kubelet[2564]: I0421 10:22:33.677307 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0efc2a75-1741-47b4-a6a6-a697ca685699-varrun\") pod \"csi-node-driver-84zl9\" (UID: \"0efc2a75-1741-47b4-a6a6-a697ca685699\") " pod="calico-system/csi-node-driver-84zl9"
Apr 21 10:22:33.680037 kubelet[2564]: I0421 10:22:33.677334 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0efc2a75-1741-47b4-a6a6-a697ca685699-socket-dir\") pod \"csi-node-driver-84zl9\" (UID: \"0efc2a75-1741-47b4-a6a6-a697ca685699\") " pod="calico-system/csi-node-driver-84zl9"
Apr 21 10:22:33.680037 kubelet[2564]: I0421 10:22:33.677371 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5k29\" (UniqueName: \"kubernetes.io/projected/0efc2a75-1741-47b4-a6a6-a697ca685699-kube-api-access-n5k29\") pod \"csi-node-driver-84zl9\" (UID: \"0efc2a75-1741-47b4-a6a6-a697ca685699\") " pod="calico-system/csi-node-driver-84zl9"
Apr 21 10:22:33.687671 kubelet[2564]: E0421 10:22:33.687648 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:22:33.687877 kubelet[2564]: W0421 10:22:33.687786 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:22:33.687877 kubelet[2564]: E0421 10:22:33.687815 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:22:33.688621 kubelet[2564]: E0421 10:22:33.688591 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:22:33.690995 kubelet[2564]: W0421 10:22:33.690976 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:22:33.691104 kubelet[2564]: E0421 10:22:33.691091 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 21 10:22:33.707098 kubelet[2564]: E0421 10:22:33.707085 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:22:33.707176 kubelet[2564]: W0421 10:22:33.707163 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:22:33.707226 kubelet[2564]: E0421 10:22:33.707216 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:22:33.707227 systemd[1]: Started cri-containerd-28c1fafe1f0560cef0b0dbe63f1d39f84d52d33591fee746f7bfbe1df8fb1365.scope - libcontainer container 28c1fafe1f0560cef0b0dbe63f1d39f84d52d33591fee746f7bfbe1df8fb1365.
Apr 21 10:22:33.707543 kubelet[2564]: E0421 10:22:33.707531 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:22:33.707594 kubelet[2564]: W0421 10:22:33.707584 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:22:33.707648 kubelet[2564]: E0421 10:22:33.707637 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 21 10:22:33.761297 containerd[1457]: time="2026-04-21T10:22:33.761252211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-599dd45b-x7c9w,Uid:a2461b41-a210-454f-adfd-8b2467ffd6b8,Namespace:calico-system,Attempt:0,} returns sandbox id \"28c1fafe1f0560cef0b0dbe63f1d39f84d52d33591fee746f7bfbe1df8fb1365\""
Apr 21 10:22:33.762554 kubelet[2564]: E0421 10:22:33.762521 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Apr 21 10:22:33.763733 containerd[1457]: time="2026-04-21T10:22:33.763710881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Apr 21 10:22:33.778782 kubelet[2564]: E0421 10:22:33.778758 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:22:33.778884 kubelet[2564]: W0421 10:22:33.778871 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:22:33.778967 kubelet[2564]: E0421 10:22:33.778953 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:22:33.779681 containerd[1457]: time="2026-04-21T10:22:33.779315922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5cd7d,Uid:0d06d640-6197-4c6f-ab12-90d0cf7a9cae,Namespace:calico-system,Attempt:0,}"
Apr 21 10:22:33.780340 kubelet[2564]: E0421 10:22:33.780326 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:22:33.780490 kubelet[2564]: W0421 10:22:33.780456 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:22:33.780800 kubelet[2564]: E0421 10:22:33.780670 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:22:33.781314 kubelet[2564]: E0421 10:22:33.781103 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:22:33.781314 kubelet[2564]: W0421 10:22:33.781117 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:22:33.781314 kubelet[2564]: E0421 10:22:33.781127 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 21 10:22:33.785131 kubelet[2564]: E0421 10:22:33.784872 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:33.785131 kubelet[2564]: W0421 10:22:33.784883 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:33.785131 kubelet[2564]: E0421 10:22:33.784936 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:33.787331 kubelet[2564]: E0421 10:22:33.786392 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:33.787331 kubelet[2564]: W0421 10:22:33.786404 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:33.787331 kubelet[2564]: E0421 10:22:33.786415 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:33.787830 kubelet[2564]: E0421 10:22:33.787511 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:33.787830 kubelet[2564]: W0421 10:22:33.787523 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:33.787830 kubelet[2564]: E0421 10:22:33.787532 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:33.787981 kubelet[2564]: E0421 10:22:33.787968 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:33.788152 kubelet[2564]: W0421 10:22:33.788128 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:33.788292 kubelet[2564]: E0421 10:22:33.788236 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:33.788778 kubelet[2564]: E0421 10:22:33.788658 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:33.788778 kubelet[2564]: W0421 10:22:33.788669 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:33.788778 kubelet[2564]: E0421 10:22:33.788678 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:33.789104 kubelet[2564]: E0421 10:22:33.789000 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:33.789104 kubelet[2564]: W0421 10:22:33.789010 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:33.789104 kubelet[2564]: E0421 10:22:33.789019 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:33.790042 kubelet[2564]: E0421 10:22:33.789732 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:33.790042 kubelet[2564]: W0421 10:22:33.789743 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:33.790042 kubelet[2564]: E0421 10:22:33.789752 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:33.790181 kubelet[2564]: E0421 10:22:33.790155 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:33.790181 kubelet[2564]: W0421 10:22:33.790169 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:33.790458 kubelet[2564]: E0421 10:22:33.790188 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:33.790992 kubelet[2564]: E0421 10:22:33.790853 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:33.790992 kubelet[2564]: W0421 10:22:33.790866 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:33.790992 kubelet[2564]: E0421 10:22:33.790875 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:33.791464 kubelet[2564]: E0421 10:22:33.791351 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:33.791464 kubelet[2564]: W0421 10:22:33.791362 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:33.791464 kubelet[2564]: E0421 10:22:33.791371 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:33.791642 kubelet[2564]: E0421 10:22:33.791618 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:33.791642 kubelet[2564]: W0421 10:22:33.791626 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:33.791642 kubelet[2564]: E0421 10:22:33.791634 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:33.792455 kubelet[2564]: E0421 10:22:33.791977 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:33.792455 kubelet[2564]: W0421 10:22:33.791986 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:33.792455 kubelet[2564]: E0421 10:22:33.791994 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:33.793039 kubelet[2564]: E0421 10:22:33.792980 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:33.793039 kubelet[2564]: W0421 10:22:33.792992 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:33.793039 kubelet[2564]: E0421 10:22:33.793003 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:33.793860 kubelet[2564]: E0421 10:22:33.793829 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:33.793860 kubelet[2564]: W0421 10:22:33.793846 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:33.793860 kubelet[2564]: E0421 10:22:33.793856 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:33.794985 kubelet[2564]: E0421 10:22:33.794750 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:33.794985 kubelet[2564]: W0421 10:22:33.794764 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:33.794985 kubelet[2564]: E0421 10:22:33.794774 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:33.796291 kubelet[2564]: E0421 10:22:33.796262 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:33.796291 kubelet[2564]: W0421 10:22:33.796278 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:33.796291 kubelet[2564]: E0421 10:22:33.796287 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:33.802914 kubelet[2564]: E0421 10:22:33.802887 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:33.802914 kubelet[2564]: W0421 10:22:33.802906 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:33.802914 kubelet[2564]: E0421 10:22:33.802917 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:33.809652 containerd[1457]: time="2026-04-21T10:22:33.809558706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:33.809652 containerd[1457]: time="2026-04-21T10:22:33.809612728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:33.809886 containerd[1457]: time="2026-04-21T10:22:33.809760552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:33.810437 containerd[1457]: time="2026-04-21T10:22:33.810177504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:33.837272 systemd[1]: Started cri-containerd-e61fb8779f0a70f2e2d62377bcaeff77fe8553b462510b40a1c495d8a9a45e9d.scope - libcontainer container e61fb8779f0a70f2e2d62377bcaeff77fe8553b462510b40a1c495d8a9a45e9d. 
Apr 21 10:22:33.864240 containerd[1457]: time="2026-04-21T10:22:33.864173909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5cd7d,Uid:0d06d640-6197-4c6f-ab12-90d0cf7a9cae,Namespace:calico-system,Attempt:0,} returns sandbox id \"e61fb8779f0a70f2e2d62377bcaeff77fe8553b462510b40a1c495d8a9a45e9d\"" Apr 21 10:22:34.504435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1500135624.mount: Deactivated successfully. Apr 21 10:22:34.980179 containerd[1457]: time="2026-04-21T10:22:34.980131519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:34.980991 containerd[1457]: time="2026-04-21T10:22:34.980881479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 21 10:22:34.982540 containerd[1457]: time="2026-04-21T10:22:34.981422363Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:34.984008 containerd[1457]: time="2026-04-21T10:22:34.983253062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:34.984008 containerd[1457]: time="2026-04-21T10:22:34.983914379Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.220075975s" Apr 21 10:22:34.984008 containerd[1457]: time="2026-04-21T10:22:34.983941400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns 
image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 21 10:22:34.985537 containerd[1457]: time="2026-04-21T10:22:34.985500701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 21 10:22:35.000355 containerd[1457]: time="2026-04-21T10:22:35.000323154Z" level=info msg="CreateContainer within sandbox \"28c1fafe1f0560cef0b0dbe63f1d39f84d52d33591fee746f7bfbe1df8fb1365\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 21 10:22:35.009564 containerd[1457]: time="2026-04-21T10:22:35.009517473Z" level=info msg="CreateContainer within sandbox \"28c1fafe1f0560cef0b0dbe63f1d39f84d52d33591fee746f7bfbe1df8fb1365\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c51f7af04ccee444e9d38ddeff20ae4b43e9279c807046b34b2f15fe0b36097e\"" Apr 21 10:22:35.010295 containerd[1457]: time="2026-04-21T10:22:35.010254962Z" level=info msg="StartContainer for \"c51f7af04ccee444e9d38ddeff20ae4b43e9279c807046b34b2f15fe0b36097e\"" Apr 21 10:22:35.037199 systemd[1]: Started cri-containerd-c51f7af04ccee444e9d38ddeff20ae4b43e9279c807046b34b2f15fe0b36097e.scope - libcontainer container c51f7af04ccee444e9d38ddeff20ae4b43e9279c807046b34b2f15fe0b36097e. 
Apr 21 10:22:35.081333 containerd[1457]: time="2026-04-21T10:22:35.081295016Z" level=info msg="StartContainer for \"c51f7af04ccee444e9d38ddeff20ae4b43e9279c807046b34b2f15fe0b36097e\" returns successfully" Apr 21 10:22:35.678095 kubelet[2564]: E0421 10:22:35.677705 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-84zl9" podUID="0efc2a75-1741-47b4-a6a6-a697ca685699" Apr 21 10:22:35.697437 containerd[1457]: time="2026-04-21T10:22:35.697373534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:35.698196 containerd[1457]: time="2026-04-21T10:22:35.698159483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 21 10:22:35.701718 containerd[1457]: time="2026-04-21T10:22:35.701666670Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:35.703458 containerd[1457]: time="2026-04-21T10:22:35.703421714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:35.704187 containerd[1457]: time="2026-04-21T10:22:35.704086360Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 718.559518ms" Apr 21 10:22:35.704187 containerd[1457]: time="2026-04-21T10:22:35.704114401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 21 10:22:35.707309 containerd[1457]: time="2026-04-21T10:22:35.707234539Z" level=info msg="CreateContainer within sandbox \"e61fb8779f0a70f2e2d62377bcaeff77fe8553b462510b40a1c495d8a9a45e9d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 21 10:22:35.717005 containerd[1457]: time="2026-04-21T10:22:35.716026657Z" level=info msg="CreateContainer within sandbox \"e61fb8779f0a70f2e2d62377bcaeff77fe8553b462510b40a1c495d8a9a45e9d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"455c55311411cacc8f1140a0948ece557970f260cf4de93dcb80080ff9577b80\"" Apr 21 10:22:35.718622 containerd[1457]: time="2026-04-21T10:22:35.718549560Z" level=info msg="StartContainer for \"455c55311411cacc8f1140a0948ece557970f260cf4de93dcb80080ff9577b80\"" Apr 21 10:22:35.743598 kubelet[2564]: E0421 10:22:35.743552 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:35.766202 systemd[1]: Started cri-containerd-455c55311411cacc8f1140a0948ece557970f260cf4de93dcb80080ff9577b80.scope - libcontainer container 455c55311411cacc8f1140a0948ece557970f260cf4de93dcb80080ff9577b80. 
Apr 21 10:22:35.767611 kubelet[2564]: E0421 10:22:35.767447 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.767611 kubelet[2564]: W0421 10:22:35.767464 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.767611 kubelet[2564]: E0421 10:22:35.767482 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:35.767892 kubelet[2564]: E0421 10:22:35.767872 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.767892 kubelet[2564]: W0421 10:22:35.767887 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.767950 kubelet[2564]: E0421 10:22:35.767896 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:35.768129 kubelet[2564]: E0421 10:22:35.768110 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.768129 kubelet[2564]: W0421 10:22:35.768124 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.768204 kubelet[2564]: E0421 10:22:35.768132 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:35.768426 kubelet[2564]: E0421 10:22:35.768397 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.768468 kubelet[2564]: W0421 10:22:35.768434 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.768468 kubelet[2564]: E0421 10:22:35.768444 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:35.768757 kubelet[2564]: E0421 10:22:35.768715 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.768757 kubelet[2564]: W0421 10:22:35.768727 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.768757 kubelet[2564]: E0421 10:22:35.768736 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:35.769030 kubelet[2564]: E0421 10:22:35.769013 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.769030 kubelet[2564]: W0421 10:22:35.769026 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.769105 kubelet[2564]: E0421 10:22:35.769034 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:35.769321 kubelet[2564]: E0421 10:22:35.769292 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.769321 kubelet[2564]: W0421 10:22:35.769305 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.769382 kubelet[2564]: E0421 10:22:35.769313 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:35.769593 kubelet[2564]: E0421 10:22:35.769578 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.769593 kubelet[2564]: W0421 10:22:35.769591 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.769660 kubelet[2564]: E0421 10:22:35.769599 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:35.769984 kubelet[2564]: E0421 10:22:35.769941 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.769984 kubelet[2564]: W0421 10:22:35.769974 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.769984 kubelet[2564]: E0421 10:22:35.769983 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:35.770284 kubelet[2564]: E0421 10:22:35.770265 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.770284 kubelet[2564]: W0421 10:22:35.770276 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.770284 kubelet[2564]: E0421 10:22:35.770284 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:35.771083 kubelet[2564]: E0421 10:22:35.770542 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.771083 kubelet[2564]: W0421 10:22:35.770551 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.771083 kubelet[2564]: E0421 10:22:35.770560 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:35.771083 kubelet[2564]: E0421 10:22:35.770859 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.771083 kubelet[2564]: W0421 10:22:35.770868 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.771083 kubelet[2564]: E0421 10:22:35.770875 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:35.771253 kubelet[2564]: E0421 10:22:35.771231 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.771281 kubelet[2564]: W0421 10:22:35.771263 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.771281 kubelet[2564]: E0421 10:22:35.771272 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:35.771509 kubelet[2564]: E0421 10:22:35.771491 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.771509 kubelet[2564]: W0421 10:22:35.771503 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.771578 kubelet[2564]: E0421 10:22:35.771529 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:35.771838 kubelet[2564]: E0421 10:22:35.771822 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.771838 kubelet[2564]: W0421 10:22:35.771836 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.771904 kubelet[2564]: E0421 10:22:35.771844 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:35.796473 kubelet[2564]: E0421 10:22:35.796190 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.796473 kubelet[2564]: W0421 10:22:35.796208 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.797389 kubelet[2564]: E0421 10:22:35.797371 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:35.797830 kubelet[2564]: E0421 10:22:35.797785 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.797830 kubelet[2564]: W0421 10:22:35.797800 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.797830 kubelet[2564]: E0421 10:22:35.797811 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:35.799082 containerd[1457]: time="2026-04-21T10:22:35.798979797Z" level=info msg="StartContainer for \"455c55311411cacc8f1140a0948ece557970f260cf4de93dcb80080ff9577b80\" returns successfully" Apr 21 10:22:35.799166 kubelet[2564]: E0421 10:22:35.799027 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.799166 kubelet[2564]: W0421 10:22:35.799036 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.800017 kubelet[2564]: E0421 10:22:35.799997 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:35.801390 kubelet[2564]: E0421 10:22:35.801368 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.801447 kubelet[2564]: W0421 10:22:35.801395 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.801447 kubelet[2564]: E0421 10:22:35.801406 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:35.801788 kubelet[2564]: E0421 10:22:35.801767 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.801788 kubelet[2564]: W0421 10:22:35.801779 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.801788 kubelet[2564]: E0421 10:22:35.801787 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:35.804262 kubelet[2564]: E0421 10:22:35.804224 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.804262 kubelet[2564]: W0421 10:22:35.804239 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.804262 kubelet[2564]: E0421 10:22:35.804251 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:35.804816 kubelet[2564]: E0421 10:22:35.804797 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.804882 kubelet[2564]: W0421 10:22:35.804864 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.804882 kubelet[2564]: E0421 10:22:35.804881 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:35.805857 kubelet[2564]: E0421 10:22:35.805840 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.805905 kubelet[2564]: W0421 10:22:35.805854 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.805937 kubelet[2564]: E0421 10:22:35.805905 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:35.806156 kubelet[2564]: E0421 10:22:35.806137 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.806156 kubelet[2564]: W0421 10:22:35.806152 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.806223 kubelet[2564]: E0421 10:22:35.806161 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:35.806695 kubelet[2564]: E0421 10:22:35.806676 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.806695 kubelet[2564]: W0421 10:22:35.806690 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.806753 kubelet[2564]: E0421 10:22:35.806720 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:35.807153 kubelet[2564]: E0421 10:22:35.807136 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.807153 kubelet[2564]: W0421 10:22:35.807149 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.807237 kubelet[2564]: E0421 10:22:35.807158 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:35.807962 kubelet[2564]: E0421 10:22:35.807717 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.807962 kubelet[2564]: W0421 10:22:35.807788 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.807962 kubelet[2564]: E0421 10:22:35.807797 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:35.808110 kubelet[2564]: E0421 10:22:35.808089 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.808357 kubelet[2564]: W0421 10:22:35.808102 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.808357 kubelet[2564]: E0421 10:22:35.808133 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:35.808960 kubelet[2564]: E0421 10:22:35.808939 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.809024 kubelet[2564]: W0421 10:22:35.809005 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.809085 kubelet[2564]: E0421 10:22:35.809021 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:35.809711 kubelet[2564]: E0421 10:22:35.809686 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.809777 kubelet[2564]: W0421 10:22:35.809759 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.809815 kubelet[2564]: E0421 10:22:35.809799 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:35.810289 kubelet[2564]: E0421 10:22:35.810271 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.810289 kubelet[2564]: W0421 10:22:35.810285 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.810340 kubelet[2564]: E0421 10:22:35.810293 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:35.810743 kubelet[2564]: E0421 10:22:35.810720 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.810743 kubelet[2564]: W0421 10:22:35.810735 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.810743 kubelet[2564]: E0421 10:22:35.810743 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:35.810984 kubelet[2564]: E0421 10:22:35.810961 2564 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:35.810984 kubelet[2564]: W0421 10:22:35.810977 2564 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:35.811103 kubelet[2564]: E0421 10:22:35.810986 2564 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:35.817191 systemd[1]: cri-containerd-455c55311411cacc8f1140a0948ece557970f260cf4de93dcb80080ff9577b80.scope: Deactivated successfully. Apr 21 10:22:35.840638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-455c55311411cacc8f1140a0948ece557970f260cf4de93dcb80080ff9577b80-rootfs.mount: Deactivated successfully. 
Apr 21 10:22:35.970603 containerd[1457]: time="2026-04-21T10:22:35.968634080Z" level=info msg="shim disconnected" id=455c55311411cacc8f1140a0948ece557970f260cf4de93dcb80080ff9577b80 namespace=k8s.io Apr 21 10:22:35.970603 containerd[1457]: time="2026-04-21T10:22:35.968703761Z" level=warning msg="cleaning up after shim disconnected" id=455c55311411cacc8f1140a0948ece557970f260cf4de93dcb80080ff9577b80 namespace=k8s.io Apr 21 10:22:35.970603 containerd[1457]: time="2026-04-21T10:22:35.968717112Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:22:36.745854 kubelet[2564]: I0421 10:22:36.745793 2564 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:22:36.746443 kubelet[2564]: E0421 10:22:36.746100 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:36.747587 containerd[1457]: time="2026-04-21T10:22:36.747273866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 21 10:22:36.762541 kubelet[2564]: I0421 10:22:36.761742 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-599dd45b-x7c9w" podStartSLOduration=2.540117897 podStartE2EDuration="3.761722073s" podCreationTimestamp="2026-04-21 10:22:33 +0000 UTC" firstStartedPulling="2026-04-21 10:22:33.763246858 +0000 UTC m=+16.195392621" lastFinishedPulling="2026-04-21 10:22:34.984851054 +0000 UTC m=+17.416996797" observedRunningTime="2026-04-21 10:22:35.761173268 +0000 UTC m=+18.193319021" watchObservedRunningTime="2026-04-21 10:22:36.761722073 +0000 UTC m=+19.193867826" Apr 21 10:22:37.678077 kubelet[2564]: E0421 10:22:37.677991 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-84zl9" podUID="0efc2a75-1741-47b4-a6a6-a697ca685699" Apr 21 10:22:37.974495 kubelet[2564]: I0421 10:22:37.973240 2564 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:22:37.974495 kubelet[2564]: E0421 10:22:37.973597 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:38.750837 kubelet[2564]: E0421 10:22:38.750809 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:39.679099 kubelet[2564]: E0421 10:22:39.678633 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-84zl9" podUID="0efc2a75-1741-47b4-a6a6-a697ca685699" Apr 21 10:22:40.555694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount625389152.mount: Deactivated successfully. 
Apr 21 10:22:40.592394 containerd[1457]: time="2026-04-21T10:22:40.592289101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:40.593205 containerd[1457]: time="2026-04-21T10:22:40.593160827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 21 10:22:40.594152 containerd[1457]: time="2026-04-21T10:22:40.594100094Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:40.596185 containerd[1457]: time="2026-04-21T10:22:40.596145681Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:40.597728 containerd[1457]: time="2026-04-21T10:22:40.597276801Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 3.849968264s" Apr 21 10:22:40.597728 containerd[1457]: time="2026-04-21T10:22:40.597312492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 21 10:22:40.602377 containerd[1457]: time="2026-04-21T10:22:40.602249080Z" level=info msg="CreateContainer within sandbox \"e61fb8779f0a70f2e2d62377bcaeff77fe8553b462510b40a1c495d8a9a45e9d\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 21 10:22:40.621902 containerd[1457]: time="2026-04-21T10:22:40.621857923Z" level=info 
msg="CreateContainer within sandbox \"e61fb8779f0a70f2e2d62377bcaeff77fe8553b462510b40a1c495d8a9a45e9d\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"da25e235af0d773f92fdc0dd847b1201adfa9befb363f8fc8020b3787ae43a19\"" Apr 21 10:22:40.623434 containerd[1457]: time="2026-04-21T10:22:40.622560736Z" level=info msg="StartContainer for \"da25e235af0d773f92fdc0dd847b1201adfa9befb363f8fc8020b3787ae43a19\"" Apr 21 10:22:40.664433 systemd[1]: run-containerd-runc-k8s.io-da25e235af0d773f92fdc0dd847b1201adfa9befb363f8fc8020b3787ae43a19-runc.4tNgbU.mount: Deactivated successfully. Apr 21 10:22:40.676276 systemd[1]: Started cri-containerd-da25e235af0d773f92fdc0dd847b1201adfa9befb363f8fc8020b3787ae43a19.scope - libcontainer container da25e235af0d773f92fdc0dd847b1201adfa9befb363f8fc8020b3787ae43a19. Apr 21 10:22:40.713836 containerd[1457]: time="2026-04-21T10:22:40.713788976Z" level=info msg="StartContainer for \"da25e235af0d773f92fdc0dd847b1201adfa9befb363f8fc8020b3787ae43a19\" returns successfully" Apr 21 10:22:40.773938 systemd[1]: cri-containerd-da25e235af0d773f92fdc0dd847b1201adfa9befb363f8fc8020b3787ae43a19.scope: Deactivated successfully. 
Apr 21 10:22:40.975107 containerd[1457]: time="2026-04-21T10:22:40.974898672Z" level=info msg="shim disconnected" id=da25e235af0d773f92fdc0dd847b1201adfa9befb363f8fc8020b3787ae43a19 namespace=k8s.io Apr 21 10:22:40.975107 containerd[1457]: time="2026-04-21T10:22:40.974972013Z" level=warning msg="cleaning up after shim disconnected" id=da25e235af0d773f92fdc0dd847b1201adfa9befb363f8fc8020b3787ae43a19 namespace=k8s.io Apr 21 10:22:40.975107 containerd[1457]: time="2026-04-21T10:22:40.974982853Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:22:40.992127 containerd[1457]: time="2026-04-21T10:22:40.991442009Z" level=warning msg="cleanup warnings time=\"2026-04-21T10:22:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 21 10:22:41.552810 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da25e235af0d773f92fdc0dd847b1201adfa9befb363f8fc8020b3787ae43a19-rootfs.mount: Deactivated successfully. 
Apr 21 10:22:41.678470 kubelet[2564]: E0421 10:22:41.677226 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-84zl9" podUID="0efc2a75-1741-47b4-a6a6-a697ca685699" Apr 21 10:22:41.763254 containerd[1457]: time="2026-04-21T10:22:41.763178621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 21 10:22:43.562826 containerd[1457]: time="2026-04-21T10:22:43.562772150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:43.563876 containerd[1457]: time="2026-04-21T10:22:43.563832126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 21 10:22:43.564458 containerd[1457]: time="2026-04-21T10:22:43.564412675Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:43.567074 containerd[1457]: time="2026-04-21T10:22:43.566426784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:43.567991 containerd[1457]: time="2026-04-21T10:22:43.567356478Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 1.804111716s" Apr 21 10:22:43.567991 containerd[1457]: time="2026-04-21T10:22:43.567384979Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 21 10:22:43.572044 containerd[1457]: time="2026-04-21T10:22:43.572013587Z" level=info msg="CreateContainer within sandbox \"e61fb8779f0a70f2e2d62377bcaeff77fe8553b462510b40a1c495d8a9a45e9d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 21 10:22:43.598920 containerd[1457]: time="2026-04-21T10:22:43.598880795Z" level=info msg="CreateContainer within sandbox \"e61fb8779f0a70f2e2d62377bcaeff77fe8553b462510b40a1c495d8a9a45e9d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"da675c697f118fa2073da86623858193d81681d266294e757cb4fda00389b055\"" Apr 21 10:22:43.599673 containerd[1457]: time="2026-04-21T10:22:43.599646557Z" level=info msg="StartContainer for \"da675c697f118fa2073da86623858193d81681d266294e757cb4fda00389b055\"" Apr 21 10:22:43.651189 systemd[1]: Started cri-containerd-da675c697f118fa2073da86623858193d81681d266294e757cb4fda00389b055.scope - libcontainer container da675c697f118fa2073da86623858193d81681d266294e757cb4fda00389b055. 
Apr 21 10:22:43.676885 kubelet[2564]: E0421 10:22:43.676840 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-84zl9" podUID="0efc2a75-1741-47b4-a6a6-a697ca685699" Apr 21 10:22:43.682126 containerd[1457]: time="2026-04-21T10:22:43.681266676Z" level=info msg="StartContainer for \"da675c697f118fa2073da86623858193d81681d266294e757cb4fda00389b055\" returns successfully" Apr 21 10:22:44.178024 containerd[1457]: time="2026-04-21T10:22:44.177846620Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 10:22:44.180268 systemd[1]: cri-containerd-da675c697f118fa2073da86623858193d81681d266294e757cb4fda00389b055.scope: Deactivated successfully. Apr 21 10:22:44.207675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da675c697f118fa2073da86623858193d81681d266294e757cb4fda00389b055-rootfs.mount: Deactivated successfully. 
Apr 21 10:22:44.250146 kubelet[2564]: I0421 10:22:44.249672 2564 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 21 10:22:44.262266 containerd[1457]: time="2026-04-21T10:22:44.261882368Z" level=info msg="shim disconnected" id=da675c697f118fa2073da86623858193d81681d266294e757cb4fda00389b055 namespace=k8s.io Apr 21 10:22:44.262266 containerd[1457]: time="2026-04-21T10:22:44.261929968Z" level=warning msg="cleaning up after shim disconnected" id=da675c697f118fa2073da86623858193d81681d266294e757cb4fda00389b055 namespace=k8s.io Apr 21 10:22:44.262266 containerd[1457]: time="2026-04-21T10:22:44.261939149Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:22:44.300383 systemd[1]: Created slice kubepods-besteffort-podee06722d_90c6_4155_8731_f921f1677bd3.slice - libcontainer container kubepods-besteffort-podee06722d_90c6_4155_8731_f921f1677bd3.slice. Apr 21 10:22:44.321969 systemd[1]: Created slice kubepods-burstable-podbfea7c85_36f8_4c6c_8806_1d5305a3e058.slice - libcontainer container kubepods-burstable-podbfea7c85_36f8_4c6c_8806_1d5305a3e058.slice. Apr 21 10:22:44.335523 systemd[1]: Created slice kubepods-besteffort-pod103eb8aa_b6c6_480d_a370_786769ae65a2.slice - libcontainer container kubepods-besteffort-pod103eb8aa_b6c6_480d_a370_786769ae65a2.slice. Apr 21 10:22:44.346350 systemd[1]: Created slice kubepods-besteffort-pod2a413923_7171_4cd9_86b6_66566674315f.slice - libcontainer container kubepods-besteffort-pod2a413923_7171_4cd9_86b6_66566674315f.slice. Apr 21 10:22:44.358287 systemd[1]: Created slice kubepods-besteffort-pod11bf43e9_9c35_41d9_833a_203a96bf4b43.slice - libcontainer container kubepods-besteffort-pod11bf43e9_9c35_41d9_833a_203a96bf4b43.slice. 
Apr 21 10:22:44.361919 kubelet[2564]: I0421 10:22:44.361878 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4453451-982f-4725-af74-0e6e82cae9ec-config-volume\") pod \"coredns-66bc5c9577-gk6tg\" (UID: \"d4453451-982f-4725-af74-0e6e82cae9ec\") " pod="kube-system/coredns-66bc5c9577-gk6tg" Apr 21 10:22:44.361919 kubelet[2564]: I0421 10:22:44.361912 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cjf2\" (UniqueName: \"kubernetes.io/projected/103eb8aa-b6c6-480d-a370-786769ae65a2-kube-api-access-5cjf2\") pod \"calico-apiserver-7c885d8ccf-njvhb\" (UID: \"103eb8aa-b6c6-480d-a370-786769ae65a2\") " pod="calico-system/calico-apiserver-7c885d8ccf-njvhb" Apr 21 10:22:44.362102 kubelet[2564]: I0421 10:22:44.361931 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a413923-7171-4cd9-86b6-66566674315f-tigera-ca-bundle\") pod \"calico-kube-controllers-5c59f49bff-8gnf6\" (UID: \"2a413923-7171-4cd9-86b6-66566674315f\") " pod="calico-system/calico-kube-controllers-5c59f49bff-8gnf6" Apr 21 10:22:44.362102 kubelet[2564]: I0421 10:22:44.361948 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11bf43e9-9c35-41d9-833a-203a96bf4b43-config\") pod \"goldmane-cccfbd5cf-sp5tn\" (UID: \"11bf43e9-9c35-41d9-833a-203a96bf4b43\") " pod="calico-system/goldmane-cccfbd5cf-sp5tn" Apr 21 10:22:44.362102 kubelet[2564]: I0421 10:22:44.361961 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11bf43e9-9c35-41d9-833a-203a96bf4b43-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-sp5tn\" (UID: 
\"11bf43e9-9c35-41d9-833a-203a96bf4b43\") " pod="calico-system/goldmane-cccfbd5cf-sp5tn" Apr 21 10:22:44.362102 kubelet[2564]: I0421 10:22:44.361976 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rf8c\" (UniqueName: \"kubernetes.io/projected/d4453451-982f-4725-af74-0e6e82cae9ec-kube-api-access-9rf8c\") pod \"coredns-66bc5c9577-gk6tg\" (UID: \"d4453451-982f-4725-af74-0e6e82cae9ec\") " pod="kube-system/coredns-66bc5c9577-gk6tg" Apr 21 10:22:44.362102 kubelet[2564]: I0421 10:22:44.361992 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjg8w\" (UniqueName: \"kubernetes.io/projected/ee06722d-90c6-4155-8731-f921f1677bd3-kube-api-access-qjg8w\") pod \"whisker-cd97785b7-rw8lm\" (UID: \"ee06722d-90c6-4155-8731-f921f1677bd3\") " pod="calico-system/whisker-cd97785b7-rw8lm" Apr 21 10:22:44.362289 kubelet[2564]: I0421 10:22:44.362011 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e20b56d9-ea94-4623-85b6-5cdc1ee6c0b0-calico-apiserver-certs\") pod \"calico-apiserver-7c885d8ccf-6qxjm\" (UID: \"e20b56d9-ea94-4623-85b6-5cdc1ee6c0b0\") " pod="calico-system/calico-apiserver-7c885d8ccf-6qxjm" Apr 21 10:22:44.363535 kubelet[2564]: I0421 10:22:44.362030 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ee06722d-90c6-4155-8731-f921f1677bd3-nginx-config\") pod \"whisker-cd97785b7-rw8lm\" (UID: \"ee06722d-90c6-4155-8731-f921f1677bd3\") " pod="calico-system/whisker-cd97785b7-rw8lm" Apr 21 10:22:44.363582 kubelet[2564]: I0421 10:22:44.363549 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/ee06722d-90c6-4155-8731-f921f1677bd3-whisker-backend-key-pair\") pod \"whisker-cd97785b7-rw8lm\" (UID: \"ee06722d-90c6-4155-8731-f921f1677bd3\") " pod="calico-system/whisker-cd97785b7-rw8lm" Apr 21 10:22:44.363582 kubelet[2564]: I0421 10:22:44.363565 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2m8g\" (UniqueName: \"kubernetes.io/projected/e20b56d9-ea94-4623-85b6-5cdc1ee6c0b0-kube-api-access-x2m8g\") pod \"calico-apiserver-7c885d8ccf-6qxjm\" (UID: \"e20b56d9-ea94-4623-85b6-5cdc1ee6c0b0\") " pod="calico-system/calico-apiserver-7c885d8ccf-6qxjm" Apr 21 10:22:44.363637 kubelet[2564]: I0421 10:22:44.363589 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/103eb8aa-b6c6-480d-a370-786769ae65a2-calico-apiserver-certs\") pod \"calico-apiserver-7c885d8ccf-njvhb\" (UID: \"103eb8aa-b6c6-480d-a370-786769ae65a2\") " pod="calico-system/calico-apiserver-7c885d8ccf-njvhb" Apr 21 10:22:44.363637 kubelet[2564]: I0421 10:22:44.363602 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/11bf43e9-9c35-41d9-833a-203a96bf4b43-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-sp5tn\" (UID: \"11bf43e9-9c35-41d9-833a-203a96bf4b43\") " pod="calico-system/goldmane-cccfbd5cf-sp5tn" Apr 21 10:22:44.363637 kubelet[2564]: I0421 10:22:44.363614 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzlrp\" (UniqueName: \"kubernetes.io/projected/11bf43e9-9c35-41d9-833a-203a96bf4b43-kube-api-access-fzlrp\") pod \"goldmane-cccfbd5cf-sp5tn\" (UID: \"11bf43e9-9c35-41d9-833a-203a96bf4b43\") " pod="calico-system/goldmane-cccfbd5cf-sp5tn" Apr 21 10:22:44.363637 kubelet[2564]: I0421 10:22:44.363633 2564 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfea7c85-36f8-4c6c-8806-1d5305a3e058-config-volume\") pod \"coredns-66bc5c9577-hmkdz\" (UID: \"bfea7c85-36f8-4c6c-8806-1d5305a3e058\") " pod="kube-system/coredns-66bc5c9577-hmkdz" Apr 21 10:22:44.363722 kubelet[2564]: I0421 10:22:44.363648 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht5rp\" (UniqueName: \"kubernetes.io/projected/bfea7c85-36f8-4c6c-8806-1d5305a3e058-kube-api-access-ht5rp\") pod \"coredns-66bc5c9577-hmkdz\" (UID: \"bfea7c85-36f8-4c6c-8806-1d5305a3e058\") " pod="kube-system/coredns-66bc5c9577-hmkdz" Apr 21 10:22:44.363722 kubelet[2564]: I0421 10:22:44.363662 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkcs6\" (UniqueName: \"kubernetes.io/projected/2a413923-7171-4cd9-86b6-66566674315f-kube-api-access-tkcs6\") pod \"calico-kube-controllers-5c59f49bff-8gnf6\" (UID: \"2a413923-7171-4cd9-86b6-66566674315f\") " pod="calico-system/calico-kube-controllers-5c59f49bff-8gnf6" Apr 21 10:22:44.363722 kubelet[2564]: I0421 10:22:44.363679 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee06722d-90c6-4155-8731-f921f1677bd3-whisker-ca-bundle\") pod \"whisker-cd97785b7-rw8lm\" (UID: \"ee06722d-90c6-4155-8731-f921f1677bd3\") " pod="calico-system/whisker-cd97785b7-rw8lm" Apr 21 10:22:44.370310 systemd[1]: Created slice kubepods-burstable-podd4453451_982f_4725_af74_0e6e82cae9ec.slice - libcontainer container kubepods-burstable-podd4453451_982f_4725_af74_0e6e82cae9ec.slice. Apr 21 10:22:44.373936 systemd[1]: Created slice kubepods-besteffort-pode20b56d9_ea94_4623_85b6_5cdc1ee6c0b0.slice - libcontainer container kubepods-besteffort-pode20b56d9_ea94_4623_85b6_5cdc1ee6c0b0.slice. 
Apr 21 10:22:44.620453 containerd[1457]: time="2026-04-21T10:22:44.620417018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cd97785b7-rw8lm,Uid:ee06722d-90c6-4155-8731-f921f1677bd3,Namespace:calico-system,Attempt:0,}"
Apr 21 10:22:44.635921 kubelet[2564]: E0421 10:22:44.635385 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Apr 21 10:22:44.637817 containerd[1457]: time="2026-04-21T10:22:44.637779409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hmkdz,Uid:bfea7c85-36f8-4c6c-8806-1d5305a3e058,Namespace:kube-system,Attempt:0,}"
Apr 21 10:22:44.643719 containerd[1457]: time="2026-04-21T10:22:44.643692711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c885d8ccf-njvhb,Uid:103eb8aa-b6c6-480d-a370-786769ae65a2,Namespace:calico-system,Attempt:0,}"
Apr 21 10:22:44.659642 containerd[1457]: time="2026-04-21T10:22:44.659497931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c59f49bff-8gnf6,Uid:2a413923-7171-4cd9-86b6-66566674315f,Namespace:calico-system,Attempt:0,}"
Apr 21 10:22:44.667466 containerd[1457]: time="2026-04-21T10:22:44.667212318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-sp5tn,Uid:11bf43e9-9c35-41d9-833a-203a96bf4b43,Namespace:calico-system,Attempt:0,}"
Apr 21 10:22:44.678987 kubelet[2564]: E0421 10:22:44.678959 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Apr 21 10:22:44.681463 containerd[1457]: time="2026-04-21T10:22:44.681192492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c885d8ccf-6qxjm,Uid:e20b56d9-ea94-4623-85b6-5cdc1ee6c0b0,Namespace:calico-system,Attempt:0,}"
Apr 21 10:22:44.681794 containerd[1457]: time="2026-04-21T10:22:44.681764250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gk6tg,Uid:d4453451-982f-4725-af74-0e6e82cae9ec,Namespace:kube-system,Attempt:0,}"
Apr 21 10:22:44.814881 containerd[1457]: time="2026-04-21T10:22:44.814839219Z" level=info msg="CreateContainer within sandbox \"e61fb8779f0a70f2e2d62377bcaeff77fe8553b462510b40a1c495d8a9a45e9d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 21 10:22:44.828779 containerd[1457]: time="2026-04-21T10:22:44.828720872Z" level=info msg="CreateContainer within sandbox \"e61fb8779f0a70f2e2d62377bcaeff77fe8553b462510b40a1c495d8a9a45e9d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7df4a98b04cc4a1266211aa8d0bb7da4f652a4c3b622a8f45874dc3b74eb71d3\""
Apr 21 10:22:44.830190 containerd[1457]: time="2026-04-21T10:22:44.830165002Z" level=info msg="StartContainer for \"7df4a98b04cc4a1266211aa8d0bb7da4f652a4c3b622a8f45874dc3b74eb71d3\""
Apr 21 10:22:44.851264 containerd[1457]: time="2026-04-21T10:22:44.851221784Z" level=error msg="Failed to destroy network for sandbox \"a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.851655 containerd[1457]: time="2026-04-21T10:22:44.851586359Z" level=error msg="encountered an error cleaning up failed sandbox \"a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.851655 containerd[1457]: time="2026-04-21T10:22:44.851638900Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cd97785b7-rw8lm,Uid:ee06722d-90c6-4155-8731-f921f1677bd3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.856317 kubelet[2564]: E0421 10:22:44.856272 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.856395 kubelet[2564]: E0421 10:22:44.856358 2564 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-cd97785b7-rw8lm"
Apr 21 10:22:44.856395 kubelet[2564]: E0421 10:22:44.856378 2564 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-cd97785b7-rw8lm"
Apr 21 10:22:44.856481 kubelet[2564]: E0421 10:22:44.856456 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-cd97785b7-rw8lm_calico-system(ee06722d-90c6-4155-8731-f921f1677bd3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-cd97785b7-rw8lm_calico-system(ee06722d-90c6-4155-8731-f921f1677bd3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-cd97785b7-rw8lm" podUID="ee06722d-90c6-4155-8731-f921f1677bd3"
Apr 21 10:22:44.865376 containerd[1457]: time="2026-04-21T10:22:44.865337810Z" level=error msg="Failed to destroy network for sandbox \"dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.866037 containerd[1457]: time="2026-04-21T10:22:44.865661935Z" level=error msg="encountered an error cleaning up failed sandbox \"dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.866037 containerd[1457]: time="2026-04-21T10:22:44.865707625Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c885d8ccf-njvhb,Uid:103eb8aa-b6c6-480d-a370-786769ae65a2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.866824 kubelet[2564]: E0421 10:22:44.866689 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.866824 kubelet[2564]: E0421 10:22:44.866731 2564 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7c885d8ccf-njvhb"
Apr 21 10:22:44.866824 kubelet[2564]: E0421 10:22:44.866748 2564 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7c885d8ccf-njvhb"
Apr 21 10:22:44.867038 kubelet[2564]: E0421 10:22:44.866782 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c885d8ccf-njvhb_calico-system(103eb8aa-b6c6-480d-a370-786769ae65a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c885d8ccf-njvhb_calico-system(103eb8aa-b6c6-480d-a370-786769ae65a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7c885d8ccf-njvhb" podUID="103eb8aa-b6c6-480d-a370-786769ae65a2"
Apr 21 10:22:44.914174 containerd[1457]: time="2026-04-21T10:22:44.913658521Z" level=error msg="Failed to destroy network for sandbox \"d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.914174 containerd[1457]: time="2026-04-21T10:22:44.914035967Z" level=error msg="encountered an error cleaning up failed sandbox \"d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.914174 containerd[1457]: time="2026-04-21T10:22:44.914106798Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-sp5tn,Uid:11bf43e9-9c35-41d9-833a-203a96bf4b43,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.914909 kubelet[2564]: E0421 10:22:44.914870 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.914959 kubelet[2564]: E0421 10:22:44.914924 2564 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-sp5tn"
Apr 21 10:22:44.914959 kubelet[2564]: E0421 10:22:44.914943 2564 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-sp5tn"
Apr 21 10:22:44.915017 kubelet[2564]: E0421 10:22:44.914989 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-sp5tn_calico-system(11bf43e9-9c35-41d9-833a-203a96bf4b43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-sp5tn_calico-system(11bf43e9-9c35-41d9-833a-203a96bf4b43)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-sp5tn" podUID="11bf43e9-9c35-41d9-833a-203a96bf4b43"
Apr 21 10:22:44.920552 containerd[1457]: time="2026-04-21T10:22:44.920517457Z" level=error msg="Failed to destroy network for sandbox \"1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.920906 containerd[1457]: time="2026-04-21T10:22:44.920878902Z" level=error msg="encountered an error cleaning up failed sandbox \"1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.920948 containerd[1457]: time="2026-04-21T10:22:44.920934623Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hmkdz,Uid:bfea7c85-36f8-4c6c-8806-1d5305a3e058,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.921594 kubelet[2564]: E0421 10:22:44.921110 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.921674 kubelet[2564]: E0421 10:22:44.921608 2564 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-hmkdz"
Apr 21 10:22:44.921674 kubelet[2564]: E0421 10:22:44.921628 2564 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-hmkdz"
Apr 21 10:22:44.921758 kubelet[2564]: E0421 10:22:44.921678 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-hmkdz_kube-system(bfea7c85-36f8-4c6c-8806-1d5305a3e058)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-hmkdz_kube-system(bfea7c85-36f8-4c6c-8806-1d5305a3e058)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-hmkdz" podUID="bfea7c85-36f8-4c6c-8806-1d5305a3e058"
Apr 21 10:22:44.942194 systemd[1]: Started cri-containerd-7df4a98b04cc4a1266211aa8d0bb7da4f652a4c3b622a8f45874dc3b74eb71d3.scope - libcontainer container 7df4a98b04cc4a1266211aa8d0bb7da4f652a4c3b622a8f45874dc3b74eb71d3.
Apr 21 10:22:44.960432 containerd[1457]: time="2026-04-21T10:22:44.960373310Z" level=error msg="Failed to destroy network for sandbox \"d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.962302 containerd[1457]: time="2026-04-21T10:22:44.962219616Z" level=error msg="Failed to destroy network for sandbox \"8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.965947 containerd[1457]: time="2026-04-21T10:22:44.965426991Z" level=error msg="encountered an error cleaning up failed sandbox \"d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.965947 containerd[1457]: time="2026-04-21T10:22:44.965484491Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c59f49bff-8gnf6,Uid:2a413923-7171-4cd9-86b6-66566674315f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.966098 kubelet[2564]: E0421 10:22:44.965845 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.966098 kubelet[2564]: E0421 10:22:44.965893 2564 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c59f49bff-8gnf6"
Apr 21 10:22:44.966098 kubelet[2564]: E0421 10:22:44.965912 2564 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c59f49bff-8gnf6"
Apr 21 10:22:44.966302 kubelet[2564]: E0421 10:22:44.965962 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c59f49bff-8gnf6_calico-system(2a413923-7171-4cd9-86b6-66566674315f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c59f49bff-8gnf6_calico-system(2a413923-7171-4cd9-86b6-66566674315f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c59f49bff-8gnf6" podUID="2a413923-7171-4cd9-86b6-66566674315f"
Apr 21 10:22:44.969828 containerd[1457]: time="2026-04-21T10:22:44.969534748Z" level=error msg="encountered an error cleaning up failed sandbox \"8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.969828 containerd[1457]: time="2026-04-21T10:22:44.969689800Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gk6tg,Uid:d4453451-982f-4725-af74-0e6e82cae9ec,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.970160 kubelet[2564]: E0421 10:22:44.970125 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.970160 kubelet[2564]: E0421 10:22:44.970154 2564 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-gk6tg"
Apr 21 10:22:44.970335 kubelet[2564]: E0421 10:22:44.970169 2564 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-gk6tg"
Apr 21 10:22:44.970335 kubelet[2564]: E0421 10:22:44.970198 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-gk6tg_kube-system(d4453451-982f-4725-af74-0e6e82cae9ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-gk6tg_kube-system(d4453451-982f-4725-af74-0e6e82cae9ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-gk6tg" podUID="d4453451-982f-4725-af74-0e6e82cae9ec"
Apr 21 10:22:44.985145 containerd[1457]: time="2026-04-21T10:22:44.985112944Z" level=error msg="Failed to destroy network for sandbox \"993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.985485 containerd[1457]: time="2026-04-21T10:22:44.985454479Z" level=error msg="encountered an error cleaning up failed sandbox \"993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.985514 containerd[1457]: time="2026-04-21T10:22:44.985497439Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c885d8ccf-6qxjm,Uid:e20b56d9-ea94-4623-85b6-5cdc1ee6c0b0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.985701 kubelet[2564]: E0421 10:22:44.985656 2564 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:22:44.985768 kubelet[2564]: E0421 10:22:44.985714 2564 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7c885d8ccf-6qxjm"
Apr 21 10:22:44.985768 kubelet[2564]: E0421 10:22:44.985732 2564 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7c885d8ccf-6qxjm"
Apr 21 10:22:44.985819 kubelet[2564]: E0421 10:22:44.985772 2564 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c885d8ccf-6qxjm_calico-system(e20b56d9-ea94-4623-85b6-5cdc1ee6c0b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c885d8ccf-6qxjm_calico-system(e20b56d9-ea94-4623-85b6-5cdc1ee6c0b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7c885d8ccf-6qxjm" podUID="e20b56d9-ea94-4623-85b6-5cdc1ee6c0b0"
Apr 21 10:22:44.998674 containerd[1457]: time="2026-04-21T10:22:44.998635302Z" level=info msg="StartContainer for \"7df4a98b04cc4a1266211aa8d0bb7da4f652a4c3b622a8f45874dc3b74eb71d3\" returns successfully"
Apr 21 10:22:45.584741 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be-shm.mount: Deactivated successfully.
Apr 21 10:22:45.584852 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a-shm.mount: Deactivated successfully.
Apr 21 10:22:45.584925 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0-shm.mount: Deactivated successfully.
Apr 21 10:22:45.688515 systemd[1]: Created slice kubepods-besteffort-pod0efc2a75_1741_47b4_a6a6_a697ca685699.slice - libcontainer container kubepods-besteffort-pod0efc2a75_1741_47b4_a6a6_a697ca685699.slice.
Apr 21 10:22:45.693297 containerd[1457]: time="2026-04-21T10:22:45.693253290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-84zl9,Uid:0efc2a75-1741-47b4-a6a6-a697ca685699,Namespace:calico-system,Attempt:0,}"
Apr 21 10:22:45.786919 kubelet[2564]: I0421 10:22:45.786879 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e"
Apr 21 10:22:45.790550 containerd[1457]: time="2026-04-21T10:22:45.790300024Z" level=info msg="StopPodSandbox for \"d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e\""
Apr 21 10:22:45.792024 containerd[1457]: time="2026-04-21T10:22:45.791988016Z" level=info msg="Ensure that sandbox d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e in task-service has been cleanup successfully"
Apr 21 10:22:45.792738 kubelet[2564]: I0421 10:22:45.792384 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a"
Apr 21 10:22:45.793879 containerd[1457]: time="2026-04-21T10:22:45.793849661Z" level=info msg="StopPodSandbox for \"1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a\""
Apr 21 10:22:45.794305 containerd[1457]: time="2026-04-21T10:22:45.794275706Z" level=info msg="Ensure that sandbox 1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a in task-service has been cleanup successfully"
Apr 21 10:22:45.800566 kubelet[2564]: I0421 10:22:45.800526 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0"
Apr 21 10:22:45.802164 containerd[1457]: time="2026-04-21T10:22:45.802082128Z" level=info msg="StopPodSandbox for \"a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0\""
Apr 21 10:22:45.802694 containerd[1457]: time="2026-04-21T10:22:45.802638355Z" level=info msg="Ensure that sandbox a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0 in task-service has been cleanup successfully"
Apr 21 10:22:45.830496 kubelet[2564]: I0421 10:22:45.830081 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc"
Apr 21 10:22:45.833356 containerd[1457]: time="2026-04-21T10:22:45.833311714Z" level=info msg="StopPodSandbox for \"8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc\""
Apr 21 10:22:45.833558 containerd[1457]: time="2026-04-21T10:22:45.833519637Z" level=info msg="Ensure that sandbox 8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc in task-service has been cleanup successfully"
Apr 21 10:22:45.836731 kubelet[2564]: I0421 10:22:45.836641 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae"
Apr 21 10:22:45.837736 containerd[1457]: time="2026-04-21T10:22:45.837476079Z" level=info msg="StopPodSandbox for \"993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae\""
Apr 21 10:22:45.837891 containerd[1457]: time="2026-04-21T10:22:45.837811753Z" level=info msg="Ensure that sandbox 993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae in task-service has been cleanup successfully"
Apr 21 10:22:45.844396 kubelet[2564]: I0421 10:22:45.843698 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da"
Apr 21 10:22:45.845408 containerd[1457]: time="2026-04-21T10:22:45.845353451Z" level=info msg="StopPodSandbox for \"d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da\""
Apr 21 10:22:45.848239 kubelet[2564]: I0421 10:22:45.847198 2564 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be"
Apr 21 10:22:45.849094 containerd[1457]: time="2026-04-21T10:22:45.848415771Z" level=info msg="StopPodSandbox for \"dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be\""
Apr 21 10:22:45.849094 containerd[1457]: time="2026-04-21T10:22:45.848807946Z" level=info msg="Ensure that sandbox dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be in task-service has been cleanup successfully"
Apr 21 10:22:45.860737 containerd[1457]: time="2026-04-21T10:22:45.860687281Z" level=info msg="Ensure that sandbox d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da in task-service has been cleanup successfully"
Apr 21 10:22:46.091155 kubelet[2564]: I0421 10:22:46.090258 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5cd7d" podStartSLOduration=3.387445253 podStartE2EDuration="13.090243218s" podCreationTimestamp="2026-04-21 10:22:33 +0000 UTC" firstStartedPulling="2026-04-21 10:22:33.865516037 +0000 UTC m=+16.297661780" lastFinishedPulling="2026-04-21 10:22:43.568314002 +0000 UTC m=+26.000459745" observedRunningTime="2026-04-21 10:22:45.916042562 +0000 UTC m=+28.348188305" watchObservedRunningTime="2026-04-21 10:22:46.090243218 +0000 UTC m=+28.522388961"
Apr 21 10:22:46.144139 systemd-networkd[1382]: cali48dd5b50cb4: Link UP
Apr 21 10:22:46.145188 systemd-networkd[1382]: cali48dd5b50cb4: Gained carrier
Apr 21 10:22:46.240080 containerd[1457]: 2026-04-21 10:22:45.729 [ERROR][3661] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 21 10:22:46.240080 containerd[1457]: 2026-04-21 10:22:45.758 [INFO][3661] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--196--117-k8s-csi--node--driver--84zl9-eth0 csi-node-driver- calico-system 0efc2a75-1741-47b4-a6a6-a697ca685699 743 0 2026-04-21 10:22:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-234-196-117 csi-node-driver-84zl9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali48dd5b50cb4 [] [] }} ContainerID="80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353" Namespace="calico-system" Pod="csi-node-driver-84zl9" WorkloadEndpoint="172--234--196--117-k8s-csi--node--driver--84zl9-"
Apr 21 10:22:46.240080 containerd[1457]: 2026-04-21 10:22:45.759 [INFO][3661] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353" Namespace="calico-system" Pod="csi-node-driver-84zl9" WorkloadEndpoint="172--234--196--117-k8s-csi--node--driver--84zl9-eth0"
Apr 21 10:22:46.240080 containerd[1457]: 2026-04-21 10:22:45.818 [INFO][3674] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353" HandleID="k8s-pod-network.80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353" Workload="172--234--196--117-k8s-csi--node--driver--84zl9-eth0"
Apr 21 10:22:46.240080 containerd[1457]: 2026-04-21 10:22:45.857 [INFO][3674] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353" HandleID="k8s-pod-network.80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353" Workload="172--234--196--117-k8s-csi--node--driver--84zl9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277dc0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-196-117", "pod":"csi-node-driver-84zl9", "timestamp":"2026-04-21 10:22:45.818765725 +0000 UTC"}, Hostname:"172-234-196-117",
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001142c0)} Apr 21 10:22:46.240080 containerd[1457]: 2026-04-21 10:22:45.857 [INFO][3674] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:46.240080 containerd[1457]: 2026-04-21 10:22:45.857 [INFO][3674] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:46.240080 containerd[1457]: 2026-04-21 10:22:45.857 [INFO][3674] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-196-117' Apr 21 10:22:46.240080 containerd[1457]: 2026-04-21 10:22:45.872 [INFO][3674] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353" host="172-234-196-117" Apr 21 10:22:46.240080 containerd[1457]: 2026-04-21 10:22:45.914 [INFO][3674] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-196-117" Apr 21 10:22:46.240080 containerd[1457]: 2026-04-21 10:22:45.950 [INFO][3674] ipam/ipam.go 526: Trying affinity for 192.168.71.128/26 host="172-234-196-117" Apr 21 10:22:46.240080 containerd[1457]: 2026-04-21 10:22:45.966 [INFO][3674] ipam/ipam.go 160: Attempting to load block cidr=192.168.71.128/26 host="172-234-196-117" Apr 21 10:22:46.240080 containerd[1457]: 2026-04-21 10:22:45.987 [INFO][3674] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="172-234-196-117" Apr 21 10:22:46.240080 containerd[1457]: 2026-04-21 10:22:45.987 [INFO][3674] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353" host="172-234-196-117" Apr 21 10:22:46.240080 containerd[1457]: 2026-04-21 10:22:46.003 [INFO][3674] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353 Apr 21 10:22:46.240080 containerd[1457]: 2026-04-21 10:22:46.027 [INFO][3674] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353" host="172-234-196-117" Apr 21 10:22:46.240080 containerd[1457]: 2026-04-21 10:22:46.056 [INFO][3674] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.71.129/26] block=192.168.71.128/26 handle="k8s-pod-network.80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353" host="172-234-196-117" Apr 21 10:22:46.240080 containerd[1457]: 2026-04-21 10:22:46.057 [INFO][3674] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.71.129/26] handle="k8s-pod-network.80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353" host="172-234-196-117" Apr 21 10:22:46.240080 containerd[1457]: 2026-04-21 10:22:46.058 [INFO][3674] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 21 10:22:46.240080 containerd[1457]: 2026-04-21 10:22:46.060 [INFO][3674] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.71.129/26] IPv6=[] ContainerID="80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353" HandleID="k8s-pod-network.80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353" Workload="172--234--196--117-k8s-csi--node--driver--84zl9-eth0" Apr 21 10:22:46.240601 containerd[1457]: 2026-04-21 10:22:46.119 [INFO][3661] cni-plugin/k8s.go 418: Populated endpoint ContainerID="80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353" Namespace="calico-system" Pod="csi-node-driver-84zl9" WorkloadEndpoint="172--234--196--117-k8s-csi--node--driver--84zl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-csi--node--driver--84zl9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0efc2a75-1741-47b4-a6a6-a697ca685699", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"", Pod:"csi-node-driver-84zl9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.71.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali48dd5b50cb4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:46.240601 containerd[1457]: 2026-04-21 10:22:46.119 [INFO][3661] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.129/32] ContainerID="80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353" Namespace="calico-system" Pod="csi-node-driver-84zl9" WorkloadEndpoint="172--234--196--117-k8s-csi--node--driver--84zl9-eth0" Apr 21 10:22:46.240601 containerd[1457]: 2026-04-21 10:22:46.119 [INFO][3661] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48dd5b50cb4 ContainerID="80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353" Namespace="calico-system" Pod="csi-node-driver-84zl9" WorkloadEndpoint="172--234--196--117-k8s-csi--node--driver--84zl9-eth0" Apr 21 10:22:46.240601 containerd[1457]: 2026-04-21 10:22:46.154 [INFO][3661] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353" Namespace="calico-system" Pod="csi-node-driver-84zl9" WorkloadEndpoint="172--234--196--117-k8s-csi--node--driver--84zl9-eth0" Apr 21 10:22:46.240601 containerd[1457]: 2026-04-21 10:22:46.160 [INFO][3661] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353" Namespace="calico-system" Pod="csi-node-driver-84zl9" WorkloadEndpoint="172--234--196--117-k8s-csi--node--driver--84zl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-csi--node--driver--84zl9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0efc2a75-1741-47b4-a6a6-a697ca685699", 
ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353", Pod:"csi-node-driver-84zl9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.71.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali48dd5b50cb4", MAC:"3e:76:6e:65:9a:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:46.240601 containerd[1457]: 2026-04-21 10:22:46.223 [INFO][3661] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353" Namespace="calico-system" Pod="csi-node-driver-84zl9" WorkloadEndpoint="172--234--196--117-k8s-csi--node--driver--84zl9-eth0" Apr 21 10:22:46.315095 containerd[1457]: 2026-04-21 10:22:46.091 [INFO][3708] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" Apr 21 10:22:46.315095 containerd[1457]: 2026-04-21 10:22:46.091 [INFO][3708] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" iface="eth0" netns="/var/run/netns/cni-81ffe68a-eae8-0ece-7f64-795d206a69df" Apr 21 10:22:46.315095 containerd[1457]: 2026-04-21 10:22:46.092 [INFO][3708] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" iface="eth0" netns="/var/run/netns/cni-81ffe68a-eae8-0ece-7f64-795d206a69df" Apr 21 10:22:46.315095 containerd[1457]: 2026-04-21 10:22:46.097 [INFO][3708] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" iface="eth0" netns="/var/run/netns/cni-81ffe68a-eae8-0ece-7f64-795d206a69df" Apr 21 10:22:46.315095 containerd[1457]: 2026-04-21 10:22:46.097 [INFO][3708] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" Apr 21 10:22:46.315095 containerd[1457]: 2026-04-21 10:22:46.097 [INFO][3708] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" Apr 21 10:22:46.315095 containerd[1457]: 2026-04-21 10:22:46.283 [INFO][3782] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" HandleID="k8s-pod-network.d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" Workload="172--234--196--117-k8s-calico--kube--controllers--5c59f49bff--8gnf6-eth0" Apr 21 10:22:46.315095 containerd[1457]: 2026-04-21 10:22:46.283 [INFO][3782] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:46.315095 containerd[1457]: 2026-04-21 10:22:46.283 [INFO][3782] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:22:46.315095 containerd[1457]: 2026-04-21 10:22:46.301 [WARNING][3782] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" HandleID="k8s-pod-network.d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" Workload="172--234--196--117-k8s-calico--kube--controllers--5c59f49bff--8gnf6-eth0" Apr 21 10:22:46.315095 containerd[1457]: 2026-04-21 10:22:46.301 [INFO][3782] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" HandleID="k8s-pod-network.d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" Workload="172--234--196--117-k8s-calico--kube--controllers--5c59f49bff--8gnf6-eth0" Apr 21 10:22:46.315095 containerd[1457]: 2026-04-21 10:22:46.304 [INFO][3782] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:46.315095 containerd[1457]: 2026-04-21 10:22:46.309 [INFO][3708] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" Apr 21 10:22:46.320291 containerd[1457]: time="2026-04-21T10:22:46.320190505Z" level=info msg="TearDown network for sandbox \"d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e\" successfully" Apr 21 10:22:46.320291 containerd[1457]: time="2026-04-21T10:22:46.320229236Z" level=info msg="StopPodSandbox for \"d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e\" returns successfully" Apr 21 10:22:46.322204 systemd[1]: run-netns-cni\x2d81ffe68a\x2deae8\x2d0ece\x2d7f64\x2d795d206a69df.mount: Deactivated successfully. 
Apr 21 10:22:46.326242 containerd[1457]: time="2026-04-21T10:22:46.326205819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c59f49bff-8gnf6,Uid:2a413923-7171-4cd9-86b6-66566674315f,Namespace:calico-system,Attempt:1,}" Apr 21 10:22:46.384359 containerd[1457]: time="2026-04-21T10:22:46.382508036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:46.384359 containerd[1457]: time="2026-04-21T10:22:46.382564567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:46.384359 containerd[1457]: time="2026-04-21T10:22:46.382578247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:46.384359 containerd[1457]: time="2026-04-21T10:22:46.382659448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:46.421757 containerd[1457]: 2026-04-21 10:22:46.178 [INFO][3759] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" Apr 21 10:22:46.421757 containerd[1457]: 2026-04-21 10:22:46.183 [INFO][3759] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" iface="eth0" netns="/var/run/netns/cni-7b720fc7-00ac-6502-b654-35b91440069c" Apr 21 10:22:46.421757 containerd[1457]: 2026-04-21 10:22:46.186 [INFO][3759] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" iface="eth0" netns="/var/run/netns/cni-7b720fc7-00ac-6502-b654-35b91440069c" Apr 21 10:22:46.421757 containerd[1457]: 2026-04-21 10:22:46.191 [INFO][3759] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" iface="eth0" netns="/var/run/netns/cni-7b720fc7-00ac-6502-b654-35b91440069c" Apr 21 10:22:46.421757 containerd[1457]: 2026-04-21 10:22:46.191 [INFO][3759] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" Apr 21 10:22:46.421757 containerd[1457]: 2026-04-21 10:22:46.191 [INFO][3759] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" Apr 21 10:22:46.421757 containerd[1457]: 2026-04-21 10:22:46.347 [INFO][3808] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" HandleID="k8s-pod-network.993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" Workload="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--6qxjm-eth0" Apr 21 10:22:46.421757 containerd[1457]: 2026-04-21 10:22:46.349 [INFO][3808] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:46.421757 containerd[1457]: 2026-04-21 10:22:46.349 [INFO][3808] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:46.421757 containerd[1457]: 2026-04-21 10:22:46.382 [WARNING][3808] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" HandleID="k8s-pod-network.993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" Workload="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--6qxjm-eth0" Apr 21 10:22:46.421757 containerd[1457]: 2026-04-21 10:22:46.382 [INFO][3808] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" HandleID="k8s-pod-network.993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" Workload="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--6qxjm-eth0" Apr 21 10:22:46.421757 containerd[1457]: 2026-04-21 10:22:46.391 [INFO][3808] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:46.421757 containerd[1457]: 2026-04-21 10:22:46.410 [INFO][3759] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" Apr 21 10:22:46.425837 containerd[1457]: time="2026-04-21T10:22:46.425565842Z" level=info msg="TearDown network for sandbox \"993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae\" successfully" Apr 21 10:22:46.425837 containerd[1457]: time="2026-04-21T10:22:46.425610602Z" level=info msg="StopPodSandbox for \"993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae\" returns successfully" Apr 21 10:22:46.445251 containerd[1457]: time="2026-04-21T10:22:46.445193442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c885d8ccf-6qxjm,Uid:e20b56d9-ea94-4623-85b6-5cdc1ee6c0b0,Namespace:calico-system,Attempt:1,}" Apr 21 10:22:46.456459 systemd[1]: Started cri-containerd-80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353.scope - libcontainer container 80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353. 
Apr 21 10:22:46.477654 containerd[1457]: 2026-04-21 10:22:46.137 [INFO][3719] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" Apr 21 10:22:46.477654 containerd[1457]: 2026-04-21 10:22:46.137 [INFO][3719] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" iface="eth0" netns="/var/run/netns/cni-05b0a0e4-02e0-feb7-1b82-e82ec147eb6d" Apr 21 10:22:46.477654 containerd[1457]: 2026-04-21 10:22:46.137 [INFO][3719] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" iface="eth0" netns="/var/run/netns/cni-05b0a0e4-02e0-feb7-1b82-e82ec147eb6d" Apr 21 10:22:46.477654 containerd[1457]: 2026-04-21 10:22:46.139 [INFO][3719] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" iface="eth0" netns="/var/run/netns/cni-05b0a0e4-02e0-feb7-1b82-e82ec147eb6d" Apr 21 10:22:46.477654 containerd[1457]: 2026-04-21 10:22:46.139 [INFO][3719] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" Apr 21 10:22:46.477654 containerd[1457]: 2026-04-21 10:22:46.139 [INFO][3719] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" Apr 21 10:22:46.477654 containerd[1457]: 2026-04-21 10:22:46.357 [INFO][3797] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" HandleID="k8s-pod-network.a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" Workload="172--234--196--117-k8s-whisker--cd97785b7--rw8lm-eth0" Apr 21 10:22:46.477654 containerd[1457]: 2026-04-21 10:22:46.357 [INFO][3797] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:46.477654 containerd[1457]: 2026-04-21 10:22:46.406 [INFO][3797] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:46.477654 containerd[1457]: 2026-04-21 10:22:46.422 [WARNING][3797] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" HandleID="k8s-pod-network.a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" Workload="172--234--196--117-k8s-whisker--cd97785b7--rw8lm-eth0" Apr 21 10:22:46.477654 containerd[1457]: 2026-04-21 10:22:46.422 [INFO][3797] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" HandleID="k8s-pod-network.a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" Workload="172--234--196--117-k8s-whisker--cd97785b7--rw8lm-eth0" Apr 21 10:22:46.477654 containerd[1457]: 2026-04-21 10:22:46.428 [INFO][3797] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:46.477654 containerd[1457]: 2026-04-21 10:22:46.464 [INFO][3719] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" Apr 21 10:22:46.481621 containerd[1457]: time="2026-04-21T10:22:46.478222305Z" level=info msg="TearDown network for sandbox \"a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0\" successfully" Apr 21 10:22:46.481621 containerd[1457]: time="2026-04-21T10:22:46.478269075Z" level=info msg="StopPodSandbox for \"a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0\" returns successfully" Apr 21 10:22:46.493964 containerd[1457]: 2026-04-21 10:22:46.097 [INFO][3709] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" Apr 21 10:22:46.493964 containerd[1457]: 2026-04-21 10:22:46.097 [INFO][3709] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" iface="eth0" netns="/var/run/netns/cni-61dacfba-3d94-d8c0-3cb4-378083fb45b7" Apr 21 10:22:46.493964 containerd[1457]: 2026-04-21 10:22:46.097 [INFO][3709] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" iface="eth0" netns="/var/run/netns/cni-61dacfba-3d94-d8c0-3cb4-378083fb45b7" Apr 21 10:22:46.493964 containerd[1457]: 2026-04-21 10:22:46.102 [INFO][3709] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" iface="eth0" netns="/var/run/netns/cni-61dacfba-3d94-d8c0-3cb4-378083fb45b7" Apr 21 10:22:46.493964 containerd[1457]: 2026-04-21 10:22:46.102 [INFO][3709] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" Apr 21 10:22:46.493964 containerd[1457]: 2026-04-21 10:22:46.102 [INFO][3709] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" Apr 21 10:22:46.493964 containerd[1457]: 2026-04-21 10:22:46.378 [INFO][3789] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" HandleID="k8s-pod-network.1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" Workload="172--234--196--117-k8s-coredns--66bc5c9577--hmkdz-eth0" Apr 21 10:22:46.493964 containerd[1457]: 2026-04-21 10:22:46.379 [INFO][3789] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:46.493964 containerd[1457]: 2026-04-21 10:22:46.428 [INFO][3789] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:46.493964 containerd[1457]: 2026-04-21 10:22:46.460 [WARNING][3789] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" HandleID="k8s-pod-network.1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" Workload="172--234--196--117-k8s-coredns--66bc5c9577--hmkdz-eth0" Apr 21 10:22:46.493964 containerd[1457]: 2026-04-21 10:22:46.461 [INFO][3789] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" HandleID="k8s-pod-network.1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" Workload="172--234--196--117-k8s-coredns--66bc5c9577--hmkdz-eth0" Apr 21 10:22:46.493964 containerd[1457]: 2026-04-21 10:22:46.464 [INFO][3789] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:46.493964 containerd[1457]: 2026-04-21 10:22:46.487 [INFO][3709] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" Apr 21 10:22:46.497804 containerd[1457]: time="2026-04-21T10:22:46.497124246Z" level=info msg="TearDown network for sandbox \"1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a\" successfully" Apr 21 10:22:46.497804 containerd[1457]: time="2026-04-21T10:22:46.497154516Z" level=info msg="StopPodSandbox for \"1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a\" returns successfully" Apr 21 10:22:46.502488 kubelet[2564]: E0421 10:22:46.501671 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:46.504440 containerd[1457]: time="2026-04-21T10:22:46.504285233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hmkdz,Uid:bfea7c85-36f8-4c6c-8806-1d5305a3e058,Namespace:kube-system,Attempt:1,}" Apr 21 10:22:46.534248 containerd[1457]: 2026-04-21 10:22:46.243 [INFO][3753] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" Apr 21 10:22:46.534248 containerd[1457]: 2026-04-21 10:22:46.243 [INFO][3753] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" iface="eth0" netns="/var/run/netns/cni-9bfbd1d6-67bf-35c1-d36a-1f145af83a7a" Apr 21 10:22:46.534248 containerd[1457]: 2026-04-21 10:22:46.243 [INFO][3753] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" iface="eth0" netns="/var/run/netns/cni-9bfbd1d6-67bf-35c1-d36a-1f145af83a7a" Apr 21 10:22:46.534248 containerd[1457]: 2026-04-21 10:22:46.248 [INFO][3753] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" iface="eth0" netns="/var/run/netns/cni-9bfbd1d6-67bf-35c1-d36a-1f145af83a7a" Apr 21 10:22:46.534248 containerd[1457]: 2026-04-21 10:22:46.248 [INFO][3753] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" Apr 21 10:22:46.534248 containerd[1457]: 2026-04-21 10:22:46.248 [INFO][3753] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" Apr 21 10:22:46.534248 containerd[1457]: 2026-04-21 10:22:46.383 [INFO][3814] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" HandleID="k8s-pod-network.dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" Workload="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--njvhb-eth0" Apr 21 10:22:46.534248 containerd[1457]: 2026-04-21 10:22:46.384 [INFO][3814] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:22:46.534248 containerd[1457]: 2026-04-21 10:22:46.465 [INFO][3814] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:46.534248 containerd[1457]: 2026-04-21 10:22:46.503 [WARNING][3814] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" HandleID="k8s-pod-network.dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" Workload="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--njvhb-eth0" Apr 21 10:22:46.534248 containerd[1457]: 2026-04-21 10:22:46.503 [INFO][3814] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" HandleID="k8s-pod-network.dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" Workload="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--njvhb-eth0" Apr 21 10:22:46.534248 containerd[1457]: 2026-04-21 10:22:46.512 [INFO][3814] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:46.534248 containerd[1457]: 2026-04-21 10:22:46.527 [INFO][3753] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" Apr 21 10:22:46.535434 containerd[1457]: time="2026-04-21T10:22:46.535407813Z" level=info msg="TearDown network for sandbox \"dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be\" successfully" Apr 21 10:22:46.536036 containerd[1457]: time="2026-04-21T10:22:46.535482054Z" level=info msg="StopPodSandbox for \"dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be\" returns successfully" Apr 21 10:22:46.542709 containerd[1457]: time="2026-04-21T10:22:46.542573760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c885d8ccf-njvhb,Uid:103eb8aa-b6c6-480d-a370-786769ae65a2,Namespace:calico-system,Attempt:1,}" Apr 21 10:22:46.578397 containerd[1457]: 2026-04-21 10:22:46.250 [INFO][3746] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" Apr 21 10:22:46.578397 containerd[1457]: 2026-04-21 10:22:46.257 [INFO][3746] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" iface="eth0" netns="/var/run/netns/cni-87ee5c1c-fcd4-9164-273d-f063efaa4f40" Apr 21 10:22:46.578397 containerd[1457]: 2026-04-21 10:22:46.258 [INFO][3746] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" iface="eth0" netns="/var/run/netns/cni-87ee5c1c-fcd4-9164-273d-f063efaa4f40" Apr 21 10:22:46.578397 containerd[1457]: 2026-04-21 10:22:46.258 [INFO][3746] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" iface="eth0" netns="/var/run/netns/cni-87ee5c1c-fcd4-9164-273d-f063efaa4f40" Apr 21 10:22:46.578397 containerd[1457]: 2026-04-21 10:22:46.258 [INFO][3746] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" Apr 21 10:22:46.578397 containerd[1457]: 2026-04-21 10:22:46.258 [INFO][3746] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" Apr 21 10:22:46.578397 containerd[1457]: 2026-04-21 10:22:46.453 [INFO][3819] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" HandleID="k8s-pod-network.8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" Workload="172--234--196--117-k8s-coredns--66bc5c9577--gk6tg-eth0" Apr 21 10:22:46.578397 containerd[1457]: 2026-04-21 10:22:46.462 [INFO][3819] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:46.578397 containerd[1457]: 2026-04-21 10:22:46.519 [INFO][3819] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:46.578397 containerd[1457]: 2026-04-21 10:22:46.554 [WARNING][3819] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" HandleID="k8s-pod-network.8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" Workload="172--234--196--117-k8s-coredns--66bc5c9577--gk6tg-eth0" Apr 21 10:22:46.578397 containerd[1457]: 2026-04-21 10:22:46.554 [INFO][3819] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" HandleID="k8s-pod-network.8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" Workload="172--234--196--117-k8s-coredns--66bc5c9577--gk6tg-eth0" Apr 21 10:22:46.578397 containerd[1457]: 2026-04-21 10:22:46.557 [INFO][3819] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:46.578397 containerd[1457]: 2026-04-21 10:22:46.560 [INFO][3746] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" Apr 21 10:22:46.579341 kubelet[2564]: I0421 10:22:46.578972 2564 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ee06722d-90c6-4155-8731-f921f1677bd3-whisker-backend-key-pair\") pod \"ee06722d-90c6-4155-8731-f921f1677bd3\" (UID: \"ee06722d-90c6-4155-8731-f921f1677bd3\") " Apr 21 10:22:46.579341 kubelet[2564]: I0421 10:22:46.579021 2564 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjg8w\" (UniqueName: \"kubernetes.io/projected/ee06722d-90c6-4155-8731-f921f1677bd3-kube-api-access-qjg8w\") pod \"ee06722d-90c6-4155-8731-f921f1677bd3\" (UID: \"ee06722d-90c6-4155-8731-f921f1677bd3\") " Apr 21 10:22:46.579341 kubelet[2564]: I0421 10:22:46.579038 2564 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ee06722d-90c6-4155-8731-f921f1677bd3-nginx-config\") pod \"ee06722d-90c6-4155-8731-f921f1677bd3\" (UID: 
\"ee06722d-90c6-4155-8731-f921f1677bd3\") " Apr 21 10:22:46.579341 kubelet[2564]: I0421 10:22:46.579108 2564 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee06722d-90c6-4155-8731-f921f1677bd3-whisker-ca-bundle\") pod \"ee06722d-90c6-4155-8731-f921f1677bd3\" (UID: \"ee06722d-90c6-4155-8731-f921f1677bd3\") " Apr 21 10:22:46.579767 containerd[1457]: time="2026-04-21T10:22:46.579623653Z" level=info msg="TearDown network for sandbox \"8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc\" successfully" Apr 21 10:22:46.579767 containerd[1457]: time="2026-04-21T10:22:46.579662573Z" level=info msg="StopPodSandbox for \"8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc\" returns successfully" Apr 21 10:22:46.581587 kubelet[2564]: I0421 10:22:46.581151 2564 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee06722d-90c6-4155-8731-f921f1677bd3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ee06722d-90c6-4155-8731-f921f1677bd3" (UID: "ee06722d-90c6-4155-8731-f921f1677bd3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:22:46.581587 kubelet[2564]: I0421 10:22:46.581432 2564 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee06722d-90c6-4155-8731-f921f1677bd3-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "ee06722d-90c6-4155-8731-f921f1677bd3" (UID: "ee06722d-90c6-4155-8731-f921f1677bd3"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:22:46.584286 kubelet[2564]: E0421 10:22:46.584247 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:46.586852 kubelet[2564]: I0421 10:22:46.586809 2564 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee06722d-90c6-4155-8731-f921f1677bd3-kube-api-access-qjg8w" (OuterVolumeSpecName: "kube-api-access-qjg8w") pod "ee06722d-90c6-4155-8731-f921f1677bd3" (UID: "ee06722d-90c6-4155-8731-f921f1677bd3"). InnerVolumeSpecName "kube-api-access-qjg8w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 10:22:46.588826 kubelet[2564]: I0421 10:22:46.588785 2564 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee06722d-90c6-4155-8731-f921f1677bd3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ee06722d-90c6-4155-8731-f921f1677bd3" (UID: "ee06722d-90c6-4155-8731-f921f1677bd3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 21 10:22:46.589576 containerd[1457]: time="2026-04-21T10:22:46.589552904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gk6tg,Uid:d4453451-982f-4725-af74-0e6e82cae9ec,Namespace:kube-system,Attempt:1,}" Apr 21 10:22:46.594935 systemd[1]: run-netns-cni\x2d7b720fc7\x2d00ac\x2d6502\x2db654\x2d35b91440069c.mount: Deactivated successfully. Apr 21 10:22:46.597195 systemd[1]: run-netns-cni\x2d9bfbd1d6\x2d67bf\x2d35c1\x2dd36a\x2d1f145af83a7a.mount: Deactivated successfully. Apr 21 10:22:46.597275 systemd[1]: run-netns-cni\x2d61dacfba\x2d3d94\x2dd8c0\x2d3cb4\x2d378083fb45b7.mount: Deactivated successfully. 
Apr 21 10:22:46.597344 systemd[1]: run-netns-cni\x2d05b0a0e4\x2d02e0\x2dfeb7\x2d1b82\x2de82ec147eb6d.mount: Deactivated successfully. Apr 21 10:22:46.612721 systemd[1]: run-netns-cni\x2d87ee5c1c\x2dfcd4\x2d9164\x2d273d\x2df063efaa4f40.mount: Deactivated successfully. Apr 21 10:22:46.612864 systemd[1]: var-lib-kubelet-pods-ee06722d\x2d90c6\x2d4155\x2d8731\x2df921f1677bd3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqjg8w.mount: Deactivated successfully. Apr 21 10:22:46.612975 systemd[1]: var-lib-kubelet-pods-ee06722d\x2d90c6\x2d4155\x2d8731\x2df921f1677bd3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 21 10:22:46.658306 containerd[1457]: 2026-04-21 10:22:46.289 [INFO][3763] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" Apr 21 10:22:46.658306 containerd[1457]: 2026-04-21 10:22:46.289 [INFO][3763] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" iface="eth0" netns="/var/run/netns/cni-802ea001-821b-ada9-6f88-139b2720141b" Apr 21 10:22:46.658306 containerd[1457]: 2026-04-21 10:22:46.292 [INFO][3763] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" iface="eth0" netns="/var/run/netns/cni-802ea001-821b-ada9-6f88-139b2720141b" Apr 21 10:22:46.658306 containerd[1457]: 2026-04-21 10:22:46.294 [INFO][3763] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" iface="eth0" netns="/var/run/netns/cni-802ea001-821b-ada9-6f88-139b2720141b" Apr 21 10:22:46.658306 containerd[1457]: 2026-04-21 10:22:46.294 [INFO][3763] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" Apr 21 10:22:46.658306 containerd[1457]: 2026-04-21 10:22:46.294 [INFO][3763] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" Apr 21 10:22:46.658306 containerd[1457]: 2026-04-21 10:22:46.505 [INFO][3830] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" HandleID="k8s-pod-network.d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" Workload="172--234--196--117-k8s-goldmane--cccfbd5cf--sp5tn-eth0" Apr 21 10:22:46.658306 containerd[1457]: 2026-04-21 10:22:46.505 [INFO][3830] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:46.658306 containerd[1457]: 2026-04-21 10:22:46.559 [INFO][3830] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:46.658306 containerd[1457]: 2026-04-21 10:22:46.612 [WARNING][3830] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" HandleID="k8s-pod-network.d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" Workload="172--234--196--117-k8s-goldmane--cccfbd5cf--sp5tn-eth0" Apr 21 10:22:46.658306 containerd[1457]: 2026-04-21 10:22:46.612 [INFO][3830] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" HandleID="k8s-pod-network.d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" Workload="172--234--196--117-k8s-goldmane--cccfbd5cf--sp5tn-eth0" Apr 21 10:22:46.658306 containerd[1457]: 2026-04-21 10:22:46.620 [INFO][3830] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:46.658306 containerd[1457]: 2026-04-21 10:22:46.636 [INFO][3763] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" Apr 21 10:22:46.660234 containerd[1457]: time="2026-04-21T10:22:46.659298956Z" level=info msg="TearDown network for sandbox \"d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da\" successfully" Apr 21 10:22:46.660234 containerd[1457]: time="2026-04-21T10:22:46.659352356Z" level=info msg="StopPodSandbox for \"d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da\" returns successfully" Apr 21 10:22:46.661952 containerd[1457]: time="2026-04-21T10:22:46.661606924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-sp5tn,Uid:11bf43e9-9c35-41d9-833a-203a96bf4b43,Namespace:calico-system,Attempt:1,}" Apr 21 10:22:46.667029 containerd[1457]: time="2026-04-21T10:22:46.667000770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-84zl9,Uid:0efc2a75-1741-47b4-a6a6-a697ca685699,Namespace:calico-system,Attempt:0,} returns sandbox id \"80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353\"" Apr 21 10:22:46.670853 systemd[1]: 
run-netns-cni\x2d802ea001\x2d821b\x2dada9\x2d6f88\x2d139b2720141b.mount: Deactivated successfully. Apr 21 10:22:46.673640 containerd[1457]: time="2026-04-21T10:22:46.673382507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 21 10:22:46.680549 kubelet[2564]: I0421 10:22:46.680515 2564 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qjg8w\" (UniqueName: \"kubernetes.io/projected/ee06722d-90c6-4155-8731-f921f1677bd3-kube-api-access-qjg8w\") on node \"172-234-196-117\" DevicePath \"\"" Apr 21 10:22:46.680549 kubelet[2564]: I0421 10:22:46.680543 2564 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ee06722d-90c6-4155-8731-f921f1677bd3-nginx-config\") on node \"172-234-196-117\" DevicePath \"\"" Apr 21 10:22:46.681342 kubelet[2564]: I0421 10:22:46.680555 2564 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee06722d-90c6-4155-8731-f921f1677bd3-whisker-ca-bundle\") on node \"172-234-196-117\" DevicePath \"\"" Apr 21 10:22:46.681342 kubelet[2564]: I0421 10:22:46.680564 2564 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ee06722d-90c6-4155-8731-f921f1677bd3-whisker-backend-key-pair\") on node \"172-234-196-117\" DevicePath \"\"" Apr 21 10:22:46.880395 kubelet[2564]: I0421 10:22:46.880341 2564 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:22:46.904863 systemd[1]: Removed slice kubepods-besteffort-podee06722d_90c6_4155_8731_f921f1677bd3.slice - libcontainer container kubepods-besteffort-podee06722d_90c6_4155_8731_f921f1677bd3.slice. 
Apr 21 10:22:47.025537 systemd-networkd[1382]: cali18244db7ee1: Link UP Apr 21 10:22:47.026871 systemd-networkd[1382]: cali18244db7ee1: Gained carrier Apr 21 10:22:47.097555 systemd[1]: Created slice kubepods-besteffort-pod67e67654_5dde_42ee_bb8f_153e15e8d632.slice - libcontainer container kubepods-besteffort-pod67e67654_5dde_42ee_bb8f_153e15e8d632.slice. Apr 21 10:22:47.124862 containerd[1457]: 2026-04-21 10:22:46.516 [ERROR][3873] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:22:47.124862 containerd[1457]: 2026-04-21 10:22:46.554 [INFO][3873] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--196--117-k8s-calico--kube--controllers--5c59f49bff--8gnf6-eth0 calico-kube-controllers-5c59f49bff- calico-system 2a413923-7171-4cd9-86b6-66566674315f 932 0 2026-04-21 10:22:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5c59f49bff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-234-196-117 calico-kube-controllers-5c59f49bff-8gnf6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali18244db7ee1 [] [] }} ContainerID="939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8" Namespace="calico-system" Pod="calico-kube-controllers-5c59f49bff-8gnf6" WorkloadEndpoint="172--234--196--117-k8s-calico--kube--controllers--5c59f49bff--8gnf6-" Apr 21 10:22:47.124862 containerd[1457]: 2026-04-21 10:22:46.557 [INFO][3873] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8" Namespace="calico-system" Pod="calico-kube-controllers-5c59f49bff-8gnf6" 
WorkloadEndpoint="172--234--196--117-k8s-calico--kube--controllers--5c59f49bff--8gnf6-eth0" Apr 21 10:22:47.124862 containerd[1457]: 2026-04-21 10:22:46.759 [INFO][3952] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8" HandleID="k8s-pod-network.939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8" Workload="172--234--196--117-k8s-calico--kube--controllers--5c59f49bff--8gnf6-eth0" Apr 21 10:22:47.124862 containerd[1457]: 2026-04-21 10:22:46.781 [INFO][3952] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8" HandleID="k8s-pod-network.939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8" Workload="172--234--196--117-k8s-calico--kube--controllers--5c59f49bff--8gnf6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-196-117", "pod":"calico-kube-controllers-5c59f49bff-8gnf6", "timestamp":"2026-04-21 10:22:46.759872853 +0000 UTC"}, Hostname:"172-234-196-117", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001886e0)} Apr 21 10:22:47.124862 containerd[1457]: 2026-04-21 10:22:46.781 [INFO][3952] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:47.124862 containerd[1457]: 2026-04-21 10:22:46.781 [INFO][3952] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:22:47.124862 containerd[1457]: 2026-04-21 10:22:46.781 [INFO][3952] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-196-117' Apr 21 10:22:47.124862 containerd[1457]: 2026-04-21 10:22:46.832 [INFO][3952] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8" host="172-234-196-117" Apr 21 10:22:47.124862 containerd[1457]: 2026-04-21 10:22:46.846 [INFO][3952] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-196-117" Apr 21 10:22:47.124862 containerd[1457]: 2026-04-21 10:22:46.868 [INFO][3952] ipam/ipam.go 526: Trying affinity for 192.168.71.128/26 host="172-234-196-117" Apr 21 10:22:47.124862 containerd[1457]: 2026-04-21 10:22:46.876 [INFO][3952] ipam/ipam.go 160: Attempting to load block cidr=192.168.71.128/26 host="172-234-196-117" Apr 21 10:22:47.124862 containerd[1457]: 2026-04-21 10:22:46.893 [INFO][3952] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="172-234-196-117" Apr 21 10:22:47.124862 containerd[1457]: 2026-04-21 10:22:46.899 [INFO][3952] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8" host="172-234-196-117" Apr 21 10:22:47.124862 containerd[1457]: 2026-04-21 10:22:46.926 [INFO][3952] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8 Apr 21 10:22:47.124862 containerd[1457]: 2026-04-21 10:22:46.936 [INFO][3952] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8" host="172-234-196-117" Apr 21 10:22:47.124862 containerd[1457]: 2026-04-21 10:22:46.977 [INFO][3952] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.71.130/26] block=192.168.71.128/26 
handle="k8s-pod-network.939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8" host="172-234-196-117" Apr 21 10:22:47.124862 containerd[1457]: 2026-04-21 10:22:46.977 [INFO][3952] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.71.130/26] handle="k8s-pod-network.939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8" host="172-234-196-117" Apr 21 10:22:47.124862 containerd[1457]: 2026-04-21 10:22:46.977 [INFO][3952] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:47.124862 containerd[1457]: 2026-04-21 10:22:46.977 [INFO][3952] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.71.130/26] IPv6=[] ContainerID="939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8" HandleID="k8s-pod-network.939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8" Workload="172--234--196--117-k8s-calico--kube--controllers--5c59f49bff--8gnf6-eth0" Apr 21 10:22:47.129493 containerd[1457]: 2026-04-21 10:22:47.018 [INFO][3873] cni-plugin/k8s.go 418: Populated endpoint ContainerID="939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8" Namespace="calico-system" Pod="calico-kube-controllers-5c59f49bff-8gnf6" WorkloadEndpoint="172--234--196--117-k8s-calico--kube--controllers--5c59f49bff--8gnf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-calico--kube--controllers--5c59f49bff--8gnf6-eth0", GenerateName:"calico-kube-controllers-5c59f49bff-", Namespace:"calico-system", SelfLink:"", UID:"2a413923-7171-4cd9-86b6-66566674315f", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c59f49bff", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"", Pod:"calico-kube-controllers-5c59f49bff-8gnf6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.71.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali18244db7ee1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:47.129493 containerd[1457]: 2026-04-21 10:22:47.019 [INFO][3873] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.130/32] ContainerID="939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8" Namespace="calico-system" Pod="calico-kube-controllers-5c59f49bff-8gnf6" WorkloadEndpoint="172--234--196--117-k8s-calico--kube--controllers--5c59f49bff--8gnf6-eth0" Apr 21 10:22:47.129493 containerd[1457]: 2026-04-21 10:22:47.019 [INFO][3873] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18244db7ee1 ContainerID="939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8" Namespace="calico-system" Pod="calico-kube-controllers-5c59f49bff-8gnf6" WorkloadEndpoint="172--234--196--117-k8s-calico--kube--controllers--5c59f49bff--8gnf6-eth0" Apr 21 10:22:47.129493 containerd[1457]: 2026-04-21 10:22:47.028 [INFO][3873] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8" Namespace="calico-system" Pod="calico-kube-controllers-5c59f49bff-8gnf6" 
WorkloadEndpoint="172--234--196--117-k8s-calico--kube--controllers--5c59f49bff--8gnf6-eth0" Apr 21 10:22:47.129493 containerd[1457]: 2026-04-21 10:22:47.038 [INFO][3873] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8" Namespace="calico-system" Pod="calico-kube-controllers-5c59f49bff-8gnf6" WorkloadEndpoint="172--234--196--117-k8s-calico--kube--controllers--5c59f49bff--8gnf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-calico--kube--controllers--5c59f49bff--8gnf6-eth0", GenerateName:"calico-kube-controllers-5c59f49bff-", Namespace:"calico-system", SelfLink:"", UID:"2a413923-7171-4cd9-86b6-66566674315f", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c59f49bff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8", Pod:"calico-kube-controllers-5c59f49bff-8gnf6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.71.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali18244db7ee1", MAC:"96:69:e2:0b:d9:1f", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:47.129493 containerd[1457]: 2026-04-21 10:22:47.078 [INFO][3873] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8" Namespace="calico-system" Pod="calico-kube-controllers-5c59f49bff-8gnf6" WorkloadEndpoint="172--234--196--117-k8s-calico--kube--controllers--5c59f49bff--8gnf6-eth0" Apr 21 10:22:47.186730 systemd-networkd[1382]: cali1f7288f1c56: Link UP Apr 21 10:22:47.190715 kubelet[2564]: I0421 10:22:47.188722 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75lsw\" (UniqueName: \"kubernetes.io/projected/67e67654-5dde-42ee-bb8f-153e15e8d632-kube-api-access-75lsw\") pod \"whisker-78c59b47f6-2xpc9\" (UID: \"67e67654-5dde-42ee-bb8f-153e15e8d632\") " pod="calico-system/whisker-78c59b47f6-2xpc9" Apr 21 10:22:47.190715 kubelet[2564]: I0421 10:22:47.188763 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/67e67654-5dde-42ee-bb8f-153e15e8d632-whisker-backend-key-pair\") pod \"whisker-78c59b47f6-2xpc9\" (UID: \"67e67654-5dde-42ee-bb8f-153e15e8d632\") " pod="calico-system/whisker-78c59b47f6-2xpc9" Apr 21 10:22:47.190715 kubelet[2564]: I0421 10:22:47.188785 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/67e67654-5dde-42ee-bb8f-153e15e8d632-nginx-config\") pod \"whisker-78c59b47f6-2xpc9\" (UID: \"67e67654-5dde-42ee-bb8f-153e15e8d632\") " pod="calico-system/whisker-78c59b47f6-2xpc9" Apr 21 10:22:47.190715 kubelet[2564]: I0421 10:22:47.188806 2564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/67e67654-5dde-42ee-bb8f-153e15e8d632-whisker-ca-bundle\") pod \"whisker-78c59b47f6-2xpc9\" (UID: \"67e67654-5dde-42ee-bb8f-153e15e8d632\") " pod="calico-system/whisker-78c59b47f6-2xpc9" Apr 21 10:22:47.192855 systemd-networkd[1382]: cali1f7288f1c56: Gained carrier Apr 21 10:22:47.214940 containerd[1457]: 2026-04-21 10:22:46.630 [ERROR][3920] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:22:47.214940 containerd[1457]: 2026-04-21 10:22:46.694 [INFO][3920] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--196--117-k8s-calico--apiserver--7c885d8ccf--6qxjm-eth0 calico-apiserver-7c885d8ccf- calico-system e20b56d9-ea94-4623-85b6-5cdc1ee6c0b0 935 0 2026-04-21 10:22:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c885d8ccf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-234-196-117 calico-apiserver-7c885d8ccf-6qxjm eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali1f7288f1c56 [] [] }} ContainerID="524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e" Namespace="calico-system" Pod="calico-apiserver-7c885d8ccf-6qxjm" WorkloadEndpoint="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--6qxjm-" Apr 21 10:22:47.214940 containerd[1457]: 2026-04-21 10:22:46.694 [INFO][3920] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e" Namespace="calico-system" Pod="calico-apiserver-7c885d8ccf-6qxjm" WorkloadEndpoint="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--6qxjm-eth0" Apr 21 10:22:47.214940 containerd[1457]: 2026-04-21 10:22:46.838 
[INFO][4019] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e" HandleID="k8s-pod-network.524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e" Workload="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--6qxjm-eth0" Apr 21 10:22:47.214940 containerd[1457]: 2026-04-21 10:22:46.869 [INFO][4019] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e" HandleID="k8s-pod-network.524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e" Workload="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--6qxjm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000353c80), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-196-117", "pod":"calico-apiserver-7c885d8ccf-6qxjm", "timestamp":"2026-04-21 10:22:46.838410882 +0000 UTC"}, Hostname:"172-234-196-117", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004dd4a0)} Apr 21 10:22:47.214940 containerd[1457]: 2026-04-21 10:22:46.869 [INFO][4019] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:47.214940 containerd[1457]: 2026-04-21 10:22:46.981 [INFO][4019] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:22:47.214940 containerd[1457]: 2026-04-21 10:22:46.981 [INFO][4019] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-196-117' Apr 21 10:22:47.214940 containerd[1457]: 2026-04-21 10:22:46.999 [INFO][4019] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e" host="172-234-196-117" Apr 21 10:22:47.214940 containerd[1457]: 2026-04-21 10:22:47.085 [INFO][4019] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-196-117" Apr 21 10:22:47.214940 containerd[1457]: 2026-04-21 10:22:47.130 [INFO][4019] ipam/ipam.go 526: Trying affinity for 192.168.71.128/26 host="172-234-196-117" Apr 21 10:22:47.214940 containerd[1457]: 2026-04-21 10:22:47.135 [INFO][4019] ipam/ipam.go 160: Attempting to load block cidr=192.168.71.128/26 host="172-234-196-117" Apr 21 10:22:47.214940 containerd[1457]: 2026-04-21 10:22:47.137 [INFO][4019] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="172-234-196-117" Apr 21 10:22:47.214940 containerd[1457]: 2026-04-21 10:22:47.137 [INFO][4019] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e" host="172-234-196-117" Apr 21 10:22:47.214940 containerd[1457]: 2026-04-21 10:22:47.140 [INFO][4019] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e Apr 21 10:22:47.214940 containerd[1457]: 2026-04-21 10:22:47.147 [INFO][4019] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e" host="172-234-196-117" Apr 21 10:22:47.214940 containerd[1457]: 2026-04-21 10:22:47.158 [INFO][4019] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.71.131/26] block=192.168.71.128/26 
handle="k8s-pod-network.524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e" host="172-234-196-117" Apr 21 10:22:47.214940 containerd[1457]: 2026-04-21 10:22:47.159 [INFO][4019] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.71.131/26] handle="k8s-pod-network.524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e" host="172-234-196-117" Apr 21 10:22:47.214940 containerd[1457]: 2026-04-21 10:22:47.159 [INFO][4019] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:47.214940 containerd[1457]: 2026-04-21 10:22:47.159 [INFO][4019] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.71.131/26] IPv6=[] ContainerID="524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e" HandleID="k8s-pod-network.524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e" Workload="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--6qxjm-eth0" Apr 21 10:22:47.215503 containerd[1457]: 2026-04-21 10:22:47.166 [INFO][3920] cni-plugin/k8s.go 418: Populated endpoint ContainerID="524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e" Namespace="calico-system" Pod="calico-apiserver-7c885d8ccf-6qxjm" WorkloadEndpoint="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--6qxjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-calico--apiserver--7c885d8ccf--6qxjm-eth0", GenerateName:"calico-apiserver-7c885d8ccf-", Namespace:"calico-system", SelfLink:"", UID:"e20b56d9-ea94-4623-85b6-5cdc1ee6c0b0", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c885d8ccf", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"", Pod:"calico-apiserver-7c885d8ccf-6qxjm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1f7288f1c56", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:47.215503 containerd[1457]: 2026-04-21 10:22:47.166 [INFO][3920] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.131/32] ContainerID="524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e" Namespace="calico-system" Pod="calico-apiserver-7c885d8ccf-6qxjm" WorkloadEndpoint="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--6qxjm-eth0" Apr 21 10:22:47.215503 containerd[1457]: 2026-04-21 10:22:47.166 [INFO][3920] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f7288f1c56 ContainerID="524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e" Namespace="calico-system" Pod="calico-apiserver-7c885d8ccf-6qxjm" WorkloadEndpoint="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--6qxjm-eth0" Apr 21 10:22:47.215503 containerd[1457]: 2026-04-21 10:22:47.195 [INFO][3920] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e" Namespace="calico-system" Pod="calico-apiserver-7c885d8ccf-6qxjm" WorkloadEndpoint="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--6qxjm-eth0" Apr 21 10:22:47.215503 containerd[1457]: 2026-04-21 10:22:47.197 [INFO][3920] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e" Namespace="calico-system" Pod="calico-apiserver-7c885d8ccf-6qxjm" WorkloadEndpoint="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--6qxjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-calico--apiserver--7c885d8ccf--6qxjm-eth0", GenerateName:"calico-apiserver-7c885d8ccf-", Namespace:"calico-system", SelfLink:"", UID:"e20b56d9-ea94-4623-85b6-5cdc1ee6c0b0", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c885d8ccf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e", Pod:"calico-apiserver-7c885d8ccf-6qxjm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1f7288f1c56", MAC:"32:6e:0c:30:f8:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:47.215503 containerd[1457]: 2026-04-21 10:22:47.206 [INFO][3920] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e" Namespace="calico-system" Pod="calico-apiserver-7c885d8ccf-6qxjm" WorkloadEndpoint="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--6qxjm-eth0" Apr 21 10:22:47.236255 containerd[1457]: time="2026-04-21T10:22:47.235341890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:47.236255 containerd[1457]: time="2026-04-21T10:22:47.235393490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:47.236255 containerd[1457]: time="2026-04-21T10:22:47.235421271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:47.236255 containerd[1457]: time="2026-04-21T10:22:47.235536712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:47.263635 containerd[1457]: time="2026-04-21T10:22:47.263424621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:47.263635 containerd[1457]: time="2026-04-21T10:22:47.263480112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:47.263635 containerd[1457]: time="2026-04-21T10:22:47.263493512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:47.263635 containerd[1457]: time="2026-04-21T10:22:47.263572363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:47.273406 systemd-networkd[1382]: cali1ed7f3576a6: Link UP Apr 21 10:22:47.273647 systemd-networkd[1382]: cali1ed7f3576a6: Gained carrier Apr 21 10:22:47.305139 containerd[1457]: 2026-04-21 10:22:46.830 [ERROR][4018] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:22:47.305139 containerd[1457]: 2026-04-21 10:22:46.888 [INFO][4018] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--196--117-k8s-goldmane--cccfbd5cf--sp5tn-eth0 goldmane-cccfbd5cf- calico-system 11bf43e9-9c35-41d9-833a-203a96bf4b43 940 0 2026-04-21 10:22:32 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-234-196-117 goldmane-cccfbd5cf-sp5tn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1ed7f3576a6 [] [] }} ContainerID="033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677" Namespace="calico-system" Pod="goldmane-cccfbd5cf-sp5tn" WorkloadEndpoint="172--234--196--117-k8s-goldmane--cccfbd5cf--sp5tn-" Apr 21 10:22:47.305139 containerd[1457]: 2026-04-21 10:22:46.888 [INFO][4018] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677" Namespace="calico-system" Pod="goldmane-cccfbd5cf-sp5tn" WorkloadEndpoint="172--234--196--117-k8s-goldmane--cccfbd5cf--sp5tn-eth0" Apr 21 10:22:47.305139 containerd[1457]: 2026-04-21 10:22:47.143 [INFO][4068] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677" 
HandleID="k8s-pod-network.033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677" Workload="172--234--196--117-k8s-goldmane--cccfbd5cf--sp5tn-eth0" Apr 21 10:22:47.305139 containerd[1457]: 2026-04-21 10:22:47.171 [INFO][4068] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677" HandleID="k8s-pod-network.033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677" Workload="172--234--196--117-k8s-goldmane--cccfbd5cf--sp5tn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a12c0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-196-117", "pod":"goldmane-cccfbd5cf-sp5tn", "timestamp":"2026-04-21 10:22:47.143727041 +0000 UTC"}, Hostname:"172-234-196-117", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000544420)} Apr 21 10:22:47.305139 containerd[1457]: 2026-04-21 10:22:47.172 [INFO][4068] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:47.305139 containerd[1457]: 2026-04-21 10:22:47.172 [INFO][4068] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:22:47.305139 containerd[1457]: 2026-04-21 10:22:47.172 [INFO][4068] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-196-117' Apr 21 10:22:47.305139 containerd[1457]: 2026-04-21 10:22:47.186 [INFO][4068] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677" host="172-234-196-117" Apr 21 10:22:47.305139 containerd[1457]: 2026-04-21 10:22:47.195 [INFO][4068] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-196-117" Apr 21 10:22:47.305139 containerd[1457]: 2026-04-21 10:22:47.219 [INFO][4068] ipam/ipam.go 526: Trying affinity for 192.168.71.128/26 host="172-234-196-117" Apr 21 10:22:47.305139 containerd[1457]: 2026-04-21 10:22:47.222 [INFO][4068] ipam/ipam.go 160: Attempting to load block cidr=192.168.71.128/26 host="172-234-196-117" Apr 21 10:22:47.305139 containerd[1457]: 2026-04-21 10:22:47.226 [INFO][4068] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="172-234-196-117" Apr 21 10:22:47.305139 containerd[1457]: 2026-04-21 10:22:47.226 [INFO][4068] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677" host="172-234-196-117" Apr 21 10:22:47.305139 containerd[1457]: 2026-04-21 10:22:47.231 [INFO][4068] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677 Apr 21 10:22:47.305139 containerd[1457]: 2026-04-21 10:22:47.239 [INFO][4068] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677" host="172-234-196-117" Apr 21 10:22:47.305139 containerd[1457]: 2026-04-21 10:22:47.247 [INFO][4068] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.71.132/26] block=192.168.71.128/26 
handle="k8s-pod-network.033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677" host="172-234-196-117" Apr 21 10:22:47.305139 containerd[1457]: 2026-04-21 10:22:47.247 [INFO][4068] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.71.132/26] handle="k8s-pod-network.033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677" host="172-234-196-117" Apr 21 10:22:47.305139 containerd[1457]: 2026-04-21 10:22:47.247 [INFO][4068] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:47.305139 containerd[1457]: 2026-04-21 10:22:47.247 [INFO][4068] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.71.132/26] IPv6=[] ContainerID="033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677" HandleID="k8s-pod-network.033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677" Workload="172--234--196--117-k8s-goldmane--cccfbd5cf--sp5tn-eth0" Apr 21 10:22:47.305698 containerd[1457]: 2026-04-21 10:22:47.266 [INFO][4018] cni-plugin/k8s.go 418: Populated endpoint ContainerID="033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677" Namespace="calico-system" Pod="goldmane-cccfbd5cf-sp5tn" WorkloadEndpoint="172--234--196--117-k8s-goldmane--cccfbd5cf--sp5tn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-goldmane--cccfbd5cf--sp5tn-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"11bf43e9-9c35-41d9-833a-203a96bf4b43", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"", Pod:"goldmane-cccfbd5cf-sp5tn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.71.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1ed7f3576a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:47.305698 containerd[1457]: 2026-04-21 10:22:47.266 [INFO][4018] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.132/32] ContainerID="033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677" Namespace="calico-system" Pod="goldmane-cccfbd5cf-sp5tn" WorkloadEndpoint="172--234--196--117-k8s-goldmane--cccfbd5cf--sp5tn-eth0" Apr 21 10:22:47.305698 containerd[1457]: 2026-04-21 10:22:47.266 [INFO][4018] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ed7f3576a6 ContainerID="033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677" Namespace="calico-system" Pod="goldmane-cccfbd5cf-sp5tn" WorkloadEndpoint="172--234--196--117-k8s-goldmane--cccfbd5cf--sp5tn-eth0" Apr 21 10:22:47.305698 containerd[1457]: 2026-04-21 10:22:47.273 [INFO][4018] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677" Namespace="calico-system" Pod="goldmane-cccfbd5cf-sp5tn" WorkloadEndpoint="172--234--196--117-k8s-goldmane--cccfbd5cf--sp5tn-eth0" Apr 21 10:22:47.305698 containerd[1457]: 2026-04-21 10:22:47.278 [INFO][4018] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677" 
Namespace="calico-system" Pod="goldmane-cccfbd5cf-sp5tn" WorkloadEndpoint="172--234--196--117-k8s-goldmane--cccfbd5cf--sp5tn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-goldmane--cccfbd5cf--sp5tn-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"11bf43e9-9c35-41d9-833a-203a96bf4b43", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677", Pod:"goldmane-cccfbd5cf-sp5tn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.71.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1ed7f3576a6", MAC:"52:39:98:97:a6:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:47.305698 containerd[1457]: 2026-04-21 10:22:47.295 [INFO][4018] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677" Namespace="calico-system" Pod="goldmane-cccfbd5cf-sp5tn" WorkloadEndpoint="172--234--196--117-k8s-goldmane--cccfbd5cf--sp5tn-eth0" Apr 21 10:22:47.334388 systemd[1]: 
Started cri-containerd-524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e.scope - libcontainer container 524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e. Apr 21 10:22:47.371996 containerd[1457]: time="2026-04-21T10:22:47.371926343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:47.372316 containerd[1457]: time="2026-04-21T10:22:47.372158656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:47.372316 containerd[1457]: time="2026-04-21T10:22:47.372243337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:47.372688 containerd[1457]: time="2026-04-21T10:22:47.372641791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:47.393188 systemd[1]: Started cri-containerd-939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8.scope - libcontainer container 939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8. Apr 21 10:22:47.407545 containerd[1457]: time="2026-04-21T10:22:47.407219767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78c59b47f6-2xpc9,Uid:67e67654-5dde-42ee-bb8f-153e15e8d632,Namespace:calico-system,Attempt:0,}" Apr 21 10:22:47.408816 systemd[1]: Started cri-containerd-033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677.scope - libcontainer container 033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677. 
Apr 21 10:22:47.479146 systemd-networkd[1382]: caliaee47bb1c9a: Link UP Apr 21 10:22:47.486663 systemd-networkd[1382]: caliaee47bb1c9a: Gained carrier Apr 21 10:22:47.516349 containerd[1457]: 2026-04-21 10:22:46.839 [ERROR][3943] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:22:47.516349 containerd[1457]: 2026-04-21 10:22:46.980 [INFO][3943] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--196--117-k8s-coredns--66bc5c9577--hmkdz-eth0 coredns-66bc5c9577- kube-system bfea7c85-36f8-4c6c-8806-1d5305a3e058 933 0 2026-04-21 10:22:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-234-196-117 coredns-66bc5c9577-hmkdz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaee47bb1c9a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252" Namespace="kube-system" Pod="coredns-66bc5c9577-hmkdz" WorkloadEndpoint="172--234--196--117-k8s-coredns--66bc5c9577--hmkdz-" Apr 21 10:22:47.516349 containerd[1457]: 2026-04-21 10:22:46.980 [INFO][3943] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252" Namespace="kube-system" Pod="coredns-66bc5c9577-hmkdz" WorkloadEndpoint="172--234--196--117-k8s-coredns--66bc5c9577--hmkdz-eth0" Apr 21 10:22:47.516349 containerd[1457]: 2026-04-21 10:22:47.156 [INFO][4079] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252" 
HandleID="k8s-pod-network.9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252" Workload="172--234--196--117-k8s-coredns--66bc5c9577--hmkdz-eth0" Apr 21 10:22:47.516349 containerd[1457]: 2026-04-21 10:22:47.182 [INFO][4079] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252" HandleID="k8s-pod-network.9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252" Workload="172--234--196--117-k8s-coredns--66bc5c9577--hmkdz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277430), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-196-117", "pod":"coredns-66bc5c9577-hmkdz", "timestamp":"2026-04-21 10:22:47.156775501 +0000 UTC"}, Hostname:"172-234-196-117", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000115340)} Apr 21 10:22:47.516349 containerd[1457]: 2026-04-21 10:22:47.182 [INFO][4079] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:47.516349 containerd[1457]: 2026-04-21 10:22:47.248 [INFO][4079] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:22:47.516349 containerd[1457]: 2026-04-21 10:22:47.249 [INFO][4079] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-196-117' Apr 21 10:22:47.516349 containerd[1457]: 2026-04-21 10:22:47.303 [INFO][4079] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252" host="172-234-196-117" Apr 21 10:22:47.516349 containerd[1457]: 2026-04-21 10:22:47.314 [INFO][4079] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-196-117" Apr 21 10:22:47.516349 containerd[1457]: 2026-04-21 10:22:47.353 [INFO][4079] ipam/ipam.go 526: Trying affinity for 192.168.71.128/26 host="172-234-196-117" Apr 21 10:22:47.516349 containerd[1457]: 2026-04-21 10:22:47.360 [INFO][4079] ipam/ipam.go 160: Attempting to load block cidr=192.168.71.128/26 host="172-234-196-117" Apr 21 10:22:47.516349 containerd[1457]: 2026-04-21 10:22:47.369 [INFO][4079] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="172-234-196-117" Apr 21 10:22:47.516349 containerd[1457]: 2026-04-21 10:22:47.369 [INFO][4079] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252" host="172-234-196-117" Apr 21 10:22:47.516349 containerd[1457]: 2026-04-21 10:22:47.373 [INFO][4079] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252 Apr 21 10:22:47.516349 containerd[1457]: 2026-04-21 10:22:47.413 [INFO][4079] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252" host="172-234-196-117" Apr 21 10:22:47.516349 containerd[1457]: 2026-04-21 10:22:47.432 [INFO][4079] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.71.133/26] block=192.168.71.128/26 
handle="k8s-pod-network.9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252" host="172-234-196-117" Apr 21 10:22:47.516349 containerd[1457]: 2026-04-21 10:22:47.433 [INFO][4079] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.71.133/26] handle="k8s-pod-network.9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252" host="172-234-196-117" Apr 21 10:22:47.516349 containerd[1457]: 2026-04-21 10:22:47.434 [INFO][4079] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:47.516349 containerd[1457]: 2026-04-21 10:22:47.435 [INFO][4079] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.71.133/26] IPv6=[] ContainerID="9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252" HandleID="k8s-pod-network.9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252" Workload="172--234--196--117-k8s-coredns--66bc5c9577--hmkdz-eth0" Apr 21 10:22:47.518652 containerd[1457]: 2026-04-21 10:22:47.466 [INFO][3943] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252" Namespace="kube-system" Pod="coredns-66bc5c9577-hmkdz" WorkloadEndpoint="172--234--196--117-k8s-coredns--66bc5c9577--hmkdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-coredns--66bc5c9577--hmkdz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"bfea7c85-36f8-4c6c-8806-1d5305a3e058", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"", Pod:"coredns-66bc5c9577-hmkdz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaee47bb1c9a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:47.518652 containerd[1457]: 2026-04-21 10:22:47.466 [INFO][3943] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.133/32] ContainerID="9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252" Namespace="kube-system" Pod="coredns-66bc5c9577-hmkdz" WorkloadEndpoint="172--234--196--117-k8s-coredns--66bc5c9577--hmkdz-eth0" Apr 21 10:22:47.518652 containerd[1457]: 2026-04-21 10:22:47.466 [INFO][3943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaee47bb1c9a ContainerID="9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252" Namespace="kube-system" Pod="coredns-66bc5c9577-hmkdz" 
WorkloadEndpoint="172--234--196--117-k8s-coredns--66bc5c9577--hmkdz-eth0" Apr 21 10:22:47.518652 containerd[1457]: 2026-04-21 10:22:47.489 [INFO][3943] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252" Namespace="kube-system" Pod="coredns-66bc5c9577-hmkdz" WorkloadEndpoint="172--234--196--117-k8s-coredns--66bc5c9577--hmkdz-eth0" Apr 21 10:22:47.518652 containerd[1457]: 2026-04-21 10:22:47.491 [INFO][3943] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252" Namespace="kube-system" Pod="coredns-66bc5c9577-hmkdz" WorkloadEndpoint="172--234--196--117-k8s-coredns--66bc5c9577--hmkdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-coredns--66bc5c9577--hmkdz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"bfea7c85-36f8-4c6c-8806-1d5305a3e058", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252", Pod:"coredns-66bc5c9577-hmkdz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaee47bb1c9a", MAC:"2e:d1:cc:8e:e2:a6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:47.518652 containerd[1457]: 2026-04-21 10:22:47.508 [INFO][3943] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252" Namespace="kube-system" Pod="coredns-66bc5c9577-hmkdz" WorkloadEndpoint="172--234--196--117-k8s-coredns--66bc5c9577--hmkdz-eth0" Apr 21 10:22:47.550255 containerd[1457]: time="2026-04-21T10:22:47.549810099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:47.550255 containerd[1457]: time="2026-04-21T10:22:47.549876610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:47.550255 containerd[1457]: time="2026-04-21T10:22:47.549901360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:47.550255 containerd[1457]: time="2026-04-21T10:22:47.549993301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:47.576224 systemd[1]: Started cri-containerd-9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252.scope - libcontainer container 9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252. Apr 21 10:22:47.613229 systemd-networkd[1382]: cali3425be06179: Link UP Apr 21 10:22:47.615182 systemd-networkd[1382]: cali3425be06179: Gained carrier Apr 21 10:22:47.683787 kubelet[2564]: I0421 10:22:47.683758 2564 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee06722d-90c6-4155-8731-f921f1677bd3" path="/var/lib/kubelet/pods/ee06722d-90c6-4155-8731-f921f1677bd3/volumes" Apr 21 10:22:47.686179 containerd[1457]: 2026-04-21 10:22:46.864 [ERROR][3980] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:22:47.686179 containerd[1457]: 2026-04-21 10:22:46.890 [INFO][3980] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--196--117-k8s-calico--apiserver--7c885d8ccf--njvhb-eth0 calico-apiserver-7c885d8ccf- calico-system 103eb8aa-b6c6-480d-a370-786769ae65a2 938 0 2026-04-21 10:22:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c885d8ccf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-234-196-117 calico-apiserver-7c885d8ccf-njvhb eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali3425be06179 [] [] }} 
ContainerID="cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690" Namespace="calico-system" Pod="calico-apiserver-7c885d8ccf-njvhb" WorkloadEndpoint="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--njvhb-" Apr 21 10:22:47.686179 containerd[1457]: 2026-04-21 10:22:46.890 [INFO][3980] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690" Namespace="calico-system" Pod="calico-apiserver-7c885d8ccf-njvhb" WorkloadEndpoint="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--njvhb-eth0" Apr 21 10:22:47.686179 containerd[1457]: 2026-04-21 10:22:47.158 [INFO][4065] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690" HandleID="k8s-pod-network.cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690" Workload="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--njvhb-eth0" Apr 21 10:22:47.686179 containerd[1457]: 2026-04-21 10:22:47.182 [INFO][4065] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690" HandleID="k8s-pod-network.cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690" Workload="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--njvhb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000410170), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-196-117", "pod":"calico-apiserver-7c885d8ccf-njvhb", "timestamp":"2026-04-21 10:22:47.158851694 +0000 UTC"}, Hostname:"172-234-196-117", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003346e0)} Apr 21 10:22:47.686179 containerd[1457]: 2026-04-21 10:22:47.183 [INFO][4065] ipam/ipam_plugin.go 438: About to acquire 
host-wide IPAM lock. Apr 21 10:22:47.686179 containerd[1457]: 2026-04-21 10:22:47.437 [INFO][4065] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:47.686179 containerd[1457]: 2026-04-21 10:22:47.437 [INFO][4065] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-196-117' Apr 21 10:22:47.686179 containerd[1457]: 2026-04-21 10:22:47.441 [INFO][4065] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690" host="172-234-196-117" Apr 21 10:22:47.686179 containerd[1457]: 2026-04-21 10:22:47.462 [INFO][4065] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-196-117" Apr 21 10:22:47.686179 containerd[1457]: 2026-04-21 10:22:47.483 [INFO][4065] ipam/ipam.go 526: Trying affinity for 192.168.71.128/26 host="172-234-196-117" Apr 21 10:22:47.686179 containerd[1457]: 2026-04-21 10:22:47.497 [INFO][4065] ipam/ipam.go 160: Attempting to load block cidr=192.168.71.128/26 host="172-234-196-117" Apr 21 10:22:47.686179 containerd[1457]: 2026-04-21 10:22:47.516 [INFO][4065] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="172-234-196-117" Apr 21 10:22:47.686179 containerd[1457]: 2026-04-21 10:22:47.518 [INFO][4065] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690" host="172-234-196-117" Apr 21 10:22:47.686179 containerd[1457]: 2026-04-21 10:22:47.523 [INFO][4065] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690 Apr 21 10:22:47.686179 containerd[1457]: 2026-04-21 10:22:47.538 [INFO][4065] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690" host="172-234-196-117" Apr 21 
10:22:47.686179 containerd[1457]: 2026-04-21 10:22:47.552 [INFO][4065] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.71.134/26] block=192.168.71.128/26 handle="k8s-pod-network.cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690" host="172-234-196-117" Apr 21 10:22:47.686179 containerd[1457]: 2026-04-21 10:22:47.552 [INFO][4065] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.71.134/26] handle="k8s-pod-network.cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690" host="172-234-196-117" Apr 21 10:22:47.686179 containerd[1457]: 2026-04-21 10:22:47.552 [INFO][4065] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:47.686179 containerd[1457]: 2026-04-21 10:22:47.552 [INFO][4065] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.71.134/26] IPv6=[] ContainerID="cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690" HandleID="k8s-pod-network.cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690" Workload="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--njvhb-eth0" Apr 21 10:22:47.686854 containerd[1457]: 2026-04-21 10:22:47.598 [INFO][3980] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690" Namespace="calico-system" Pod="calico-apiserver-7c885d8ccf-njvhb" WorkloadEndpoint="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--njvhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-calico--apiserver--7c885d8ccf--njvhb-eth0", GenerateName:"calico-apiserver-7c885d8ccf-", Namespace:"calico-system", SelfLink:"", UID:"103eb8aa-b6c6-480d-a370-786769ae65a2", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c885d8ccf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"", Pod:"calico-apiserver-7c885d8ccf-njvhb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3425be06179", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:47.686854 containerd[1457]: 2026-04-21 10:22:47.599 [INFO][3980] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.134/32] ContainerID="cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690" Namespace="calico-system" Pod="calico-apiserver-7c885d8ccf-njvhb" WorkloadEndpoint="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--njvhb-eth0" Apr 21 10:22:47.686854 containerd[1457]: 2026-04-21 10:22:47.599 [INFO][3980] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3425be06179 ContainerID="cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690" Namespace="calico-system" Pod="calico-apiserver-7c885d8ccf-njvhb" WorkloadEndpoint="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--njvhb-eth0" Apr 21 10:22:47.686854 containerd[1457]: 2026-04-21 10:22:47.630 [INFO][3980] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690" Namespace="calico-system" Pod="calico-apiserver-7c885d8ccf-njvhb" 
WorkloadEndpoint="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--njvhb-eth0" Apr 21 10:22:47.686854 containerd[1457]: 2026-04-21 10:22:47.636 [INFO][3980] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690" Namespace="calico-system" Pod="calico-apiserver-7c885d8ccf-njvhb" WorkloadEndpoint="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--njvhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-calico--apiserver--7c885d8ccf--njvhb-eth0", GenerateName:"calico-apiserver-7c885d8ccf-", Namespace:"calico-system", SelfLink:"", UID:"103eb8aa-b6c6-480d-a370-786769ae65a2", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c885d8ccf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690", Pod:"calico-apiserver-7c885d8ccf-njvhb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3425be06179", MAC:"06:91:e5:bf:20:11", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:47.686854 containerd[1457]: 2026-04-21 10:22:47.666 [INFO][3980] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690" Namespace="calico-system" Pod="calico-apiserver-7c885d8ccf-njvhb" WorkloadEndpoint="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--njvhb-eth0" Apr 21 10:22:47.702293 systemd-networkd[1382]: cali1421ecd26f8: Link UP Apr 21 10:22:47.705157 systemd-networkd[1382]: cali1421ecd26f8: Gained carrier Apr 21 10:22:47.720652 containerd[1457]: time="2026-04-21T10:22:47.720611714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hmkdz,Uid:bfea7c85-36f8-4c6c-8806-1d5305a3e058,Namespace:kube-system,Attempt:1,} returns sandbox id \"9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252\"" Apr 21 10:22:47.723668 kubelet[2564]: E0421 10:22:47.723640 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:47.735109 containerd[1457]: time="2026-04-21T10:22:47.734918558Z" level=info msg="CreateContainer within sandbox \"9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:22:47.775268 containerd[1457]: time="2026-04-21T10:22:47.771120302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:47.775268 containerd[1457]: time="2026-04-21T10:22:47.771176733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:47.775268 containerd[1457]: time="2026-04-21T10:22:47.771187803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:47.775268 containerd[1457]: time="2026-04-21T10:22:47.771269044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:47.777900 containerd[1457]: 2026-04-21 10:22:46.990 [ERROR][3998] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:22:47.777900 containerd[1457]: 2026-04-21 10:22:47.101 [INFO][3998] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--196--117-k8s-coredns--66bc5c9577--gk6tg-eth0 coredns-66bc5c9577- kube-system d4453451-982f-4725-af74-0e6e82cae9ec 939 0 2026-04-21 10:22:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-234-196-117 coredns-66bc5c9577-gk6tg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1421ecd26f8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423" Namespace="kube-system" Pod="coredns-66bc5c9577-gk6tg" WorkloadEndpoint="172--234--196--117-k8s-coredns--66bc5c9577--gk6tg-" Apr 21 10:22:47.777900 containerd[1457]: 2026-04-21 10:22:47.108 [INFO][3998] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423" Namespace="kube-system" Pod="coredns-66bc5c9577-gk6tg" WorkloadEndpoint="172--234--196--117-k8s-coredns--66bc5c9577--gk6tg-eth0" Apr 21 10:22:47.777900 containerd[1457]: 2026-04-21 10:22:47.248 [INFO][4091] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423" HandleID="k8s-pod-network.03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423" Workload="172--234--196--117-k8s-coredns--66bc5c9577--gk6tg-eth0" Apr 21 10:22:47.777900 containerd[1457]: 2026-04-21 10:22:47.258 [INFO][4091] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423" HandleID="k8s-pod-network.03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423" Workload="172--234--196--117-k8s-coredns--66bc5c9577--gk6tg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291eb0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-196-117", "pod":"coredns-66bc5c9577-gk6tg", "timestamp":"2026-04-21 10:22:47.248233437 +0000 UTC"}, Hostname:"172-234-196-117", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000114840)} Apr 21 10:22:47.777900 containerd[1457]: 2026-04-21 10:22:47.258 [INFO][4091] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:47.777900 containerd[1457]: 2026-04-21 10:22:47.554 [INFO][4091] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:22:47.777900 containerd[1457]: 2026-04-21 10:22:47.554 [INFO][4091] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-196-117' Apr 21 10:22:47.777900 containerd[1457]: 2026-04-21 10:22:47.565 [INFO][4091] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423" host="172-234-196-117" Apr 21 10:22:47.777900 containerd[1457]: 2026-04-21 10:22:47.586 [INFO][4091] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-196-117" Apr 21 10:22:47.777900 containerd[1457]: 2026-04-21 10:22:47.600 [INFO][4091] ipam/ipam.go 526: Trying affinity for 192.168.71.128/26 host="172-234-196-117" Apr 21 10:22:47.777900 containerd[1457]: 2026-04-21 10:22:47.605 [INFO][4091] ipam/ipam.go 160: Attempting to load block cidr=192.168.71.128/26 host="172-234-196-117" Apr 21 10:22:47.777900 containerd[1457]: 2026-04-21 10:22:47.619 [INFO][4091] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="172-234-196-117" Apr 21 10:22:47.777900 containerd[1457]: 2026-04-21 10:22:47.619 [INFO][4091] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423" host="172-234-196-117" Apr 21 10:22:47.777900 containerd[1457]: 2026-04-21 10:22:47.622 [INFO][4091] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423 Apr 21 10:22:47.777900 containerd[1457]: 2026-04-21 10:22:47.636 [INFO][4091] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423" host="172-234-196-117" Apr 21 10:22:47.777900 containerd[1457]: 2026-04-21 10:22:47.658 [INFO][4091] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.71.135/26] block=192.168.71.128/26 
handle="k8s-pod-network.03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423" host="172-234-196-117" Apr 21 10:22:47.777900 containerd[1457]: 2026-04-21 10:22:47.658 [INFO][4091] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.71.135/26] handle="k8s-pod-network.03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423" host="172-234-196-117" Apr 21 10:22:47.777900 containerd[1457]: 2026-04-21 10:22:47.658 [INFO][4091] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:47.777900 containerd[1457]: 2026-04-21 10:22:47.658 [INFO][4091] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.71.135/26] IPv6=[] ContainerID="03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423" HandleID="k8s-pod-network.03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423" Workload="172--234--196--117-k8s-coredns--66bc5c9577--gk6tg-eth0" Apr 21 10:22:47.778682 containerd[1457]: 2026-04-21 10:22:47.685 [INFO][3998] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423" Namespace="kube-system" Pod="coredns-66bc5c9577-gk6tg" WorkloadEndpoint="172--234--196--117-k8s-coredns--66bc5c9577--gk6tg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-coredns--66bc5c9577--gk6tg-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d4453451-982f-4725-af74-0e6e82cae9ec", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"", Pod:"coredns-66bc5c9577-gk6tg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1421ecd26f8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:47.778682 containerd[1457]: 2026-04-21 10:22:47.685 [INFO][3998] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.135/32] ContainerID="03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423" Namespace="kube-system" Pod="coredns-66bc5c9577-gk6tg" WorkloadEndpoint="172--234--196--117-k8s-coredns--66bc5c9577--gk6tg-eth0" Apr 21 10:22:47.778682 containerd[1457]: 2026-04-21 10:22:47.685 [INFO][3998] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1421ecd26f8 ContainerID="03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423" Namespace="kube-system" Pod="coredns-66bc5c9577-gk6tg" 
WorkloadEndpoint="172--234--196--117-k8s-coredns--66bc5c9577--gk6tg-eth0" Apr 21 10:22:47.778682 containerd[1457]: 2026-04-21 10:22:47.713 [INFO][3998] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423" Namespace="kube-system" Pod="coredns-66bc5c9577-gk6tg" WorkloadEndpoint="172--234--196--117-k8s-coredns--66bc5c9577--gk6tg-eth0" Apr 21 10:22:47.778682 containerd[1457]: 2026-04-21 10:22:47.715 [INFO][3998] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423" Namespace="kube-system" Pod="coredns-66bc5c9577-gk6tg" WorkloadEndpoint="172--234--196--117-k8s-coredns--66bc5c9577--gk6tg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-coredns--66bc5c9577--gk6tg-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d4453451-982f-4725-af74-0e6e82cae9ec", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423", Pod:"coredns-66bc5c9577-gk6tg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1421ecd26f8", MAC:"76:ce:50:e7:13:28", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:47.778682 containerd[1457]: 2026-04-21 10:22:47.737 [INFO][3998] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423" Namespace="kube-system" Pod="coredns-66bc5c9577-gk6tg" WorkloadEndpoint="172--234--196--117-k8s-coredns--66bc5c9577--gk6tg-eth0" Apr 21 10:22:47.788195 systemd-networkd[1382]: cali48dd5b50cb4: Gained IPv6LL Apr 21 10:22:47.794630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2821185652.mount: Deactivated successfully. 
Apr 21 10:22:47.803701 containerd[1457]: time="2026-04-21T10:22:47.803671255Z" level=info msg="CreateContainer within sandbox \"9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"21f060540f998f3468b60a4085f9a812516cd316352533554b2227f449dd3716\"" Apr 21 10:22:47.806531 containerd[1457]: time="2026-04-21T10:22:47.806511107Z" level=info msg="StartContainer for \"21f060540f998f3468b60a4085f9a812516cd316352533554b2227f449dd3716\"" Apr 21 10:22:47.839096 containerd[1457]: time="2026-04-21T10:22:47.838894718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c885d8ccf-6qxjm,Uid:e20b56d9-ea94-4623-85b6-5cdc1ee6c0b0,Namespace:calico-system,Attempt:1,} returns sandbox id \"524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e\"" Apr 21 10:22:47.857311 systemd-networkd[1382]: cali234662678be: Link UP Apr 21 10:22:47.869550 systemd-networkd[1382]: cali234662678be: Gained carrier Apr 21 10:22:47.871569 systemd[1]: Started cri-containerd-cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690.scope - libcontainer container cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690. 
Apr 21 10:22:47.938817 containerd[1457]: 2026-04-21 10:22:47.456 [ERROR][4208] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:22:47.938817 containerd[1457]: 2026-04-21 10:22:47.474 [INFO][4208] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--196--117-k8s-whisker--78c59b47f6--2xpc9-eth0 whisker-78c59b47f6- calico-system 67e67654-5dde-42ee-bb8f-153e15e8d632 963 0 2026-04-21 10:22:47 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:78c59b47f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-234-196-117 whisker-78c59b47f6-2xpc9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali234662678be [] [] }} ContainerID="065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54" Namespace="calico-system" Pod="whisker-78c59b47f6-2xpc9" WorkloadEndpoint="172--234--196--117-k8s-whisker--78c59b47f6--2xpc9-" Apr 21 10:22:47.938817 containerd[1457]: 2026-04-21 10:22:47.474 [INFO][4208] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54" Namespace="calico-system" Pod="whisker-78c59b47f6-2xpc9" WorkloadEndpoint="172--234--196--117-k8s-whisker--78c59b47f6--2xpc9-eth0" Apr 21 10:22:47.938817 containerd[1457]: 2026-04-21 10:22:47.695 [INFO][4228] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54" HandleID="k8s-pod-network.065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54" Workload="172--234--196--117-k8s-whisker--78c59b47f6--2xpc9-eth0" Apr 21 10:22:47.938817 containerd[1457]: 2026-04-21 10:22:47.707 [INFO][4228] ipam/ipam_plugin.go 301: 
Auto assigning IP ContainerID="065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54" HandleID="k8s-pod-network.065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54" Workload="172--234--196--117-k8s-whisker--78c59b47f6--2xpc9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ec50), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-196-117", "pod":"whisker-78c59b47f6-2xpc9", "timestamp":"2026-04-21 10:22:47.695238364 +0000 UTC"}, Hostname:"172-234-196-117", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000464420)} Apr 21 10:22:47.938817 containerd[1457]: 2026-04-21 10:22:47.707 [INFO][4228] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:47.938817 containerd[1457]: 2026-04-21 10:22:47.707 [INFO][4228] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:22:47.938817 containerd[1457]: 2026-04-21 10:22:47.707 [INFO][4228] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-196-117' Apr 21 10:22:47.938817 containerd[1457]: 2026-04-21 10:22:47.714 [INFO][4228] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54" host="172-234-196-117" Apr 21 10:22:47.938817 containerd[1457]: 2026-04-21 10:22:47.722 [INFO][4228] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-196-117" Apr 21 10:22:47.938817 containerd[1457]: 2026-04-21 10:22:47.731 [INFO][4228] ipam/ipam.go 526: Trying affinity for 192.168.71.128/26 host="172-234-196-117" Apr 21 10:22:47.938817 containerd[1457]: 2026-04-21 10:22:47.743 [INFO][4228] ipam/ipam.go 160: Attempting to load block cidr=192.168.71.128/26 host="172-234-196-117" Apr 21 10:22:47.938817 containerd[1457]: 2026-04-21 10:22:47.746 [INFO][4228] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.71.128/26 host="172-234-196-117" Apr 21 10:22:47.938817 containerd[1457]: 2026-04-21 10:22:47.746 [INFO][4228] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.71.128/26 handle="k8s-pod-network.065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54" host="172-234-196-117" Apr 21 10:22:47.938817 containerd[1457]: 2026-04-21 10:22:47.749 [INFO][4228] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54 Apr 21 10:22:47.938817 containerd[1457]: 2026-04-21 10:22:47.759 [INFO][4228] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.71.128/26 handle="k8s-pod-network.065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54" host="172-234-196-117" Apr 21 10:22:47.938817 containerd[1457]: 2026-04-21 10:22:47.782 [INFO][4228] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.71.136/26] block=192.168.71.128/26 
handle="k8s-pod-network.065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54" host="172-234-196-117" Apr 21 10:22:47.938817 containerd[1457]: 2026-04-21 10:22:47.786 [INFO][4228] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.71.136/26] handle="k8s-pod-network.065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54" host="172-234-196-117" Apr 21 10:22:47.938817 containerd[1457]: 2026-04-21 10:22:47.786 [INFO][4228] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:47.938817 containerd[1457]: 2026-04-21 10:22:47.786 [INFO][4228] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.71.136/26] IPv6=[] ContainerID="065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54" HandleID="k8s-pod-network.065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54" Workload="172--234--196--117-k8s-whisker--78c59b47f6--2xpc9-eth0" Apr 21 10:22:47.939845 containerd[1457]: 2026-04-21 10:22:47.833 [INFO][4208] cni-plugin/k8s.go 418: Populated endpoint ContainerID="065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54" Namespace="calico-system" Pod="whisker-78c59b47f6-2xpc9" WorkloadEndpoint="172--234--196--117-k8s-whisker--78c59b47f6--2xpc9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-whisker--78c59b47f6--2xpc9-eth0", GenerateName:"whisker-78c59b47f6-", Namespace:"calico-system", SelfLink:"", UID:"67e67654-5dde-42ee-bb8f-153e15e8d632", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78c59b47f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"", Pod:"whisker-78c59b47f6-2xpc9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.71.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali234662678be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:47.939845 containerd[1457]: 2026-04-21 10:22:47.833 [INFO][4208] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.71.136/32] ContainerID="065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54" Namespace="calico-system" Pod="whisker-78c59b47f6-2xpc9" WorkloadEndpoint="172--234--196--117-k8s-whisker--78c59b47f6--2xpc9-eth0" Apr 21 10:22:47.939845 containerd[1457]: 2026-04-21 10:22:47.833 [INFO][4208] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali234662678be ContainerID="065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54" Namespace="calico-system" Pod="whisker-78c59b47f6-2xpc9" WorkloadEndpoint="172--234--196--117-k8s-whisker--78c59b47f6--2xpc9-eth0" Apr 21 10:22:47.939845 containerd[1457]: 2026-04-21 10:22:47.893 [INFO][4208] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54" Namespace="calico-system" Pod="whisker-78c59b47f6-2xpc9" WorkloadEndpoint="172--234--196--117-k8s-whisker--78c59b47f6--2xpc9-eth0" Apr 21 10:22:47.939845 containerd[1457]: 2026-04-21 10:22:47.895 [INFO][4208] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54" Namespace="calico-system" 
Pod="whisker-78c59b47f6-2xpc9" WorkloadEndpoint="172--234--196--117-k8s-whisker--78c59b47f6--2xpc9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-whisker--78c59b47f6--2xpc9-eth0", GenerateName:"whisker-78c59b47f6-", Namespace:"calico-system", SelfLink:"", UID:"67e67654-5dde-42ee-bb8f-153e15e8d632", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78c59b47f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54", Pod:"whisker-78c59b47f6-2xpc9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.71.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali234662678be", MAC:"52:76:ef:e6:57:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:47.939845 containerd[1457]: 2026-04-21 10:22:47.912 [INFO][4208] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54" Namespace="calico-system" Pod="whisker-78c59b47f6-2xpc9" WorkloadEndpoint="172--234--196--117-k8s-whisker--78c59b47f6--2xpc9-eth0" Apr 21 10:22:47.956422 containerd[1457]: 
time="2026-04-21T10:22:47.956358413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-sp5tn,Uid:11bf43e9-9c35-41d9-833a-203a96bf4b43,Namespace:calico-system,Attempt:1,} returns sandbox id \"033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677\"" Apr 21 10:22:47.998868 systemd[1]: Started cri-containerd-21f060540f998f3468b60a4085f9a812516cd316352533554b2227f449dd3716.scope - libcontainer container 21f060540f998f3468b60a4085f9a812516cd316352533554b2227f449dd3716. Apr 21 10:22:48.000560 containerd[1457]: time="2026-04-21T10:22:47.981225257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:48.000560 containerd[1457]: time="2026-04-21T10:22:47.981283248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:48.000560 containerd[1457]: time="2026-04-21T10:22:47.981307618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:48.000560 containerd[1457]: time="2026-04-21T10:22:47.982283709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:48.026399 systemd[1]: Started cri-containerd-03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423.scope - libcontainer container 03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423. 
Apr 21 10:22:48.057018 containerd[1457]: time="2026-04-21T10:22:48.055931493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c59f49bff-8gnf6,Uid:2a413923-7171-4cd9-86b6-66566674315f,Namespace:calico-system,Attempt:1,} returns sandbox id \"939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8\"" Apr 21 10:22:48.084952 containerd[1457]: time="2026-04-21T10:22:48.083576260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:48.084952 containerd[1457]: time="2026-04-21T10:22:48.083645240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:48.084952 containerd[1457]: time="2026-04-21T10:22:48.083658281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:48.084952 containerd[1457]: time="2026-04-21T10:22:48.083741921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:48.111087 kernel: calico-node[3875]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 21 10:22:48.133320 containerd[1457]: time="2026-04-21T10:22:48.130905218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c885d8ccf-njvhb,Uid:103eb8aa-b6c6-480d-a370-786769ae65a2,Namespace:calico-system,Attempt:1,} returns sandbox id \"cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690\"" Apr 21 10:22:48.139266 containerd[1457]: time="2026-04-21T10:22:48.138927704Z" level=info msg="StartContainer for \"21f060540f998f3468b60a4085f9a812516cd316352533554b2227f449dd3716\" returns successfully" Apr 21 10:22:48.186220 systemd[1]: Started cri-containerd-065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54.scope - libcontainer container 065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54. Apr 21 10:22:48.202071 containerd[1457]: time="2026-04-21T10:22:48.200340503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gk6tg,Uid:d4453451-982f-4725-af74-0e6e82cae9ec,Namespace:kube-system,Attempt:1,} returns sandbox id \"03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423\"" Apr 21 10:22:48.202245 kubelet[2564]: E0421 10:22:48.201736 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:48.211123 containerd[1457]: time="2026-04-21T10:22:48.210073847Z" level=info msg="CreateContainer within sandbox \"03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:22:48.249067 containerd[1457]: time="2026-04-21T10:22:48.247185655Z" level=info msg="CreateContainer within sandbox \"03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"20050496beb4b0d2be7721cdfcaa37483854f3c7398a089062ea3166b2c7fc0f\"" Apr 21 10:22:48.249366 containerd[1457]: time="2026-04-21T10:22:48.249214637Z" level=info msg="StartContainer for \"20050496beb4b0d2be7721cdfcaa37483854f3c7398a089062ea3166b2c7fc0f\"" Apr 21 10:22:48.319179 systemd[1]: Started cri-containerd-20050496beb4b0d2be7721cdfcaa37483854f3c7398a089062ea3166b2c7fc0f.scope - libcontainer container 20050496beb4b0d2be7721cdfcaa37483854f3c7398a089062ea3166b2c7fc0f. Apr 21 10:22:48.387122 containerd[1457]: time="2026-04-21T10:22:48.387009986Z" level=info msg="StartContainer for \"20050496beb4b0d2be7721cdfcaa37483854f3c7398a089062ea3166b2c7fc0f\" returns successfully" Apr 21 10:22:48.412686 containerd[1457]: time="2026-04-21T10:22:48.412221736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:48.413175 containerd[1457]: time="2026-04-21T10:22:48.413072185Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 21 10:22:48.414773 containerd[1457]: time="2026-04-21T10:22:48.413642282Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:48.416017 containerd[1457]: time="2026-04-21T10:22:48.415997757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:48.417040 containerd[1457]: time="2026-04-21T10:22:48.416721225Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.743108675s" Apr 21 10:22:48.417470 containerd[1457]: time="2026-04-21T10:22:48.417121809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 21 10:22:48.419176 containerd[1457]: time="2026-04-21T10:22:48.418914468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:22:48.423119 containerd[1457]: time="2026-04-21T10:22:48.423097423Z" level=info msg="CreateContainer within sandbox \"80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 21 10:22:48.471424 containerd[1457]: time="2026-04-21T10:22:48.471339311Z" level=info msg="CreateContainer within sandbox \"80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"42ec3c4f2ad63b4d2104923143e18b04a4ff760ca12d8ec73655d523cb873a72\"" Apr 21 10:22:48.473128 containerd[1457]: time="2026-04-21T10:22:48.472852837Z" level=info msg="StartContainer for \"42ec3c4f2ad63b4d2104923143e18b04a4ff760ca12d8ec73655d523cb873a72\"" Apr 21 10:22:48.473487 containerd[1457]: time="2026-04-21T10:22:48.473411553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78c59b47f6-2xpc9,Uid:67e67654-5dde-42ee-bb8f-153e15e8d632,Namespace:calico-system,Attempt:0,} returns sandbox id \"065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54\"" Apr 21 10:22:48.518224 systemd[1]: Started cri-containerd-42ec3c4f2ad63b4d2104923143e18b04a4ff760ca12d8ec73655d523cb873a72.scope - libcontainer container 42ec3c4f2ad63b4d2104923143e18b04a4ff760ca12d8ec73655d523cb873a72. 
Apr 21 10:22:48.575840 containerd[1457]: time="2026-04-21T10:22:48.575742361Z" level=info msg="StartContainer for \"42ec3c4f2ad63b4d2104923143e18b04a4ff760ca12d8ec73655d523cb873a72\" returns successfully" Apr 21 10:22:48.748172 systemd-networkd[1382]: cali1ed7f3576a6: Gained IPv6LL Apr 21 10:22:48.751143 systemd-networkd[1382]: caliaee47bb1c9a: Gained IPv6LL Apr 21 10:22:48.876258 systemd-networkd[1382]: cali18244db7ee1: Gained IPv6LL Apr 21 10:22:48.920868 kubelet[2564]: E0421 10:22:48.920836 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:48.924503 kubelet[2564]: E0421 10:22:48.924476 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:48.942018 systemd-networkd[1382]: cali1f7288f1c56: Gained IPv6LL Apr 21 10:22:48.943558 kubelet[2564]: I0421 10:22:48.942961 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-hmkdz" podStartSLOduration=24.942947771 podStartE2EDuration="24.942947771s" podCreationTimestamp="2026-04-21 10:22:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:22:48.940833459 +0000 UTC m=+31.372979212" watchObservedRunningTime="2026-04-21 10:22:48.942947771 +0000 UTC m=+31.375093514" Apr 21 10:22:48.960007 kubelet[2564]: I0421 10:22:48.959897 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gk6tg" podStartSLOduration=24.959884573 podStartE2EDuration="24.959884573s" podCreationTimestamp="2026-04-21 10:22:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-04-21 10:22:48.959461669 +0000 UTC m=+31.391607412" watchObservedRunningTime="2026-04-21 10:22:48.959884573 +0000 UTC m=+31.392030316" Apr 21 10:22:49.027835 systemd-networkd[1382]: vxlan.calico: Link UP Apr 21 10:22:49.027844 systemd-networkd[1382]: vxlan.calico: Gained carrier Apr 21 10:22:49.068202 systemd-networkd[1382]: cali3425be06179: Gained IPv6LL Apr 21 10:22:49.708306 systemd-networkd[1382]: cali1421ecd26f8: Gained IPv6LL Apr 21 10:22:49.709202 systemd-networkd[1382]: cali234662678be: Gained IPv6LL Apr 21 10:22:49.940838 kubelet[2564]: E0421 10:22:49.940490 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:49.940838 kubelet[2564]: E0421 10:22:49.940748 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:50.177582 containerd[1457]: time="2026-04-21T10:22:50.177524889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:50.178477 containerd[1457]: time="2026-04-21T10:22:50.178328657Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 21 10:22:50.180086 containerd[1457]: time="2026-04-21T10:22:50.179093024Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:50.182076 containerd[1457]: time="2026-04-21T10:22:50.181418776Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Apr 21 10:22:50.182808 containerd[1457]: time="2026-04-21T10:22:50.182386785Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.763445857s" Apr 21 10:22:50.182808 containerd[1457]: time="2026-04-21T10:22:50.182430756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:22:50.184684 containerd[1457]: time="2026-04-21T10:22:50.184648286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 21 10:22:50.186997 containerd[1457]: time="2026-04-21T10:22:50.186962638Z" level=info msg="CreateContainer within sandbox \"524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:22:50.200218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount600428025.mount: Deactivated successfully. 
Apr 21 10:22:50.202906 containerd[1457]: time="2026-04-21T10:22:50.202869518Z" level=info msg="CreateContainer within sandbox \"524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5d9fc41b39572ecd40a8b06d9b6c2bee68ba9cb4b4db7b4b90632e20dcd34645\"" Apr 21 10:22:50.204369 containerd[1457]: time="2026-04-21T10:22:50.203413593Z" level=info msg="StartContainer for \"5d9fc41b39572ecd40a8b06d9b6c2bee68ba9cb4b4db7b4b90632e20dcd34645\"" Apr 21 10:22:50.269171 systemd[1]: Started cri-containerd-5d9fc41b39572ecd40a8b06d9b6c2bee68ba9cb4b4db7b4b90632e20dcd34645.scope - libcontainer container 5d9fc41b39572ecd40a8b06d9b6c2bee68ba9cb4b4db7b4b90632e20dcd34645. Apr 21 10:22:50.316214 containerd[1457]: time="2026-04-21T10:22:50.316160347Z" level=info msg="StartContainer for \"5d9fc41b39572ecd40a8b06d9b6c2bee68ba9cb4b4db7b4b90632e20dcd34645\" returns successfully" Apr 21 10:22:50.668199 systemd-networkd[1382]: vxlan.calico: Gained IPv6LL Apr 21 10:22:50.957507 kubelet[2564]: E0421 10:22:50.957360 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:50.959397 kubelet[2564]: E0421 10:22:50.958263 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:22:50.985374 kubelet[2564]: I0421 10:22:50.982130 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7c885d8ccf-6qxjm" podStartSLOduration=16.68111844 podStartE2EDuration="18.982116568s" podCreationTimestamp="2026-04-21 10:22:32 +0000 UTC" firstStartedPulling="2026-04-21 10:22:47.883023403 +0000 UTC m=+30.315169146" lastFinishedPulling="2026-04-21 10:22:50.184021531 +0000 UTC m=+32.616167274" 
observedRunningTime="2026-04-21 10:22:50.980791085 +0000 UTC m=+33.412936828" watchObservedRunningTime="2026-04-21 10:22:50.982116568 +0000 UTC m=+33.414262311" Apr 21 10:22:51.512794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1196584041.mount: Deactivated successfully. Apr 21 10:22:51.902486 containerd[1457]: time="2026-04-21T10:22:51.902433586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:51.903623 containerd[1457]: time="2026-04-21T10:22:51.903569206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 21 10:22:51.904126 containerd[1457]: time="2026-04-21T10:22:51.904095771Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:51.913227 containerd[1457]: time="2026-04-21T10:22:51.912625396Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:51.914310 containerd[1457]: time="2026-04-21T10:22:51.914279851Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 1.729453892s" Apr 21 10:22:51.914310 containerd[1457]: time="2026-04-21T10:22:51.914310471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 21 10:22:51.917173 containerd[1457]: 
time="2026-04-21T10:22:51.917154146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 21 10:22:51.921021 containerd[1457]: time="2026-04-21T10:22:51.920981650Z" level=info msg="CreateContainer within sandbox \"033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 21 10:22:51.933951 containerd[1457]: time="2026-04-21T10:22:51.933811294Z" level=info msg="CreateContainer within sandbox \"033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ebc8b8f3a7ee90733b3cb1575a9d048f88387769a1d347fb2fc475ac158cd7d2\"" Apr 21 10:22:51.935009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount223545145.mount: Deactivated successfully. Apr 21 10:22:51.939099 containerd[1457]: time="2026-04-21T10:22:51.937090002Z" level=info msg="StartContainer for \"ebc8b8f3a7ee90733b3cb1575a9d048f88387769a1d347fb2fc475ac158cd7d2\"" Apr 21 10:22:51.965696 kubelet[2564]: I0421 10:22:51.965171 2564 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:22:51.998647 systemd[1]: Started cri-containerd-ebc8b8f3a7ee90733b3cb1575a9d048f88387769a1d347fb2fc475ac158cd7d2.scope - libcontainer container ebc8b8f3a7ee90733b3cb1575a9d048f88387769a1d347fb2fc475ac158cd7d2. 
Apr 21 10:22:52.048458 containerd[1457]: time="2026-04-21T10:22:52.048297440Z" level=info msg="StartContainer for \"ebc8b8f3a7ee90733b3cb1575a9d048f88387769a1d347fb2fc475ac158cd7d2\" returns successfully" Apr 21 10:22:53.975347 kubelet[2564]: I0421 10:22:53.975321 2564 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:22:54.069757 containerd[1457]: time="2026-04-21T10:22:54.068650480Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:54.069757 containerd[1457]: time="2026-04-21T10:22:54.069571197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 21 10:22:54.069757 containerd[1457]: time="2026-04-21T10:22:54.069702138Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:54.071831 containerd[1457]: time="2026-04-21T10:22:54.071800993Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:54.072564 containerd[1457]: time="2026-04-21T10:22:54.072531219Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.155351772s" Apr 21 10:22:54.072612 containerd[1457]: time="2026-04-21T10:22:54.072563309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference 
\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 21 10:22:54.075297 containerd[1457]: time="2026-04-21T10:22:54.075274738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:22:54.094812 containerd[1457]: time="2026-04-21T10:22:54.094769641Z" level=info msg="CreateContainer within sandbox \"939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 21 10:22:54.110798 containerd[1457]: time="2026-04-21T10:22:54.110744487Z" level=info msg="CreateContainer within sandbox \"939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6c65f23675807fc73990e87d3f1e290dbd59383523af75f8feab6fa5596414f6\"" Apr 21 10:22:54.115116 containerd[1457]: time="2026-04-21T10:22:54.113246085Z" level=info msg="StartContainer for \"6c65f23675807fc73990e87d3f1e290dbd59383523af75f8feab6fa5596414f6\"" Apr 21 10:22:54.170334 systemd[1]: Started cri-containerd-6c65f23675807fc73990e87d3f1e290dbd59383523af75f8feab6fa5596414f6.scope - libcontainer container 6c65f23675807fc73990e87d3f1e290dbd59383523af75f8feab6fa5596414f6. 
Apr 21 10:22:54.222307 containerd[1457]: time="2026-04-21T10:22:54.222262259Z" level=info msg="StartContainer for \"6c65f23675807fc73990e87d3f1e290dbd59383523af75f8feab6fa5596414f6\" returns successfully" Apr 21 10:22:54.263333 containerd[1457]: time="2026-04-21T10:22:54.263232638Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:54.264312 containerd[1457]: time="2026-04-21T10:22:54.264264875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 21 10:22:54.266391 containerd[1457]: time="2026-04-21T10:22:54.266368261Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 190.944141ms" Apr 21 10:22:54.266590 containerd[1457]: time="2026-04-21T10:22:54.266481141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:22:54.268081 containerd[1457]: time="2026-04-21T10:22:54.267711441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 21 10:22:54.272268 containerd[1457]: time="2026-04-21T10:22:54.272239413Z" level=info msg="CreateContainer within sandbox \"cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:22:54.281702 containerd[1457]: time="2026-04-21T10:22:54.281649152Z" level=info msg="CreateContainer within sandbox \"cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"ab8636f10f4ff554a13cc5ec3b205948378592fcb153b86fc12634180426bc51\"" Apr 21 10:22:54.283594 containerd[1457]: time="2026-04-21T10:22:54.282620889Z" level=info msg="StartContainer for \"ab8636f10f4ff554a13cc5ec3b205948378592fcb153b86fc12634180426bc51\"" Apr 21 10:22:54.340218 systemd[1]: Started cri-containerd-ab8636f10f4ff554a13cc5ec3b205948378592fcb153b86fc12634180426bc51.scope - libcontainer container ab8636f10f4ff554a13cc5ec3b205948378592fcb153b86fc12634180426bc51. Apr 21 10:22:54.393930 containerd[1457]: time="2026-04-21T10:22:54.393897350Z" level=info msg="StartContainer for \"ab8636f10f4ff554a13cc5ec3b205948378592fcb153b86fc12634180426bc51\" returns successfully" Apr 21 10:22:54.994139 kubelet[2564]: I0421 10:22:54.993861 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-sp5tn" podStartSLOduration=19.03785975 podStartE2EDuration="22.993845401s" podCreationTimestamp="2026-04-21 10:22:32 +0000 UTC" firstStartedPulling="2026-04-21 10:22:47.96054585 +0000 UTC m=+30.392691593" lastFinishedPulling="2026-04-21 10:22:51.916531501 +0000 UTC m=+34.348677244" observedRunningTime="2026-04-21 10:22:52.988455812 +0000 UTC m=+35.420601555" watchObservedRunningTime="2026-04-21 10:22:54.993845401 +0000 UTC m=+37.425991154" Apr 21 10:22:55.000022 kubelet[2564]: I0421 10:22:54.998362 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7c885d8ccf-njvhb" podStartSLOduration=16.865962958 podStartE2EDuration="22.998033771s" podCreationTimestamp="2026-04-21 10:22:32 +0000 UTC" firstStartedPulling="2026-04-21 10:22:48.135308725 +0000 UTC m=+30.567454468" lastFinishedPulling="2026-04-21 10:22:54.267379538 +0000 UTC m=+36.699525281" observedRunningTime="2026-04-21 10:22:54.992779513 +0000 UTC m=+37.424925256" watchObservedRunningTime="2026-04-21 10:22:54.998033771 +0000 UTC m=+37.430179514" Apr 21 10:22:55.118194 kubelet[2564]: I0421 10:22:55.116511 2564 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:22:55.161940 systemd[1]: run-containerd-runc-k8s.io-7df4a98b04cc4a1266211aa8d0bb7da4f652a4c3b622a8f45874dc3b74eb71d3-runc.6R0S8K.mount: Deactivated successfully. Apr 21 10:22:55.227264 containerd[1457]: time="2026-04-21T10:22:55.227202478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:55.229186 containerd[1457]: time="2026-04-21T10:22:55.229146401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 21 10:22:55.230097 containerd[1457]: time="2026-04-21T10:22:55.229918236Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:55.232705 containerd[1457]: time="2026-04-21T10:22:55.232682495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:55.233662 containerd[1457]: time="2026-04-21T10:22:55.233638882Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 965.904971ms" Apr 21 10:22:55.234101 containerd[1457]: time="2026-04-21T10:22:55.234036595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 21 10:22:55.235811 containerd[1457]: time="2026-04-21T10:22:55.235649616Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 21 10:22:55.239371 containerd[1457]: time="2026-04-21T10:22:55.239332461Z" level=info msg="CreateContainer within sandbox \"065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 21 10:22:55.260673 containerd[1457]: time="2026-04-21T10:22:55.260387494Z" level=info msg="CreateContainer within sandbox \"065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"baadd32cfea08300d925ac374f2350dfb67cd3ed9218a2cadbfcfe9bff1953d5\"" Apr 21 10:22:55.262680 containerd[1457]: time="2026-04-21T10:22:55.262513589Z" level=info msg="StartContainer for \"baadd32cfea08300d925ac374f2350dfb67cd3ed9218a2cadbfcfe9bff1953d5\"" Apr 21 10:22:55.326242 systemd[1]: Started cri-containerd-baadd32cfea08300d925ac374f2350dfb67cd3ed9218a2cadbfcfe9bff1953d5.scope - libcontainer container baadd32cfea08300d925ac374f2350dfb67cd3ed9218a2cadbfcfe9bff1953d5. 
Apr 21 10:22:55.347588 kubelet[2564]: I0421 10:22:55.347515 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5c59f49bff-8gnf6" podStartSLOduration=16.333492235 podStartE2EDuration="22.347497789s" podCreationTimestamp="2026-04-21 10:22:33 +0000 UTC" firstStartedPulling="2026-04-21 10:22:48.059643823 +0000 UTC m=+30.491789566" lastFinishedPulling="2026-04-21 10:22:54.073649377 +0000 UTC m=+36.505795120" observedRunningTime="2026-04-21 10:22:55.011118292 +0000 UTC m=+37.443264065" watchObservedRunningTime="2026-04-21 10:22:55.347497789 +0000 UTC m=+37.779643532" Apr 21 10:22:55.430236 containerd[1457]: time="2026-04-21T10:22:55.430024673Z" level=info msg="StartContainer for \"baadd32cfea08300d925ac374f2350dfb67cd3ed9218a2cadbfcfe9bff1953d5\" returns successfully" Apr 21 10:22:55.992960 kubelet[2564]: I0421 10:22:55.991690 2564 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:22:55.992960 kubelet[2564]: I0421 10:22:55.992146 2564 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:22:56.140559 containerd[1457]: time="2026-04-21T10:22:56.140518566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:56.141353 containerd[1457]: time="2026-04-21T10:22:56.141299441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 21 10:22:56.142103 containerd[1457]: time="2026-04-21T10:22:56.141665263Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:56.143929 containerd[1457]: time="2026-04-21T10:22:56.143883148Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:56.144735 containerd[1457]: time="2026-04-21T10:22:56.144703123Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 909.025837ms" Apr 21 10:22:56.144785 containerd[1457]: time="2026-04-21T10:22:56.144734973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 21 10:22:56.147173 containerd[1457]: time="2026-04-21T10:22:56.147143538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 21 10:22:56.150546 containerd[1457]: time="2026-04-21T10:22:56.150522370Z" level=info msg="CreateContainer within sandbox \"80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 21 10:22:56.167235 containerd[1457]: time="2026-04-21T10:22:56.167142537Z" level=info msg="CreateContainer within sandbox \"80ede6bd84fd12df02abb7b9c387101fe92ef4c9f069ad3ea2230cdc14f30353\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7ef6e0308c21ef7da1987ac8d941cf8aeee6756287562815b5195c8cc772481b\"" Apr 21 10:22:56.169190 containerd[1457]: time="2026-04-21T10:22:56.167711770Z" level=info msg="StartContainer for \"7ef6e0308c21ef7da1987ac8d941cf8aeee6756287562815b5195c8cc772481b\"" Apr 21 10:22:56.214204 systemd[1]: Started 
cri-containerd-7ef6e0308c21ef7da1987ac8d941cf8aeee6756287562815b5195c8cc772481b.scope - libcontainer container 7ef6e0308c21ef7da1987ac8d941cf8aeee6756287562815b5195c8cc772481b. Apr 21 10:22:56.263626 containerd[1457]: time="2026-04-21T10:22:56.263426723Z" level=info msg="StartContainer for \"7ef6e0308c21ef7da1987ac8d941cf8aeee6756287562815b5195c8cc772481b\" returns successfully" Apr 21 10:22:56.752735 kubelet[2564]: I0421 10:22:56.752635 2564 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 21 10:22:56.755043 kubelet[2564]: I0421 10:22:56.754568 2564 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 21 10:22:57.108794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3220795710.mount: Deactivated successfully. Apr 21 10:22:57.121767 containerd[1457]: time="2026-04-21T10:22:57.121717910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:57.122593 containerd[1457]: time="2026-04-21T10:22:57.122439924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 21 10:22:57.124096 containerd[1457]: time="2026-04-21T10:22:57.123169559Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:57.126493 containerd[1457]: time="2026-04-21T10:22:57.126425318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:57.127319 containerd[1457]: 
time="2026-04-21T10:22:57.127274954Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 980.096304ms" Apr 21 10:22:57.127370 containerd[1457]: time="2026-04-21T10:22:57.127322804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 21 10:22:57.131932 containerd[1457]: time="2026-04-21T10:22:57.131902051Z" level=info msg="CreateContainer within sandbox \"065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 21 10:22:57.155192 containerd[1457]: time="2026-04-21T10:22:57.155135091Z" level=info msg="CreateContainer within sandbox \"065fe54740e4d8fb04f2c6d391bfc5fddbf289a147bfb851c0a71aa49a33ed54\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"bf0aa40a89c43307dfabb4a22c21feb57f1ac40a8a8baceaf9e69da0241e78ec\"" Apr 21 10:22:57.156023 containerd[1457]: time="2026-04-21T10:22:57.155985226Z" level=info msg="StartContainer for \"bf0aa40a89c43307dfabb4a22c21feb57f1ac40a8a8baceaf9e69da0241e78ec\"" Apr 21 10:22:57.199831 systemd[1]: Started cri-containerd-bf0aa40a89c43307dfabb4a22c21feb57f1ac40a8a8baceaf9e69da0241e78ec.scope - libcontainer container bf0aa40a89c43307dfabb4a22c21feb57f1ac40a8a8baceaf9e69da0241e78ec. 
Apr 21 10:22:57.256222 containerd[1457]: time="2026-04-21T10:22:57.256011896Z" level=info msg="StartContainer for \"bf0aa40a89c43307dfabb4a22c21feb57f1ac40a8a8baceaf9e69da0241e78ec\" returns successfully" Apr 21 10:22:58.015299 kubelet[2564]: I0421 10:22:58.015231 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-84zl9" podStartSLOduration=15.541554512 podStartE2EDuration="25.015210668s" podCreationTimestamp="2026-04-21 10:22:33 +0000 UTC" firstStartedPulling="2026-04-21 10:22:46.672452506 +0000 UTC m=+29.104598249" lastFinishedPulling="2026-04-21 10:22:56.146108662 +0000 UTC m=+38.578254405" observedRunningTime="2026-04-21 10:22:57.019350596 +0000 UTC m=+39.451496349" watchObservedRunningTime="2026-04-21 10:22:58.015210668 +0000 UTC m=+40.447356411" Apr 21 10:22:58.016005 kubelet[2564]: I0421 10:22:58.015715 2564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-78c59b47f6-2xpc9" podStartSLOduration=2.373176845 podStartE2EDuration="11.015704541s" podCreationTimestamp="2026-04-21 10:22:47 +0000 UTC" firstStartedPulling="2026-04-21 10:22:48.485416902 +0000 UTC m=+30.917562645" lastFinishedPulling="2026-04-21 10:22:57.127944598 +0000 UTC m=+39.560090341" observedRunningTime="2026-04-21 10:22:58.014978327 +0000 UTC m=+40.447124070" watchObservedRunningTime="2026-04-21 10:22:58.015704541 +0000 UTC m=+40.447850304" Apr 21 10:23:03.324287 kubelet[2564]: I0421 10:23:03.323869 2564 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:23:03.406230 systemd[1]: run-containerd-runc-k8s.io-6c65f23675807fc73990e87d3f1e290dbd59383523af75f8feab6fa5596414f6-runc.5BGgXD.mount: Deactivated successfully. 
Apr 21 10:23:05.445570 kubelet[2564]: I0421 10:23:05.445136 2564 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:23:05.476984 systemd[1]: run-containerd-runc-k8s.io-ebc8b8f3a7ee90733b3cb1575a9d048f88387769a1d347fb2fc475ac158cd7d2-runc.qniYsu.mount: Deactivated successfully. Apr 21 10:23:07.809647 systemd[1]: run-containerd-runc-k8s.io-ebc8b8f3a7ee90733b3cb1575a9d048f88387769a1d347fb2fc475ac158cd7d2-runc.K99ysE.mount: Deactivated successfully. Apr 21 10:23:17.554435 kubelet[2564]: I0421 10:23:17.553893 2564 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:23:17.664743 containerd[1457]: time="2026-04-21T10:23:17.664231442Z" level=info msg="StopPodSandbox for \"d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e\"" Apr 21 10:23:17.760096 containerd[1457]: 2026-04-21 10:23:17.715 [WARNING][5245] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-calico--kube--controllers--5c59f49bff--8gnf6-eth0", GenerateName:"calico-kube-controllers-5c59f49bff-", Namespace:"calico-system", SelfLink:"", UID:"2a413923-7171-4cd9-86b6-66566674315f", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c59f49bff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8", Pod:"calico-kube-controllers-5c59f49bff-8gnf6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.71.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali18244db7ee1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:23:17.760096 containerd[1457]: 2026-04-21 10:23:17.715 [INFO][5245] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" Apr 21 10:23:17.760096 containerd[1457]: 2026-04-21 10:23:17.715 [INFO][5245] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" iface="eth0" netns="" Apr 21 10:23:17.760096 containerd[1457]: 2026-04-21 10:23:17.715 [INFO][5245] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" Apr 21 10:23:17.760096 containerd[1457]: 2026-04-21 10:23:17.715 [INFO][5245] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" Apr 21 10:23:17.760096 containerd[1457]: 2026-04-21 10:23:17.746 [INFO][5254] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" HandleID="k8s-pod-network.d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" Workload="172--234--196--117-k8s-calico--kube--controllers--5c59f49bff--8gnf6-eth0" Apr 21 10:23:17.760096 containerd[1457]: 2026-04-21 10:23:17.746 [INFO][5254] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:23:17.760096 containerd[1457]: 2026-04-21 10:23:17.746 [INFO][5254] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:23:17.760096 containerd[1457]: 2026-04-21 10:23:17.753 [WARNING][5254] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" HandleID="k8s-pod-network.d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" Workload="172--234--196--117-k8s-calico--kube--controllers--5c59f49bff--8gnf6-eth0" Apr 21 10:23:17.760096 containerd[1457]: 2026-04-21 10:23:17.753 [INFO][5254] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" HandleID="k8s-pod-network.d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" Workload="172--234--196--117-k8s-calico--kube--controllers--5c59f49bff--8gnf6-eth0" Apr 21 10:23:17.760096 containerd[1457]: 2026-04-21 10:23:17.754 [INFO][5254] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:23:17.760096 containerd[1457]: 2026-04-21 10:23:17.756 [INFO][5245] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" Apr 21 10:23:17.760687 containerd[1457]: time="2026-04-21T10:23:17.760135680Z" level=info msg="TearDown network for sandbox \"d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e\" successfully" Apr 21 10:23:17.760687 containerd[1457]: time="2026-04-21T10:23:17.760166290Z" level=info msg="StopPodSandbox for \"d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e\" returns successfully" Apr 21 10:23:17.760866 containerd[1457]: time="2026-04-21T10:23:17.760835871Z" level=info msg="RemovePodSandbox for \"d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e\"" Apr 21 10:23:17.760896 containerd[1457]: time="2026-04-21T10:23:17.760874641Z" level=info msg="Forcibly stopping sandbox \"d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e\"" Apr 21 10:23:17.840092 containerd[1457]: 2026-04-21 10:23:17.803 [WARNING][5269] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-calico--kube--controllers--5c59f49bff--8gnf6-eth0", GenerateName:"calico-kube-controllers-5c59f49bff-", Namespace:"calico-system", SelfLink:"", UID:"2a413923-7171-4cd9-86b6-66566674315f", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c59f49bff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"939e30c248142fc381774732a63b753fa64f06bc14c7576bbd5949c8778237c8", Pod:"calico-kube-controllers-5c59f49bff-8gnf6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.71.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali18244db7ee1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:23:17.840092 containerd[1457]: 2026-04-21 10:23:17.804 [INFO][5269] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" Apr 21 10:23:17.840092 containerd[1457]: 2026-04-21 10:23:17.804 [INFO][5269] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" iface="eth0" netns="" Apr 21 10:23:17.840092 containerd[1457]: 2026-04-21 10:23:17.804 [INFO][5269] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" Apr 21 10:23:17.840092 containerd[1457]: 2026-04-21 10:23:17.804 [INFO][5269] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" Apr 21 10:23:17.840092 containerd[1457]: 2026-04-21 10:23:17.825 [INFO][5276] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" HandleID="k8s-pod-network.d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" Workload="172--234--196--117-k8s-calico--kube--controllers--5c59f49bff--8gnf6-eth0" Apr 21 10:23:17.840092 containerd[1457]: 2026-04-21 10:23:17.826 [INFO][5276] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:23:17.840092 containerd[1457]: 2026-04-21 10:23:17.826 [INFO][5276] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:23:17.840092 containerd[1457]: 2026-04-21 10:23:17.833 [WARNING][5276] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" HandleID="k8s-pod-network.d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" Workload="172--234--196--117-k8s-calico--kube--controllers--5c59f49bff--8gnf6-eth0" Apr 21 10:23:17.840092 containerd[1457]: 2026-04-21 10:23:17.833 [INFO][5276] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" HandleID="k8s-pod-network.d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" Workload="172--234--196--117-k8s-calico--kube--controllers--5c59f49bff--8gnf6-eth0" Apr 21 10:23:17.840092 containerd[1457]: 2026-04-21 10:23:17.834 [INFO][5276] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:23:17.840092 containerd[1457]: 2026-04-21 10:23:17.837 [INFO][5269] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e" Apr 21 10:23:17.840809 containerd[1457]: time="2026-04-21T10:23:17.840023932Z" level=info msg="TearDown network for sandbox \"d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e\" successfully" Apr 21 10:23:17.844841 containerd[1457]: time="2026-04-21T10:23:17.844798540Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:23:17.844901 containerd[1457]: time="2026-04-21T10:23:17.844882190Z" level=info msg="RemovePodSandbox \"d24f3725f68d4bf23db27b4689e25230dc2ed7cea68584ebf68bf762165c6b4e\" returns successfully" Apr 21 10:23:17.845577 containerd[1457]: time="2026-04-21T10:23:17.845556431Z" level=info msg="StopPodSandbox for \"8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc\"" Apr 21 10:23:17.920609 containerd[1457]: 2026-04-21 10:23:17.884 [WARNING][5290] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-coredns--66bc5c9577--gk6tg-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d4453451-982f-4725-af74-0e6e82cae9ec", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423", Pod:"coredns-66bc5c9577-gk6tg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1421ecd26f8", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:23:17.920609 containerd[1457]: 2026-04-21 10:23:17.884 [INFO][5290] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" Apr 21 10:23:17.920609 containerd[1457]: 2026-04-21 10:23:17.884 [INFO][5290] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" iface="eth0" netns="" Apr 21 10:23:17.920609 containerd[1457]: 2026-04-21 10:23:17.884 [INFO][5290] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" Apr 21 10:23:17.920609 containerd[1457]: 2026-04-21 10:23:17.884 [INFO][5290] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" Apr 21 10:23:17.920609 containerd[1457]: 2026-04-21 10:23:17.907 [INFO][5297] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" HandleID="k8s-pod-network.8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" Workload="172--234--196--117-k8s-coredns--66bc5c9577--gk6tg-eth0" Apr 21 10:23:17.920609 containerd[1457]: 2026-04-21 10:23:17.907 [INFO][5297] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:23:17.920609 containerd[1457]: 2026-04-21 10:23:17.907 [INFO][5297] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:23:17.920609 containerd[1457]: 2026-04-21 10:23:17.913 [WARNING][5297] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" HandleID="k8s-pod-network.8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" Workload="172--234--196--117-k8s-coredns--66bc5c9577--gk6tg-eth0" Apr 21 10:23:17.920609 containerd[1457]: 2026-04-21 10:23:17.913 [INFO][5297] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" HandleID="k8s-pod-network.8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" Workload="172--234--196--117-k8s-coredns--66bc5c9577--gk6tg-eth0" Apr 21 10:23:17.920609 containerd[1457]: 2026-04-21 10:23:17.915 [INFO][5297] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:23:17.920609 containerd[1457]: 2026-04-21 10:23:17.917 [INFO][5290] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" Apr 21 10:23:17.920609 containerd[1457]: time="2026-04-21T10:23:17.920360055Z" level=info msg="TearDown network for sandbox \"8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc\" successfully" Apr 21 10:23:17.920609 containerd[1457]: time="2026-04-21T10:23:17.920393095Z" level=info msg="StopPodSandbox for \"8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc\" returns successfully" Apr 21 10:23:17.921964 containerd[1457]: time="2026-04-21T10:23:17.921935897Z" level=info msg="RemovePodSandbox for \"8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc\"" Apr 21 10:23:17.922026 containerd[1457]: time="2026-04-21T10:23:17.921978278Z" level=info msg="Forcibly stopping sandbox \"8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc\"" Apr 21 10:23:18.005444 containerd[1457]: 2026-04-21 10:23:17.957 [WARNING][5311] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-coredns--66bc5c9577--gk6tg-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d4453451-982f-4725-af74-0e6e82cae9ec", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"03093903b60fa9c1be86db9ac39938d644f55aa9786bff2d93bd02ed32a81423", Pod:"coredns-66bc5c9577-gk6tg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1421ecd26f8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:23:18.005444 containerd[1457]: 2026-04-21 10:23:17.957 [INFO][5311] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" Apr 21 10:23:18.005444 containerd[1457]: 2026-04-21 10:23:17.958 [INFO][5311] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" iface="eth0" netns="" Apr 21 10:23:18.005444 containerd[1457]: 2026-04-21 10:23:17.958 [INFO][5311] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" Apr 21 10:23:18.005444 containerd[1457]: 2026-04-21 10:23:17.958 [INFO][5311] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" Apr 21 10:23:18.005444 containerd[1457]: 2026-04-21 10:23:17.991 [INFO][5318] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" HandleID="k8s-pod-network.8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" Workload="172--234--196--117-k8s-coredns--66bc5c9577--gk6tg-eth0" Apr 21 10:23:18.005444 containerd[1457]: 2026-04-21 10:23:17.991 [INFO][5318] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:23:18.005444 containerd[1457]: 2026-04-21 10:23:17.991 [INFO][5318] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:23:18.005444 containerd[1457]: 2026-04-21 10:23:17.997 [WARNING][5318] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" HandleID="k8s-pod-network.8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" Workload="172--234--196--117-k8s-coredns--66bc5c9577--gk6tg-eth0" Apr 21 10:23:18.005444 containerd[1457]: 2026-04-21 10:23:17.997 [INFO][5318] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" HandleID="k8s-pod-network.8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" Workload="172--234--196--117-k8s-coredns--66bc5c9577--gk6tg-eth0" Apr 21 10:23:18.005444 containerd[1457]: 2026-04-21 10:23:17.999 [INFO][5318] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:23:18.005444 containerd[1457]: 2026-04-21 10:23:18.002 [INFO][5311] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc" Apr 21 10:23:18.006480 containerd[1457]: time="2026-04-21T10:23:18.005518075Z" level=info msg="TearDown network for sandbox \"8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc\" successfully" Apr 21 10:23:18.009087 containerd[1457]: time="2026-04-21T10:23:18.009042560Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:23:18.009194 containerd[1457]: time="2026-04-21T10:23:18.009140871Z" level=info msg="RemovePodSandbox \"8a0c604326cbf9e6ea041b7201efbd3000ed36705d454e0f3d3a6eed591939bc\" returns successfully" Apr 21 10:23:18.009956 containerd[1457]: time="2026-04-21T10:23:18.009928122Z" level=info msg="StopPodSandbox for \"1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a\"" Apr 21 10:23:18.095793 containerd[1457]: 2026-04-21 10:23:18.051 [WARNING][5332] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-coredns--66bc5c9577--hmkdz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"bfea7c85-36f8-4c6c-8806-1d5305a3e058", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252", Pod:"coredns-66bc5c9577-hmkdz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaee47bb1c9a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:23:18.095793 containerd[1457]: 2026-04-21 10:23:18.052 [INFO][5332] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" Apr 21 10:23:18.095793 containerd[1457]: 2026-04-21 10:23:18.052 [INFO][5332] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" iface="eth0" netns="" Apr 21 10:23:18.095793 containerd[1457]: 2026-04-21 10:23:18.052 [INFO][5332] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" Apr 21 10:23:18.095793 containerd[1457]: 2026-04-21 10:23:18.052 [INFO][5332] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" Apr 21 10:23:18.095793 containerd[1457]: 2026-04-21 10:23:18.075 [INFO][5339] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" HandleID="k8s-pod-network.1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" Workload="172--234--196--117-k8s-coredns--66bc5c9577--hmkdz-eth0" Apr 21 10:23:18.095793 containerd[1457]: 2026-04-21 10:23:18.075 [INFO][5339] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:23:18.095793 containerd[1457]: 2026-04-21 10:23:18.075 [INFO][5339] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:23:18.095793 containerd[1457]: 2026-04-21 10:23:18.084 [WARNING][5339] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" HandleID="k8s-pod-network.1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" Workload="172--234--196--117-k8s-coredns--66bc5c9577--hmkdz-eth0" Apr 21 10:23:18.095793 containerd[1457]: 2026-04-21 10:23:18.084 [INFO][5339] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" HandleID="k8s-pod-network.1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" Workload="172--234--196--117-k8s-coredns--66bc5c9577--hmkdz-eth0" Apr 21 10:23:18.095793 containerd[1457]: 2026-04-21 10:23:18.085 [INFO][5339] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:23:18.095793 containerd[1457]: 2026-04-21 10:23:18.089 [INFO][5332] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" Apr 21 10:23:18.095793 containerd[1457]: time="2026-04-21T10:23:18.095431054Z" level=info msg="TearDown network for sandbox \"1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a\" successfully" Apr 21 10:23:18.095793 containerd[1457]: time="2026-04-21T10:23:18.095478904Z" level=info msg="StopPodSandbox for \"1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a\" returns successfully" Apr 21 10:23:18.096979 containerd[1457]: time="2026-04-21T10:23:18.096373686Z" level=info msg="RemovePodSandbox for \"1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a\"" Apr 21 10:23:18.096979 containerd[1457]: time="2026-04-21T10:23:18.096412996Z" level=info msg="Forcibly stopping sandbox \"1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a\"" Apr 21 10:23:18.185831 containerd[1457]: 2026-04-21 10:23:18.139 [WARNING][5353] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-coredns--66bc5c9577--hmkdz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"bfea7c85-36f8-4c6c-8806-1d5305a3e058", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"9aa45cabe60a5ec6cfab29fa6ffffe5e2d5b86efaeb61dc4b1fe8bd25be6f252", Pod:"coredns-66bc5c9577-hmkdz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaee47bb1c9a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:23:18.185831 containerd[1457]: 2026-04-21 10:23:18.139 [INFO][5353] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" Apr 21 10:23:18.185831 containerd[1457]: 2026-04-21 10:23:18.139 [INFO][5353] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" iface="eth0" netns="" Apr 21 10:23:18.185831 containerd[1457]: 2026-04-21 10:23:18.139 [INFO][5353] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" Apr 21 10:23:18.185831 containerd[1457]: 2026-04-21 10:23:18.139 [INFO][5353] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" Apr 21 10:23:18.185831 containerd[1457]: 2026-04-21 10:23:18.167 [INFO][5360] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" HandleID="k8s-pod-network.1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" Workload="172--234--196--117-k8s-coredns--66bc5c9577--hmkdz-eth0" Apr 21 10:23:18.185831 containerd[1457]: 2026-04-21 10:23:18.167 [INFO][5360] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:23:18.185831 containerd[1457]: 2026-04-21 10:23:18.167 [INFO][5360] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:23:18.185831 containerd[1457]: 2026-04-21 10:23:18.179 [WARNING][5360] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" HandleID="k8s-pod-network.1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" Workload="172--234--196--117-k8s-coredns--66bc5c9577--hmkdz-eth0" Apr 21 10:23:18.185831 containerd[1457]: 2026-04-21 10:23:18.179 [INFO][5360] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" HandleID="k8s-pod-network.1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" Workload="172--234--196--117-k8s-coredns--66bc5c9577--hmkdz-eth0" Apr 21 10:23:18.185831 containerd[1457]: 2026-04-21 10:23:18.180 [INFO][5360] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:23:18.185831 containerd[1457]: 2026-04-21 10:23:18.183 [INFO][5353] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a" Apr 21 10:23:18.186762 containerd[1457]: time="2026-04-21T10:23:18.185889404Z" level=info msg="TearDown network for sandbox \"1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a\" successfully" Apr 21 10:23:18.189838 containerd[1457]: time="2026-04-21T10:23:18.189789880Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:23:18.189943 containerd[1457]: time="2026-04-21T10:23:18.189889150Z" level=info msg="RemovePodSandbox \"1bcf355dbfb15233c188c3c97bb9fac6d4354445150c60e6920fd1d90756747a\" returns successfully" Apr 21 10:23:18.190933 containerd[1457]: time="2026-04-21T10:23:18.190796192Z" level=info msg="StopPodSandbox for \"993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae\"" Apr 21 10:23:18.225352 kubelet[2564]: I0421 10:23:18.224245 2564 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:23:18.309558 containerd[1457]: 2026-04-21 10:23:18.230 [WARNING][5374] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-calico--apiserver--7c885d8ccf--6qxjm-eth0", GenerateName:"calico-apiserver-7c885d8ccf-", Namespace:"calico-system", SelfLink:"", UID:"e20b56d9-ea94-4623-85b6-5cdc1ee6c0b0", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c885d8ccf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e", Pod:"calico-apiserver-7c885d8ccf-6qxjm", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1f7288f1c56", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:23:18.309558 containerd[1457]: 2026-04-21 10:23:18.231 [INFO][5374] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" Apr 21 10:23:18.309558 containerd[1457]: 2026-04-21 10:23:18.231 [INFO][5374] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" iface="eth0" netns="" Apr 21 10:23:18.309558 containerd[1457]: 2026-04-21 10:23:18.231 [INFO][5374] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" Apr 21 10:23:18.309558 containerd[1457]: 2026-04-21 10:23:18.231 [INFO][5374] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" Apr 21 10:23:18.309558 containerd[1457]: 2026-04-21 10:23:18.283 [INFO][5382] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" HandleID="k8s-pod-network.993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" Workload="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--6qxjm-eth0" Apr 21 10:23:18.309558 containerd[1457]: 2026-04-21 10:23:18.283 [INFO][5382] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:23:18.309558 containerd[1457]: 2026-04-21 10:23:18.284 [INFO][5382] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:23:18.309558 containerd[1457]: 2026-04-21 10:23:18.293 [WARNING][5382] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" HandleID="k8s-pod-network.993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" Workload="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--6qxjm-eth0" Apr 21 10:23:18.309558 containerd[1457]: 2026-04-21 10:23:18.293 [INFO][5382] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" HandleID="k8s-pod-network.993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" Workload="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--6qxjm-eth0" Apr 21 10:23:18.309558 containerd[1457]: 2026-04-21 10:23:18.295 [INFO][5382] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:23:18.309558 containerd[1457]: 2026-04-21 10:23:18.304 [INFO][5374] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" Apr 21 10:23:18.311014 containerd[1457]: time="2026-04-21T10:23:18.310360307Z" level=info msg="TearDown network for sandbox \"993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae\" successfully" Apr 21 10:23:18.311014 containerd[1457]: time="2026-04-21T10:23:18.310396637Z" level=info msg="StopPodSandbox for \"993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae\" returns successfully" Apr 21 10:23:18.312162 containerd[1457]: time="2026-04-21T10:23:18.312014999Z" level=info msg="RemovePodSandbox for \"993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae\"" Apr 21 10:23:18.312757 containerd[1457]: time="2026-04-21T10:23:18.312363750Z" level=info msg="Forcibly stopping sandbox \"993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae\"" Apr 21 10:23:18.425106 containerd[1457]: 2026-04-21 10:23:18.388 [WARNING][5397] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-calico--apiserver--7c885d8ccf--6qxjm-eth0", GenerateName:"calico-apiserver-7c885d8ccf-", Namespace:"calico-system", SelfLink:"", UID:"e20b56d9-ea94-4623-85b6-5cdc1ee6c0b0", ResourceVersion:"1161", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c885d8ccf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"524981b6cb1864ef91b04d8870e198ba1e12bdea7f70506f41307f26e53fbe7e", Pod:"calico-apiserver-7c885d8ccf-6qxjm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1f7288f1c56", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:23:18.425106 containerd[1457]: 2026-04-21 10:23:18.389 [INFO][5397] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" Apr 21 10:23:18.425106 containerd[1457]: 2026-04-21 10:23:18.389 [INFO][5397] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" iface="eth0" netns="" Apr 21 10:23:18.425106 containerd[1457]: 2026-04-21 10:23:18.389 [INFO][5397] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" Apr 21 10:23:18.425106 containerd[1457]: 2026-04-21 10:23:18.389 [INFO][5397] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" Apr 21 10:23:18.425106 containerd[1457]: 2026-04-21 10:23:18.411 [INFO][5404] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" HandleID="k8s-pod-network.993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" Workload="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--6qxjm-eth0" Apr 21 10:23:18.425106 containerd[1457]: 2026-04-21 10:23:18.411 [INFO][5404] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:23:18.425106 containerd[1457]: 2026-04-21 10:23:18.411 [INFO][5404] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:23:18.425106 containerd[1457]: 2026-04-21 10:23:18.418 [WARNING][5404] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" HandleID="k8s-pod-network.993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" Workload="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--6qxjm-eth0" Apr 21 10:23:18.425106 containerd[1457]: 2026-04-21 10:23:18.418 [INFO][5404] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" HandleID="k8s-pod-network.993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" Workload="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--6qxjm-eth0" Apr 21 10:23:18.425106 containerd[1457]: 2026-04-21 10:23:18.419 [INFO][5404] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:23:18.425106 containerd[1457]: 2026-04-21 10:23:18.422 [INFO][5397] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae" Apr 21 10:23:18.425989 containerd[1457]: time="2026-04-21T10:23:18.425948346Z" level=info msg="TearDown network for sandbox \"993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae\" successfully" Apr 21 10:23:18.430665 containerd[1457]: time="2026-04-21T10:23:18.430624853Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:23:18.430721 containerd[1457]: time="2026-04-21T10:23:18.430697183Z" level=info msg="RemovePodSandbox \"993c814bd84aa38565d3acf1a2dc4c9a19d04b9ca642274da3bd29cdb48c90ae\" returns successfully" Apr 21 10:23:18.431314 containerd[1457]: time="2026-04-21T10:23:18.431277474Z" level=info msg="StopPodSandbox for \"d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da\"" Apr 21 10:23:18.507137 containerd[1457]: 2026-04-21 10:23:18.469 [WARNING][5419] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-goldmane--cccfbd5cf--sp5tn-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"11bf43e9-9c35-41d9-833a-203a96bf4b43", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677", Pod:"goldmane-cccfbd5cf-sp5tn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.71.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali1ed7f3576a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:23:18.507137 containerd[1457]: 2026-04-21 10:23:18.469 [INFO][5419] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" Apr 21 10:23:18.507137 containerd[1457]: 2026-04-21 10:23:18.469 [INFO][5419] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" iface="eth0" netns="" Apr 21 10:23:18.507137 containerd[1457]: 2026-04-21 10:23:18.470 [INFO][5419] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" Apr 21 10:23:18.507137 containerd[1457]: 2026-04-21 10:23:18.470 [INFO][5419] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" Apr 21 10:23:18.507137 containerd[1457]: 2026-04-21 10:23:18.493 [INFO][5426] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" HandleID="k8s-pod-network.d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" Workload="172--234--196--117-k8s-goldmane--cccfbd5cf--sp5tn-eth0" Apr 21 10:23:18.507137 containerd[1457]: 2026-04-21 10:23:18.494 [INFO][5426] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:23:18.507137 containerd[1457]: 2026-04-21 10:23:18.494 [INFO][5426] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:23:18.507137 containerd[1457]: 2026-04-21 10:23:18.500 [WARNING][5426] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" HandleID="k8s-pod-network.d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" Workload="172--234--196--117-k8s-goldmane--cccfbd5cf--sp5tn-eth0" Apr 21 10:23:18.507137 containerd[1457]: 2026-04-21 10:23:18.500 [INFO][5426] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" HandleID="k8s-pod-network.d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" Workload="172--234--196--117-k8s-goldmane--cccfbd5cf--sp5tn-eth0" Apr 21 10:23:18.507137 containerd[1457]: 2026-04-21 10:23:18.501 [INFO][5426] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:23:18.507137 containerd[1457]: 2026-04-21 10:23:18.504 [INFO][5419] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" Apr 21 10:23:18.507137 containerd[1457]: time="2026-04-21T10:23:18.506943011Z" level=info msg="TearDown network for sandbox \"d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da\" successfully" Apr 21 10:23:18.507137 containerd[1457]: time="2026-04-21T10:23:18.506983461Z" level=info msg="StopPodSandbox for \"d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da\" returns successfully" Apr 21 10:23:18.507732 containerd[1457]: time="2026-04-21T10:23:18.507700672Z" level=info msg="RemovePodSandbox for \"d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da\"" Apr 21 10:23:18.507794 containerd[1457]: time="2026-04-21T10:23:18.507771113Z" level=info msg="Forcibly stopping sandbox \"d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da\"" Apr 21 10:23:18.587236 containerd[1457]: 2026-04-21 10:23:18.548 [WARNING][5441] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-goldmane--cccfbd5cf--sp5tn-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"11bf43e9-9c35-41d9-833a-203a96bf4b43", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"033f93db4b8cab44a237a789f9bd3665b770b33a71f0668dec9cbfe4534d8677", Pod:"goldmane-cccfbd5cf-sp5tn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.71.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1ed7f3576a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:23:18.587236 containerd[1457]: 2026-04-21 10:23:18.548 [INFO][5441] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" Apr 21 10:23:18.587236 containerd[1457]: 2026-04-21 10:23:18.548 [INFO][5441] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" iface="eth0" netns="" Apr 21 10:23:18.587236 containerd[1457]: 2026-04-21 10:23:18.548 [INFO][5441] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" Apr 21 10:23:18.587236 containerd[1457]: 2026-04-21 10:23:18.548 [INFO][5441] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" Apr 21 10:23:18.587236 containerd[1457]: 2026-04-21 10:23:18.573 [INFO][5448] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" HandleID="k8s-pod-network.d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" Workload="172--234--196--117-k8s-goldmane--cccfbd5cf--sp5tn-eth0" Apr 21 10:23:18.587236 containerd[1457]: 2026-04-21 10:23:18.573 [INFO][5448] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:23:18.587236 containerd[1457]: 2026-04-21 10:23:18.573 [INFO][5448] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:23:18.587236 containerd[1457]: 2026-04-21 10:23:18.579 [WARNING][5448] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" HandleID="k8s-pod-network.d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" Workload="172--234--196--117-k8s-goldmane--cccfbd5cf--sp5tn-eth0" Apr 21 10:23:18.587236 containerd[1457]: 2026-04-21 10:23:18.579 [INFO][5448] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" HandleID="k8s-pod-network.d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" Workload="172--234--196--117-k8s-goldmane--cccfbd5cf--sp5tn-eth0" Apr 21 10:23:18.587236 containerd[1457]: 2026-04-21 10:23:18.581 [INFO][5448] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:23:18.587236 containerd[1457]: 2026-04-21 10:23:18.584 [INFO][5441] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da" Apr 21 10:23:18.587921 containerd[1457]: time="2026-04-21T10:23:18.587277516Z" level=info msg="TearDown network for sandbox \"d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da\" successfully" Apr 21 10:23:18.592409 containerd[1457]: time="2026-04-21T10:23:18.592377664Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:23:18.592479 containerd[1457]: time="2026-04-21T10:23:18.592466424Z" level=info msg="RemovePodSandbox \"d07d0caeb325386d025cfb77731387e1e181608483344a10fe99be611a7bf4da\" returns successfully" Apr 21 10:23:18.593746 containerd[1457]: time="2026-04-21T10:23:18.593682776Z" level=info msg="StopPodSandbox for \"a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0\"" Apr 21 10:23:18.686279 containerd[1457]: 2026-04-21 10:23:18.641 [WARNING][5462] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" WorkloadEndpoint="172--234--196--117-k8s-whisker--cd97785b7--rw8lm-eth0" Apr 21 10:23:18.686279 containerd[1457]: 2026-04-21 10:23:18.641 [INFO][5462] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" Apr 21 10:23:18.686279 containerd[1457]: 2026-04-21 10:23:18.641 [INFO][5462] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" iface="eth0" netns="" Apr 21 10:23:18.686279 containerd[1457]: 2026-04-21 10:23:18.641 [INFO][5462] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" Apr 21 10:23:18.686279 containerd[1457]: 2026-04-21 10:23:18.642 [INFO][5462] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" Apr 21 10:23:18.686279 containerd[1457]: 2026-04-21 10:23:18.667 [INFO][5472] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" HandleID="k8s-pod-network.a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" Workload="172--234--196--117-k8s-whisker--cd97785b7--rw8lm-eth0" Apr 21 10:23:18.686279 containerd[1457]: 2026-04-21 10:23:18.667 [INFO][5472] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:23:18.686279 containerd[1457]: 2026-04-21 10:23:18.667 [INFO][5472] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:23:18.686279 containerd[1457]: 2026-04-21 10:23:18.676 [WARNING][5472] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" HandleID="k8s-pod-network.a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" Workload="172--234--196--117-k8s-whisker--cd97785b7--rw8lm-eth0" Apr 21 10:23:18.686279 containerd[1457]: 2026-04-21 10:23:18.676 [INFO][5472] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" HandleID="k8s-pod-network.a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" Workload="172--234--196--117-k8s-whisker--cd97785b7--rw8lm-eth0" Apr 21 10:23:18.686279 containerd[1457]: 2026-04-21 10:23:18.678 [INFO][5472] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:23:18.686279 containerd[1457]: 2026-04-21 10:23:18.682 [INFO][5462] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" Apr 21 10:23:18.686279 containerd[1457]: time="2026-04-21T10:23:18.686255709Z" level=info msg="TearDown network for sandbox \"a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0\" successfully" Apr 21 10:23:18.687237 containerd[1457]: time="2026-04-21T10:23:18.686288709Z" level=info msg="StopPodSandbox for \"a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0\" returns successfully" Apr 21 10:23:18.687879 containerd[1457]: time="2026-04-21T10:23:18.687856881Z" level=info msg="RemovePodSandbox for \"a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0\"" Apr 21 10:23:18.687945 containerd[1457]: time="2026-04-21T10:23:18.687888541Z" level=info msg="Forcibly stopping sandbox \"a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0\"" Apr 21 10:23:18.761805 containerd[1457]: 2026-04-21 10:23:18.724 [WARNING][5486] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" WorkloadEndpoint="172--234--196--117-k8s-whisker--cd97785b7--rw8lm-eth0" Apr 21 10:23:18.761805 containerd[1457]: 2026-04-21 10:23:18.725 [INFO][5486] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" Apr 21 10:23:18.761805 containerd[1457]: 2026-04-21 10:23:18.725 [INFO][5486] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" iface="eth0" netns="" Apr 21 10:23:18.761805 containerd[1457]: 2026-04-21 10:23:18.725 [INFO][5486] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" Apr 21 10:23:18.761805 containerd[1457]: 2026-04-21 10:23:18.725 [INFO][5486] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" Apr 21 10:23:18.761805 containerd[1457]: 2026-04-21 10:23:18.748 [INFO][5493] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" HandleID="k8s-pod-network.a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" Workload="172--234--196--117-k8s-whisker--cd97785b7--rw8lm-eth0" Apr 21 10:23:18.761805 containerd[1457]: 2026-04-21 10:23:18.749 [INFO][5493] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:23:18.761805 containerd[1457]: 2026-04-21 10:23:18.749 [INFO][5493] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:23:18.761805 containerd[1457]: 2026-04-21 10:23:18.755 [WARNING][5493] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" HandleID="k8s-pod-network.a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" Workload="172--234--196--117-k8s-whisker--cd97785b7--rw8lm-eth0" Apr 21 10:23:18.761805 containerd[1457]: 2026-04-21 10:23:18.755 [INFO][5493] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" HandleID="k8s-pod-network.a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" Workload="172--234--196--117-k8s-whisker--cd97785b7--rw8lm-eth0" Apr 21 10:23:18.761805 containerd[1457]: 2026-04-21 10:23:18.756 [INFO][5493] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:23:18.761805 containerd[1457]: 2026-04-21 10:23:18.759 [INFO][5486] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0" Apr 21 10:23:18.762480 containerd[1457]: time="2026-04-21T10:23:18.761883496Z" level=info msg="TearDown network for sandbox \"a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0\" successfully" Apr 21 10:23:18.766206 containerd[1457]: time="2026-04-21T10:23:18.766152652Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:23:18.766266 containerd[1457]: time="2026-04-21T10:23:18.766239673Z" level=info msg="RemovePodSandbox \"a31181556ef2ba4abf049854d53d250b4272db71706c521262398547113ba3f0\" returns successfully" Apr 21 10:23:18.766838 containerd[1457]: time="2026-04-21T10:23:18.766815863Z" level=info msg="StopPodSandbox for \"dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be\"" Apr 21 10:23:18.839295 containerd[1457]: 2026-04-21 10:23:18.805 [WARNING][5508] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-calico--apiserver--7c885d8ccf--njvhb-eth0", GenerateName:"calico-apiserver-7c885d8ccf-", Namespace:"calico-system", SelfLink:"", UID:"103eb8aa-b6c6-480d-a370-786769ae65a2", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c885d8ccf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690", Pod:"calico-apiserver-7c885d8ccf-njvhb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3425be06179", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:23:18.839295 containerd[1457]: 2026-04-21 10:23:18.806 [INFO][5508] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" Apr 21 10:23:18.839295 containerd[1457]: 2026-04-21 10:23:18.806 [INFO][5508] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" iface="eth0" netns="" Apr 21 10:23:18.839295 containerd[1457]: 2026-04-21 10:23:18.806 [INFO][5508] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" Apr 21 10:23:18.839295 containerd[1457]: 2026-04-21 10:23:18.806 [INFO][5508] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" Apr 21 10:23:18.839295 containerd[1457]: 2026-04-21 10:23:18.825 [INFO][5516] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" HandleID="k8s-pod-network.dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" Workload="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--njvhb-eth0" Apr 21 10:23:18.839295 containerd[1457]: 2026-04-21 10:23:18.825 [INFO][5516] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:23:18.839295 containerd[1457]: 2026-04-21 10:23:18.825 [INFO][5516] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:23:18.839295 containerd[1457]: 2026-04-21 10:23:18.832 [WARNING][5516] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" HandleID="k8s-pod-network.dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" Workload="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--njvhb-eth0" Apr 21 10:23:18.839295 containerd[1457]: 2026-04-21 10:23:18.832 [INFO][5516] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" HandleID="k8s-pod-network.dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" Workload="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--njvhb-eth0" Apr 21 10:23:18.839295 containerd[1457]: 2026-04-21 10:23:18.833 [INFO][5516] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:23:18.839295 containerd[1457]: 2026-04-21 10:23:18.836 [INFO][5508] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" Apr 21 10:23:18.840245 containerd[1457]: time="2026-04-21T10:23:18.839366456Z" level=info msg="TearDown network for sandbox \"dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be\" successfully" Apr 21 10:23:18.840245 containerd[1457]: time="2026-04-21T10:23:18.839411526Z" level=info msg="StopPodSandbox for \"dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be\" returns successfully" Apr 21 10:23:18.840424 containerd[1457]: time="2026-04-21T10:23:18.840403977Z" level=info msg="RemovePodSandbox for \"dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be\"" Apr 21 10:23:18.840474 containerd[1457]: time="2026-04-21T10:23:18.840459177Z" level=info msg="Forcibly stopping sandbox \"dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be\"" Apr 21 10:23:18.911472 containerd[1457]: 2026-04-21 10:23:18.878 [WARNING][5531] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--196--117-k8s-calico--apiserver--7c885d8ccf--njvhb-eth0", GenerateName:"calico-apiserver-7c885d8ccf-", Namespace:"calico-system", SelfLink:"", UID:"103eb8aa-b6c6-480d-a370-786769ae65a2", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c885d8ccf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-196-117", ContainerID:"cfbb76ee809e13bf68eb6d6fc0f43f575bc45e18e232975463e43db8124e7690", Pod:"calico-apiserver-7c885d8ccf-njvhb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3425be06179", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:23:18.911472 containerd[1457]: 2026-04-21 10:23:18.878 [INFO][5531] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" Apr 21 10:23:18.911472 containerd[1457]: 2026-04-21 10:23:18.878 [INFO][5531] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" iface="eth0" netns="" Apr 21 10:23:18.911472 containerd[1457]: 2026-04-21 10:23:18.878 [INFO][5531] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" Apr 21 10:23:18.911472 containerd[1457]: 2026-04-21 10:23:18.878 [INFO][5531] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" Apr 21 10:23:18.911472 containerd[1457]: 2026-04-21 10:23:18.899 [INFO][5539] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" HandleID="k8s-pod-network.dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" Workload="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--njvhb-eth0" Apr 21 10:23:18.911472 containerd[1457]: 2026-04-21 10:23:18.899 [INFO][5539] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:23:18.911472 containerd[1457]: 2026-04-21 10:23:18.899 [INFO][5539] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:23:18.911472 containerd[1457]: 2026-04-21 10:23:18.905 [WARNING][5539] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" HandleID="k8s-pod-network.dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" Workload="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--njvhb-eth0" Apr 21 10:23:18.911472 containerd[1457]: 2026-04-21 10:23:18.905 [INFO][5539] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" HandleID="k8s-pod-network.dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" Workload="172--234--196--117-k8s-calico--apiserver--7c885d8ccf--njvhb-eth0" Apr 21 10:23:18.911472 containerd[1457]: 2026-04-21 10:23:18.906 [INFO][5539] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:23:18.911472 containerd[1457]: 2026-04-21 10:23:18.908 [INFO][5531] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be" Apr 21 10:23:18.912230 containerd[1457]: time="2026-04-21T10:23:18.911522127Z" level=info msg="TearDown network for sandbox \"dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be\" successfully" Apr 21 10:23:18.915593 containerd[1457]: time="2026-04-21T10:23:18.915551464Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:23:18.915648 containerd[1457]: time="2026-04-21T10:23:18.915625704Z" level=info msg="RemovePodSandbox \"dbdd8516d86102c47ce71b6ec17d6abbcceccd01bd1e5fffebb49152b4e081be\" returns successfully" Apr 21 10:23:28.678073 kubelet[2564]: E0421 10:23:28.677070 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:23:33.387961 systemd[1]: run-containerd-runc-k8s.io-6c65f23675807fc73990e87d3f1e290dbd59383523af75f8feab6fa5596414f6-runc.RiT4xT.mount: Deactivated successfully. Apr 21 10:23:39.697423 systemd[1]: run-containerd-runc-k8s.io-6c65f23675807fc73990e87d3f1e290dbd59383523af75f8feab6fa5596414f6-runc.MbTgCT.mount: Deactivated successfully. Apr 21 10:23:42.677487 kubelet[2564]: E0421 10:23:42.677453 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:23:46.677560 kubelet[2564]: E0421 10:23:46.677516 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:23:50.677230 kubelet[2564]: E0421 10:23:50.677198 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:24:03.677094 kubelet[2564]: E0421 10:24:03.676931 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:24:07.679593 kubelet[2564]: E0421 10:24:07.679475 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:24:15.677723 kubelet[2564]: E0421 10:24:15.677392 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:24:34.677316 kubelet[2564]: E0421 10:24:34.677257 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:24:35.605461 systemd[1]: run-containerd-runc-k8s.io-ebc8b8f3a7ee90733b3cb1575a9d048f88387769a1d347fb2fc475ac158cd7d2-runc.nfksc4.mount: Deactivated successfully. Apr 21 10:24:50.677022 kubelet[2564]: E0421 10:24:50.676932 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:25:01.678120 kubelet[2564]: E0421 10:25:01.677627 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:25:10.460738 systemd[1]: Started sshd@7-172.234.196.117:22-50.85.169.122:57104.service - OpenSSH per-connection server daemon (50.85.169.122:57104). Apr 21 10:25:11.072440 sshd[5948]: Accepted publickey for core from 50.85.169.122 port 57104 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:25:11.073295 sshd[5948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:25:11.078492 systemd-logind[1447]: New session 8 of user core. Apr 21 10:25:11.084243 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 21 10:25:11.576796 sshd[5948]: pam_unix(sshd:session): session closed for user core Apr 21 10:25:11.581708 systemd-logind[1447]: Session 8 logged out. 
Waiting for processes to exit. Apr 21 10:25:11.582499 systemd[1]: sshd@7-172.234.196.117:22-50.85.169.122:57104.service: Deactivated successfully. Apr 21 10:25:11.584865 systemd[1]: session-8.scope: Deactivated successfully. Apr 21 10:25:11.585733 systemd-logind[1447]: Removed session 8. Apr 21 10:25:16.689312 systemd[1]: Started sshd@8-172.234.196.117:22-50.85.169.122:57118.service - OpenSSH per-connection server daemon (50.85.169.122:57118). Apr 21 10:25:17.286284 sshd[5962]: Accepted publickey for core from 50.85.169.122 port 57118 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:25:17.287836 sshd[5962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:25:17.292229 systemd-logind[1447]: New session 9 of user core. Apr 21 10:25:17.295163 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 21 10:25:17.775812 sshd[5962]: pam_unix(sshd:session): session closed for user core Apr 21 10:25:17.779998 systemd[1]: sshd@8-172.234.196.117:22-50.85.169.122:57118.service: Deactivated successfully. Apr 21 10:25:17.782441 systemd[1]: session-9.scope: Deactivated successfully. Apr 21 10:25:17.784745 systemd-logind[1447]: Session 9 logged out. Waiting for processes to exit. Apr 21 10:25:17.785669 systemd-logind[1447]: Removed session 9. Apr 21 10:25:18.678037 kubelet[2564]: E0421 10:25:18.677682 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:25:21.677820 kubelet[2564]: E0421 10:25:21.677084 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:25:22.887299 systemd[1]: Started sshd@9-172.234.196.117:22-50.85.169.122:53314.service - OpenSSH per-connection server daemon (50.85.169.122:53314). 
Apr 21 10:25:23.486786 sshd[5978]: Accepted publickey for core from 50.85.169.122 port 53314 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:25:23.488537 sshd[5978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:25:23.492781 systemd-logind[1447]: New session 10 of user core. Apr 21 10:25:23.498206 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 21 10:25:23.965653 sshd[5978]: pam_unix(sshd:session): session closed for user core Apr 21 10:25:23.969549 systemd[1]: sshd@9-172.234.196.117:22-50.85.169.122:53314.service: Deactivated successfully. Apr 21 10:25:23.972922 systemd[1]: session-10.scope: Deactivated successfully. Apr 21 10:25:23.974567 systemd-logind[1447]: Session 10 logged out. Waiting for processes to exit. Apr 21 10:25:23.976012 systemd-logind[1447]: Removed session 10. Apr 21 10:25:25.677639 kubelet[2564]: E0421 10:25:25.676794 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:25:29.075229 systemd[1]: Started sshd@10-172.234.196.117:22-50.85.169.122:53324.service - OpenSSH per-connection server daemon (50.85.169.122:53324). Apr 21 10:25:29.699634 sshd[6015]: Accepted publickey for core from 50.85.169.122 port 53324 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:25:29.700254 sshd[6015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:25:29.704719 systemd-logind[1447]: New session 11 of user core. Apr 21 10:25:29.711179 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 21 10:25:30.213486 sshd[6015]: pam_unix(sshd:session): session closed for user core Apr 21 10:25:30.218347 systemd-logind[1447]: Session 11 logged out. Waiting for processes to exit. 
Apr 21 10:25:30.219195 systemd[1]: sshd@10-172.234.196.117:22-50.85.169.122:53324.service: Deactivated successfully. Apr 21 10:25:30.222597 systemd[1]: session-11.scope: Deactivated successfully. Apr 21 10:25:30.223722 systemd-logind[1447]: Removed session 11. Apr 21 10:25:33.390812 systemd[1]: run-containerd-runc-k8s.io-6c65f23675807fc73990e87d3f1e290dbd59383523af75f8feab6fa5596414f6-runc.uIQ6Ph.mount: Deactivated successfully. Apr 21 10:25:33.677370 kubelet[2564]: E0421 10:25:33.677263 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:25:35.326291 systemd[1]: Started sshd@11-172.234.196.117:22-50.85.169.122:56678.service - OpenSSH per-connection server daemon (50.85.169.122:56678). Apr 21 10:25:35.929453 sshd[6065]: Accepted publickey for core from 50.85.169.122 port 56678 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:25:35.930068 sshd[6065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:25:35.934517 systemd-logind[1447]: New session 12 of user core. Apr 21 10:25:35.940174 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 21 10:25:36.417949 sshd[6065]: pam_unix(sshd:session): session closed for user core Apr 21 10:25:36.421039 systemd-logind[1447]: Session 12 logged out. Waiting for processes to exit. Apr 21 10:25:36.422369 systemd[1]: sshd@11-172.234.196.117:22-50.85.169.122:56678.service: Deactivated successfully. Apr 21 10:25:36.424753 systemd[1]: session-12.scope: Deactivated successfully. Apr 21 10:25:36.425979 systemd-logind[1447]: Removed session 12. Apr 21 10:25:39.698046 systemd[1]: run-containerd-runc-k8s.io-6c65f23675807fc73990e87d3f1e290dbd59383523af75f8feab6fa5596414f6-runc.TJHeGl.mount: Deactivated successfully. 
Apr 21 10:25:41.535259 systemd[1]: Started sshd@12-172.234.196.117:22-50.85.169.122:56068.service - OpenSSH per-connection server daemon (50.85.169.122:56068).
Apr 21 10:25:42.160647 sshd[6138]: Accepted publickey for core from 50.85.169.122 port 56068 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw
Apr 21 10:25:42.163447 sshd[6138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:25:42.172901 systemd-logind[1447]: New session 13 of user core.
Apr 21 10:25:42.178406 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 21 10:25:42.673919 sshd[6138]: pam_unix(sshd:session): session closed for user core
Apr 21 10:25:42.678199 systemd[1]: sshd@12-172.234.196.117:22-50.85.169.122:56068.service: Deactivated successfully.
Apr 21 10:25:42.680287 systemd[1]: session-13.scope: Deactivated successfully.
Apr 21 10:25:42.681238 systemd-logind[1447]: Session 13 logged out. Waiting for processes to exit.
Apr 21 10:25:42.682417 systemd-logind[1447]: Removed session 13.
Apr 21 10:25:42.782888 systemd[1]: Started sshd@13-172.234.196.117:22-50.85.169.122:56084.service - OpenSSH per-connection server daemon (50.85.169.122:56084).
Apr 21 10:25:43.415633 sshd[6152]: Accepted publickey for core from 50.85.169.122 port 56084 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw
Apr 21 10:25:43.417600 sshd[6152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:25:43.422568 systemd-logind[1447]: New session 14 of user core.
Apr 21 10:25:43.428184 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 21 10:25:43.953966 sshd[6152]: pam_unix(sshd:session): session closed for user core
Apr 21 10:25:43.958249 systemd-logind[1447]: Session 14 logged out. Waiting for processes to exit.
Apr 21 10:25:43.959181 systemd[1]: sshd@13-172.234.196.117:22-50.85.169.122:56084.service: Deactivated successfully.
Apr 21 10:25:43.961244 systemd[1]: session-14.scope: Deactivated successfully.
Apr 21 10:25:43.962094 systemd-logind[1447]: Removed session 14.
Apr 21 10:25:44.071482 systemd[1]: Started sshd@14-172.234.196.117:22-50.85.169.122:56088.service - OpenSSH per-connection server daemon (50.85.169.122:56088).
Apr 21 10:25:44.697138 sshd[6163]: Accepted publickey for core from 50.85.169.122 port 56088 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw
Apr 21 10:25:44.698770 sshd[6163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:25:44.704135 systemd-logind[1447]: New session 15 of user core.
Apr 21 10:25:44.707188 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 21 10:25:45.206931 sshd[6163]: pam_unix(sshd:session): session closed for user core
Apr 21 10:25:45.211161 systemd-logind[1447]: Session 15 logged out. Waiting for processes to exit.
Apr 21 10:25:45.211986 systemd[1]: sshd@14-172.234.196.117:22-50.85.169.122:56088.service: Deactivated successfully.
Apr 21 10:25:45.214259 systemd[1]: session-15.scope: Deactivated successfully.
Apr 21 10:25:45.215219 systemd-logind[1447]: Removed session 15.
Apr 21 10:25:50.322524 systemd[1]: Started sshd@15-172.234.196.117:22-50.85.169.122:37036.service - OpenSSH per-connection server daemon (50.85.169.122:37036).
Apr 21 10:25:50.938994 sshd[6176]: Accepted publickey for core from 50.85.169.122 port 37036 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw
Apr 21 10:25:50.939881 sshd[6176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:25:50.944122 systemd-logind[1447]: New session 16 of user core.
Apr 21 10:25:50.948175 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 21 10:25:51.433529 sshd[6176]: pam_unix(sshd:session): session closed for user core
Apr 21 10:25:51.440689 systemd-logind[1447]: Session 16 logged out. Waiting for processes to exit.
Apr 21 10:25:51.441901 systemd[1]: sshd@15-172.234.196.117:22-50.85.169.122:37036.service: Deactivated successfully.
Apr 21 10:25:51.445647 systemd[1]: session-16.scope: Deactivated successfully.
Apr 21 10:25:51.447131 systemd-logind[1447]: Removed session 16.
Apr 21 10:25:55.678725 kubelet[2564]: E0421 10:25:55.677960 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Apr 21 10:25:56.548324 systemd[1]: Started sshd@16-172.234.196.117:22-50.85.169.122:37042.service - OpenSSH per-connection server daemon (50.85.169.122:37042).
Apr 21 10:25:57.140913 sshd[6212]: Accepted publickey for core from 50.85.169.122 port 37042 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw
Apr 21 10:25:57.141575 sshd[6212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:25:57.145994 systemd-logind[1447]: New session 17 of user core.
Apr 21 10:25:57.151179 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 21 10:25:57.628034 sshd[6212]: pam_unix(sshd:session): session closed for user core
Apr 21 10:25:57.632753 systemd-logind[1447]: Session 17 logged out. Waiting for processes to exit.
Apr 21 10:25:57.634331 systemd[1]: sshd@16-172.234.196.117:22-50.85.169.122:37042.service: Deactivated successfully.
Apr 21 10:25:57.636736 systemd[1]: session-17.scope: Deactivated successfully.
Apr 21 10:25:57.637578 systemd-logind[1447]: Removed session 17.
Apr 21 10:25:57.740285 systemd[1]: Started sshd@17-172.234.196.117:22-50.85.169.122:37056.service - OpenSSH per-connection server daemon (50.85.169.122:37056).
Apr 21 10:25:58.372407 sshd[6237]: Accepted publickey for core from 50.85.169.122 port 37056 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw
Apr 21 10:25:58.373088 sshd[6237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:25:58.377475 systemd-logind[1447]: New session 18 of user core.
Apr 21 10:25:58.386219 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 21 10:25:59.039876 sshd[6237]: pam_unix(sshd:session): session closed for user core
Apr 21 10:25:59.043384 systemd[1]: sshd@17-172.234.196.117:22-50.85.169.122:37056.service: Deactivated successfully.
Apr 21 10:25:59.045555 systemd[1]: session-18.scope: Deactivated successfully.
Apr 21 10:25:59.047211 systemd-logind[1447]: Session 18 logged out. Waiting for processes to exit.
Apr 21 10:25:59.048330 systemd-logind[1447]: Removed session 18.
Apr 21 10:25:59.144778 systemd[1]: Started sshd@18-172.234.196.117:22-50.85.169.122:37058.service - OpenSSH per-connection server daemon (50.85.169.122:37058).
Apr 21 10:25:59.751876 sshd[6248]: Accepted publickey for core from 50.85.169.122 port 37058 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw
Apr 21 10:25:59.753727 sshd[6248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:25:59.758725 systemd-logind[1447]: New session 19 of user core.
Apr 21 10:25:59.763233 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 21 10:26:00.745130 sshd[6248]: pam_unix(sshd:session): session closed for user core
Apr 21 10:26:00.749621 systemd[1]: sshd@18-172.234.196.117:22-50.85.169.122:37058.service: Deactivated successfully.
Apr 21 10:26:00.751649 systemd[1]: session-19.scope: Deactivated successfully.
Apr 21 10:26:00.754078 systemd-logind[1447]: Session 19 logged out. Waiting for processes to exit.
Apr 21 10:26:00.754954 systemd-logind[1447]: Removed session 19.
Apr 21 10:26:00.857434 systemd[1]: Started sshd@19-172.234.196.117:22-50.85.169.122:41986.service - OpenSSH per-connection server daemon (50.85.169.122:41986).
Apr 21 10:26:01.491623 sshd[6295]: Accepted publickey for core from 50.85.169.122 port 41986 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw
Apr 21 10:26:01.493903 sshd[6295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:26:01.500106 systemd-logind[1447]: New session 20 of user core.
Apr 21 10:26:01.507241 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 21 10:26:02.111634 sshd[6295]: pam_unix(sshd:session): session closed for user core
Apr 21 10:26:02.115587 systemd-logind[1447]: Session 20 logged out. Waiting for processes to exit.
Apr 21 10:26:02.116379 systemd[1]: sshd@19-172.234.196.117:22-50.85.169.122:41986.service: Deactivated successfully.
Apr 21 10:26:02.118176 systemd[1]: session-20.scope: Deactivated successfully.
Apr 21 10:26:02.119475 systemd-logind[1447]: Removed session 20.
Apr 21 10:26:02.217042 systemd[1]: Started sshd@20-172.234.196.117:22-50.85.169.122:42000.service - OpenSSH per-connection server daemon (50.85.169.122:42000).
Apr 21 10:26:02.824488 sshd[6309]: Accepted publickey for core from 50.85.169.122 port 42000 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw
Apr 21 10:26:02.825757 sshd[6309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:26:02.830921 systemd-logind[1447]: New session 21 of user core.
Apr 21 10:26:02.838183 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 21 10:26:03.318297 sshd[6309]: pam_unix(sshd:session): session closed for user core
Apr 21 10:26:03.321636 systemd[1]: sshd@20-172.234.196.117:22-50.85.169.122:42000.service: Deactivated successfully.
Apr 21 10:26:03.323868 systemd[1]: session-21.scope: Deactivated successfully.
Apr 21 10:26:03.325838 systemd-logind[1447]: Session 21 logged out. Waiting for processes to exit.
Apr 21 10:26:03.327268 systemd-logind[1447]: Removed session 21.
Apr 21 10:26:08.432315 systemd[1]: Started sshd@21-172.234.196.117:22-50.85.169.122:42006.service - OpenSSH per-connection server daemon (50.85.169.122:42006).
Apr 21 10:26:08.677174 kubelet[2564]: E0421 10:26:08.677137 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Apr 21 10:26:09.050672 sshd[6381]: Accepted publickey for core from 50.85.169.122 port 42006 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw
Apr 21 10:26:09.052325 sshd[6381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:26:09.056676 systemd-logind[1447]: New session 22 of user core.
Apr 21 10:26:09.063165 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 21 10:26:09.548630 sshd[6381]: pam_unix(sshd:session): session closed for user core
Apr 21 10:26:09.553431 systemd[1]: sshd@21-172.234.196.117:22-50.85.169.122:42006.service: Deactivated successfully.
Apr 21 10:26:09.553830 systemd-logind[1447]: Session 22 logged out. Waiting for processes to exit.
Apr 21 10:26:09.556032 systemd[1]: session-22.scope: Deactivated successfully.
Apr 21 10:26:09.557393 systemd-logind[1447]: Removed session 22.
Apr 21 10:26:09.678117 kubelet[2564]: E0421 10:26:09.676879 2564 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
Apr 21 10:26:14.663329 systemd[1]: Started sshd@22-172.234.196.117:22-50.85.169.122:37194.service - OpenSSH per-connection server daemon (50.85.169.122:37194).
Apr 21 10:26:15.289299 sshd[6396]: Accepted publickey for core from 50.85.169.122 port 37194 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw
Apr 21 10:26:15.290211 sshd[6396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:26:15.297638 systemd-logind[1447]: New session 23 of user core.
Apr 21 10:26:15.299238 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 21 10:26:15.818289 sshd[6396]: pam_unix(sshd:session): session closed for user core
Apr 21 10:26:15.824611 systemd-logind[1447]: Session 23 logged out. Waiting for processes to exit.
Apr 21 10:26:15.826141 systemd[1]: sshd@22-172.234.196.117:22-50.85.169.122:37194.service: Deactivated successfully.
Apr 21 10:26:15.829464 systemd[1]: session-23.scope: Deactivated successfully.
Apr 21 10:26:15.830697 systemd-logind[1447]: Removed session 23.
Apr 21 10:26:20.931436 systemd[1]: Started sshd@23-172.234.196.117:22-50.85.169.122:39528.service - OpenSSH per-connection server daemon (50.85.169.122:39528).
Apr 21 10:26:21.557279 sshd[6410]: Accepted publickey for core from 50.85.169.122 port 39528 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw
Apr 21 10:26:21.558023 sshd[6410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:26:21.562893 systemd-logind[1447]: New session 24 of user core.
Apr 21 10:26:21.571202 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 21 10:26:22.060682 sshd[6410]: pam_unix(sshd:session): session closed for user core
Apr 21 10:26:22.064371 systemd[1]: sshd@23-172.234.196.117:22-50.85.169.122:39528.service: Deactivated successfully.
Apr 21 10:26:22.066870 systemd[1]: session-24.scope: Deactivated successfully.
Apr 21 10:26:22.068329 systemd-logind[1447]: Session 24 logged out. Waiting for processes to exit.
Apr 21 10:26:22.070308 systemd-logind[1447]: Removed session 24.