Apr 13 20:09:05.007759 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 13 20:09:05.007779 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:09:05.007788 kernel: BIOS-provided physical RAM map:
Apr 13 20:09:05.007794 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Apr 13 20:09:05.007799 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Apr 13 20:09:05.007808 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 13 20:09:05.007814 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Apr 13 20:09:05.007820 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Apr 13 20:09:05.007826 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 13 20:09:05.007832 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 13 20:09:05.007838 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 13 20:09:05.007844 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 13 20:09:05.007850 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Apr 13 20:09:05.007858 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 13 20:09:05.007865 kernel: NX (Execute Disable) protection: active
Apr 13 20:09:05.007871 kernel: APIC: Static calls initialized
Apr 13 20:09:05.007877 kernel: SMBIOS 2.8 present.
Apr 13 20:09:05.007884 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Apr 13 20:09:05.007890 kernel: Hypervisor detected: KVM
Apr 13 20:09:05.007898 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 13 20:09:05.007905 kernel: kvm-clock: using sched offset of 5771616180 cycles
Apr 13 20:09:05.007911 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 13 20:09:05.007918 kernel: tsc: Detected 2000.000 MHz processor
Apr 13 20:09:05.007924 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 13 20:09:05.007931 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 13 20:09:05.007937 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Apr 13 20:09:05.007944 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 13 20:09:05.007950 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 13 20:09:05.007959 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Apr 13 20:09:05.007965 kernel: Using GB pages for direct mapping
Apr 13 20:09:05.007971 kernel: ACPI: Early table checksum verification disabled
Apr 13 20:09:05.007977 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Apr 13 20:09:05.007984 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:09:05.007990 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:09:05.007996 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:09:05.008003 kernel: ACPI: FACS 0x000000007FFE0000 000040
Apr 13 20:09:05.008009 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:09:05.008018 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:09:05.008024 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:09:05.008030 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:09:05.008040 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Apr 13 20:09:05.008047 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Apr 13 20:09:05.008053 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Apr 13 20:09:05.008063 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Apr 13 20:09:05.008069 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Apr 13 20:09:05.008076 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Apr 13 20:09:05.008082 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Apr 13 20:09:05.008089 kernel: No NUMA configuration found
Apr 13 20:09:05.008095 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Apr 13 20:09:05.008102 kernel: NODE_DATA(0) allocated [mem 0x17fff8000-0x17fffdfff]
Apr 13 20:09:05.008109 kernel: Zone ranges:
Apr 13 20:09:05.008118 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 13 20:09:05.008124 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 13 20:09:05.008131 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Apr 13 20:09:05.008138 kernel: Movable zone start for each node
Apr 13 20:09:05.008145 kernel: Early memory node ranges
Apr 13 20:09:05.008151 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 13 20:09:05.008158 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Apr 13 20:09:05.008736 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Apr 13 20:09:05.008746 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Apr 13 20:09:05.008753 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 13 20:09:05.008764 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 13 20:09:05.009096 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Apr 13 20:09:05.009108 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 13 20:09:05.009115 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 13 20:09:05.009122 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 13 20:09:05.009129 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 13 20:09:05.009136 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 13 20:09:05.009142 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 13 20:09:05.009149 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 13 20:09:05.009160 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 13 20:09:05.009166 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 13 20:09:05.009173 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 13 20:09:05.009180 kernel: TSC deadline timer available
Apr 13 20:09:05.009187 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 13 20:09:05.009194 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 13 20:09:05.009200 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 13 20:09:05.009207 kernel: kvm-guest: setup PV sched yield
Apr 13 20:09:05.009214 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 13 20:09:05.009223 kernel: Booting paravirtualized kernel on KVM
Apr 13 20:09:05.009230 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 13 20:09:05.009237 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 13 20:09:05.009805 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 13 20:09:05.009813 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 13 20:09:05.009819 kernel: pcpu-alloc: [0] 0 1
Apr 13 20:09:05.009826 kernel: kvm-guest: PV spinlocks enabled
Apr 13 20:09:05.009833 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 13 20:09:05.009841 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:09:05.009852 kernel: random: crng init done
Apr 13 20:09:05.009859 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 20:09:05.009866 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 20:09:05.009872 kernel: Fallback order for Node 0: 0
Apr 13 20:09:05.009880 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Apr 13 20:09:05.009887 kernel: Policy zone: Normal
Apr 13 20:09:05.009893 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 20:09:05.009900 kernel: software IO TLB: area num 2.
Apr 13 20:09:05.009910 kernel: Memory: 3966212K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 227300K reserved, 0K cma-reserved)
Apr 13 20:09:05.009917 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 20:09:05.009924 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 13 20:09:05.009930 kernel: ftrace: allocated 149 pages with 4 groups
Apr 13 20:09:05.009937 kernel: Dynamic Preempt: voluntary
Apr 13 20:09:05.009944 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 20:09:05.009951 kernel: rcu: RCU event tracing is enabled.
Apr 13 20:09:05.009959 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 20:09:05.009966 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 20:09:05.009975 kernel: Rude variant of Tasks RCU enabled.
Apr 13 20:09:05.009982 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 20:09:05.009989 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 20:09:05.009996 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 20:09:05.010003 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 13 20:09:05.010010 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 20:09:05.010016 kernel: Console: colour VGA+ 80x25
Apr 13 20:09:05.010023 kernel: printk: console [tty0] enabled
Apr 13 20:09:05.010030 kernel: printk: console [ttyS0] enabled
Apr 13 20:09:05.010037 kernel: ACPI: Core revision 20230628
Apr 13 20:09:05.010046 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 13 20:09:05.010053 kernel: APIC: Switch to symmetric I/O mode setup
Apr 13 20:09:05.010060 kernel: x2apic enabled
Apr 13 20:09:05.010074 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 13 20:09:05.010084 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 13 20:09:05.010091 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 13 20:09:05.010098 kernel: kvm-guest: setup PV IPIs
Apr 13 20:09:05.010106 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 13 20:09:05.010113 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 13 20:09:05.010120 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Apr 13 20:09:05.010127 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 13 20:09:05.010137 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 13 20:09:05.010144 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 13 20:09:05.010151 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 13 20:09:05.010159 kernel: Spectre V2 : Mitigation: Retpolines
Apr 13 20:09:05.010166 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 13 20:09:05.010175 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 13 20:09:05.010183 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 13 20:09:05.010190 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 13 20:09:05.010197 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Apr 13 20:09:05.010205 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Apr 13 20:09:05.010212 kernel: active return thunk: srso_alias_return_thunk
Apr 13 20:09:05.010219 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Apr 13 20:09:05.010226 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 13 20:09:05.010236 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 13 20:09:05.010254 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 13 20:09:05.010261 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 13 20:09:05.010268 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 13 20:09:05.010276 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 13 20:09:05.010284 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 13 20:09:05.010291 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Apr 13 20:09:05.010298 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Apr 13 20:09:05.010305 kernel: Freeing SMP alternatives memory: 32K
Apr 13 20:09:05.010315 kernel: pid_max: default: 32768 minimum: 301
Apr 13 20:09:05.010322 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 20:09:05.010330 kernel: landlock: Up and running.
Apr 13 20:09:05.010337 kernel: SELinux: Initializing.
Apr 13 20:09:05.010344 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 20:09:05.010351 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 20:09:05.010358 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Apr 13 20:09:05.010366 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:09:05.010373 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:09:05.010383 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:09:05.010390 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 13 20:09:05.010397 kernel: ... version: 0
Apr 13 20:09:05.010404 kernel: ... bit width: 48
Apr 13 20:09:05.010411 kernel: ... generic registers: 6
Apr 13 20:09:05.010418 kernel: ... value mask: 0000ffffffffffff
Apr 13 20:09:05.010426 kernel: ... max period: 00007fffffffffff
Apr 13 20:09:05.010433 kernel: ... fixed-purpose events: 0
Apr 13 20:09:05.010440 kernel: ... event mask: 000000000000003f
Apr 13 20:09:05.010450 kernel: signal: max sigframe size: 3376
Apr 13 20:09:05.010457 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 20:09:05.010464 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 20:09:05.010471 kernel: smp: Bringing up secondary CPUs ...
Apr 13 20:09:05.010478 kernel: smpboot: x86: Booting SMP configuration:
Apr 13 20:09:05.010486 kernel: .... node #0, CPUs: #1
Apr 13 20:09:05.010493 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 20:09:05.010500 kernel: smpboot: Max logical packages: 1
Apr 13 20:09:05.010507 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Apr 13 20:09:05.010516 kernel: devtmpfs: initialized
Apr 13 20:09:05.010524 kernel: x86/mm: Memory block size: 128MB
Apr 13 20:09:05.010531 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 20:09:05.010538 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 20:09:05.010545 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 20:09:05.010553 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 20:09:05.010560 kernel: audit: initializing netlink subsys (disabled)
Apr 13 20:09:05.010567 kernel: audit: type=2000 audit(1776110944.663:1): state=initialized audit_enabled=0 res=1
Apr 13 20:09:05.010574 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 20:09:05.010583 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 13 20:09:05.010591 kernel: cpuidle: using governor menu
Apr 13 20:09:05.010598 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 20:09:05.010605 kernel: dca service started, version 1.12.1
Apr 13 20:09:05.010612 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 13 20:09:05.010619 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 13 20:09:05.010627 kernel: PCI: Using configuration type 1 for base access
Apr 13 20:09:05.010634 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 13 20:09:05.010641 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 20:09:05.010651 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 20:09:05.010658 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 20:09:05.010665 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 20:09:05.010672 kernel: ACPI: Added _OSI(Module Device)
Apr 13 20:09:05.010679 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 20:09:05.010687 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 20:09:05.010694 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 13 20:09:05.010701 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 13 20:09:05.010708 kernel: ACPI: Interpreter enabled
Apr 13 20:09:05.010718 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 13 20:09:05.010725 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 13 20:09:05.010732 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 13 20:09:05.010739 kernel: PCI: Using E820 reservations for host bridge windows
Apr 13 20:09:05.010746 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 13 20:09:05.010753 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 13 20:09:05.010935 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 20:09:05.011081 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 13 20:09:05.011214 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 13 20:09:05.011224 kernel: PCI host bridge to bus 0000:00
Apr 13 20:09:05.016587 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 13 20:09:05.016714 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 13 20:09:05.016833 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 13 20:09:05.016949 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Apr 13 20:09:05.017064 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 13 20:09:05.017186 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Apr 13 20:09:05.017319 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 13 20:09:05.017464 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 13 20:09:05.017601 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 13 20:09:05.017727 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 13 20:09:05.018017 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 13 20:09:05.018147 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 13 20:09:05.021209 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 13 20:09:05.021396 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Apr 13 20:09:05.021528 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Apr 13 20:09:05.021654 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 13 20:09:05.021778 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 13 20:09:05.021911 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Apr 13 20:09:05.022043 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Apr 13 20:09:05.022175 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 13 20:09:05.023353 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 13 20:09:05.023485 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 13 20:09:05.023620 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 13 20:09:05.023751 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 13 20:09:05.023892 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 13 20:09:05.024022 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Apr 13 20:09:05.024145 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Apr 13 20:09:05.026285 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 13 20:09:05.026422 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 13 20:09:05.026433 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 13 20:09:05.026441 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 13 20:09:05.026448 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 13 20:09:05.026460 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 13 20:09:05.026467 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 13 20:09:05.026475 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 13 20:09:05.026482 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 13 20:09:05.026489 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 13 20:09:05.026496 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 13 20:09:05.026503 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 13 20:09:05.026511 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 13 20:09:05.026518 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 13 20:09:05.026528 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 13 20:09:05.026535 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 13 20:09:05.026542 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 13 20:09:05.026549 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 13 20:09:05.026557 kernel: iommu: Default domain type: Translated
Apr 13 20:09:05.026564 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 13 20:09:05.026571 kernel: PCI: Using ACPI for IRQ routing
Apr 13 20:09:05.026579 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 13 20:09:05.026586 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Apr 13 20:09:05.026595 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Apr 13 20:09:05.026721 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 13 20:09:05.026844 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 13 20:09:05.026967 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 13 20:09:05.026976 kernel: vgaarb: loaded
Apr 13 20:09:05.026984 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 13 20:09:05.026991 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 13 20:09:05.026999 kernel: clocksource: Switched to clocksource kvm-clock
Apr 13 20:09:05.027010 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 20:09:05.027017 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 20:09:05.027024 kernel: pnp: PnP ACPI init
Apr 13 20:09:05.027163 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 13 20:09:05.027175 kernel: pnp: PnP ACPI: found 5 devices
Apr 13 20:09:05.027183 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 13 20:09:05.027191 kernel: NET: Registered PF_INET protocol family
Apr 13 20:09:05.027199 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 13 20:09:05.027210 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 13 20:09:05.027217 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 20:09:05.027225 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 13 20:09:05.027233 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 13 20:09:05.027254 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 13 20:09:05.027262 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 20:09:05.027270 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 20:09:05.027277 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 13 20:09:05.027285 kernel: NET: Registered PF_XDP protocol family
Apr 13 20:09:05.027418 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 13 20:09:05.027535 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 13 20:09:05.027649 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 13 20:09:05.027764 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Apr 13 20:09:05.027878 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 13 20:09:05.028013 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Apr 13 20:09:05.028025 kernel: PCI: CLS 0 bytes, default 64
Apr 13 20:09:05.028033 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 13 20:09:05.028041 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Apr 13 20:09:05.028054 kernel: Initialise system trusted keyrings
Apr 13 20:09:05.028062 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 13 20:09:05.028070 kernel: Key type asymmetric registered
Apr 13 20:09:05.028077 kernel: Asymmetric key parser 'x509' registered
Apr 13 20:09:05.028085 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 13 20:09:05.028092 kernel: io scheduler mq-deadline registered
Apr 13 20:09:05.028100 kernel: io scheduler kyber registered
Apr 13 20:09:05.028108 kernel: io scheduler bfq registered
Apr 13 20:09:05.028115 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 13 20:09:05.028126 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 13 20:09:05.028134 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 13 20:09:05.028141 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 13 20:09:05.028149 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 13 20:09:05.028157 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 13 20:09:05.028165 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 13 20:09:05.028172 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 13 20:09:05.029549 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 13 20:09:05.029568 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 13 20:09:05.029692 kernel: rtc_cmos 00:03: registered as rtc0
Apr 13 20:09:05.029813 kernel: rtc_cmos 00:03: setting system clock to 2026-04-13T20:09:04 UTC (1776110944)
Apr 13 20:09:05.029931 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 13 20:09:05.029942 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 13 20:09:05.029950 kernel: NET: Registered PF_INET6 protocol family
Apr 13 20:09:05.029958 kernel: Segment Routing with IPv6
Apr 13 20:09:05.029966 kernel: In-situ OAM (IOAM) with IPv6
Apr 13 20:09:05.029974 kernel: NET: Registered PF_PACKET protocol family
Apr 13 20:09:05.029986 kernel: Key type dns_resolver registered
Apr 13 20:09:05.029994 kernel: IPI shorthand broadcast: enabled
Apr 13 20:09:05.030002 kernel: sched_clock: Marking stable (914005310, 330696020)->(1391918180, -147216850)
Apr 13 20:09:05.030010 kernel: registered taskstats version 1
Apr 13 20:09:05.030018 kernel: Loading compiled-in X.509 certificates
Apr 13 20:09:05.030026 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00'
Apr 13 20:09:05.030034 kernel: Key type .fscrypt registered
Apr 13 20:09:05.030043 kernel: Key type fscrypt-provisioning registered
Apr 13 20:09:05.030051 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 13 20:09:05.030061 kernel: ima: Allocated hash algorithm: sha1
Apr 13 20:09:05.030069 kernel: ima: No architecture policies found
Apr 13 20:09:05.030077 kernel: clk: Disabling unused clocks
Apr 13 20:09:05.030085 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 13 20:09:05.030094 kernel: Write protecting the kernel read-only data: 36864k
Apr 13 20:09:05.030102 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 13 20:09:05.030110 kernel: Run /init as init process
Apr 13 20:09:05.030118 kernel: with arguments:
Apr 13 20:09:05.030128 kernel: /init
Apr 13 20:09:05.030136 kernel: with environment:
Apr 13 20:09:05.030144 kernel: HOME=/
Apr 13 20:09:05.030152 kernel: TERM=linux
Apr 13 20:09:05.030162 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 20:09:05.030172 systemd[1]: Detected virtualization kvm.
Apr 13 20:09:05.030181 systemd[1]: Detected architecture x86-64.
Apr 13 20:09:05.030189 systemd[1]: Running in initrd.
Apr 13 20:09:05.030200 systemd[1]: No hostname configured, using default hostname.
Apr 13 20:09:05.030208 systemd[1]: Hostname set to <localhost>.
Apr 13 20:09:05.030217 systemd[1]: Initializing machine ID from random generator.
Apr 13 20:09:05.030226 systemd[1]: Queued start job for default target initrd.target.
Apr 13 20:09:05.030235 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:09:05.031450 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:09:05.031465 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 20:09:05.031474 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 20:09:05.031483 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 20:09:05.031492 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 20:09:05.031503 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 20:09:05.031512 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 20:09:05.031523 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:09:05.031532 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:09:05.031541 systemd[1]: Reached target paths.target - Path Units.
Apr 13 20:09:05.031549 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 20:09:05.031558 systemd[1]: Reached target swap.target - Swaps.
Apr 13 20:09:05.031567 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 20:09:05.031576 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 20:09:05.031585 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 20:09:05.031594 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 20:09:05.031605 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 20:09:05.031614 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:09:05.031623 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:09:05.031632 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:09:05.031641 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 20:09:05.031649 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 20:09:05.031658 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 20:09:05.031667 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 20:09:05.031676 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 20:09:05.031687 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 20:09:05.031696 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 20:09:05.031725 systemd-journald[178]: Collecting audit messages is disabled.
Apr 13 20:09:05.031744 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:09:05.031756 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 20:09:05.031768 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:09:05.031777 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 20:09:05.031789 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 20:09:05.031799 systemd-journald[178]: Journal started
Apr 13 20:09:05.031817 systemd-journald[178]: Runtime Journal (/run/log/journal/e2bd1be3df4844f0a4500417c94e153c) is 8.0M, max 78.3M, 70.3M free.
Apr 13 20:09:05.036373 systemd-modules-load[179]: Inserted module 'overlay'
Apr 13 20:09:05.129656 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 20:09:05.129678 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 13 20:09:05.129693 kernel: Bridge firewalling registered
Apr 13 20:09:05.070119 systemd-modules-load[179]: Inserted module 'br_netfilter'
Apr 13 20:09:05.131851 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:09:05.132814 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:09:05.134677 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 20:09:05.141369 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 20:09:05.144750 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 20:09:05.149368 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 20:09:05.155962 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 20:09:05.179973 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:09:05.188569 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 13 20:09:05.192894 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:09:05.199289 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:09:05.202534 dracut-cmdline[210]: dracut-dracut-053
Apr 13 20:09:05.204884 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:09:05.207144 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:09:05.216419 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 20:09:05.246416 systemd-resolved[232]: Positive Trust Anchors:
Apr 13 20:09:05.246429 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 20:09:05.246455 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 20:09:05.253032 systemd-resolved[232]: Defaulting to hostname 'linux'.
Apr 13 20:09:05.254058 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 20:09:05.255299 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:09:05.281259 kernel: SCSI subsystem initialized
Apr 13 20:09:05.290261 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 20:09:05.301268 kernel: iscsi: registered transport (tcp)
Apr 13 20:09:05.323106 kernel: iscsi: registered transport (qla4xxx)
Apr 13 20:09:05.323184 kernel: QLogic iSCSI HBA Driver
Apr 13 20:09:05.380305 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 13 20:09:05.386551 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 13 20:09:05.427648 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 13 20:09:05.427692 kernel: device-mapper: uevent: version 1.0.3
Apr 13 20:09:05.427720 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 13 20:09:05.475281 kernel: raid6: avx2x4 gen() 28782 MB/s
Apr 13 20:09:05.494272 kernel: raid6: avx2x2 gen() 23929 MB/s
Apr 13 20:09:05.512647 kernel: raid6: avx2x1 gen() 18052 MB/s
Apr 13 20:09:05.512687 kernel: raid6: using algorithm avx2x4 gen() 28782 MB/s
Apr 13 20:09:05.535775 kernel: raid6: .... xor() 4020 MB/s, rmw enabled
Apr 13 20:09:05.535847 kernel: raid6: using avx2x2 recovery algorithm
Apr 13 20:09:05.559276 kernel: xor: automatically using best checksumming function avx
Apr 13 20:09:05.703288 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 13 20:09:05.715589 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 20:09:05.725457 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:09:05.737740 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Apr 13 20:09:05.743020 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:09:05.750364 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 13 20:09:05.767103 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation
Apr 13 20:09:05.800680 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 20:09:05.805388 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 20:09:05.880034 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:09:05.889412 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 13 20:09:05.907564 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 13 20:09:05.914111 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 20:09:05.915875 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:09:05.917325 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 20:09:05.924635 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 13 20:09:05.949756 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 20:09:05.969528 kernel: scsi host0: Virtio SCSI HBA
Apr 13 20:09:05.984305 kernel: libata version 3.00 loaded.
Apr 13 20:09:05.990270 kernel: cryptd: max_cpu_qlen set to 1000
Apr 13 20:09:05.994258 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 13 20:09:06.000279 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 20:09:06.171918 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:09:06.174649 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 20:09:06.175968 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:09:06.176152 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:09:06.179261 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:09:06.224490 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:09:06.235689 kernel: ahci 0000:00:1f.2: version 3.0
Apr 13 20:09:06.235964 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 13 20:09:06.251285 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 13 20:09:06.251354 kernel: AES CTR mode by8 optimization enabled
Apr 13 20:09:06.256988 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 13 20:09:06.257322 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 13 20:09:06.263292 kernel: scsi host1: ahci
Apr 13 20:09:06.263504 kernel: scsi host2: ahci
Apr 13 20:09:06.264675 kernel: scsi host3: ahci
Apr 13 20:09:06.270051 kernel: scsi host4: ahci
Apr 13 20:09:06.273431 kernel: scsi host5: ahci
Apr 13 20:09:06.274418 kernel: scsi host6: ahci
Apr 13 20:09:06.274604 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46
Apr 13 20:09:06.274617 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46
Apr 13 20:09:06.274627 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46
Apr 13 20:09:06.274637 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46
Apr 13 20:09:06.274646 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46
Apr 13 20:09:06.274656 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46
Apr 13 20:09:06.381691 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:09:06.391392 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 20:09:06.414653 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:09:06.592286 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Apr 13 20:09:06.592362 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 13 20:09:06.592377 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 13 20:09:06.592390 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 13 20:09:06.595269 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 13 20:09:06.597265 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 13 20:09:06.614121 kernel: sd 0:0:0:0: Power-on or device reset occurred
Apr 13 20:09:06.641871 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Apr 13 20:09:06.642145 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 13 20:09:06.644280 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Apr 13 20:09:06.644662 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 13 20:09:06.653566 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 13 20:09:06.653589 kernel: GPT:9289727 != 167739391
Apr 13 20:09:06.655314 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 13 20:09:06.657269 kernel: GPT:9289727 != 167739391
Apr 13 20:09:06.659795 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 13 20:09:06.662499 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 20:09:06.668169 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 13 20:09:06.705264 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (470)
Apr 13 20:09:06.706442 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 13 20:09:06.713172 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 13 20:09:06.715176 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (448)
Apr 13 20:09:06.729383 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 13 20:09:06.731738 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 13 20:09:06.738444 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 13 20:09:06.746588 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 13 20:09:06.753858 disk-uuid[570]: Primary Header is updated.
disk-uuid[570]: Secondary Entries is updated.
disk-uuid[570]: Secondary Header is updated.
Apr 13 20:09:06.761284 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 20:09:06.768567 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 20:09:06.775281 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 20:09:07.778286 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 20:09:07.779800 disk-uuid[571]: The operation has completed successfully.
Apr 13 20:09:07.842477 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 13 20:09:07.842687 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 13 20:09:07.859460 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 13 20:09:07.867058 sh[588]: Success
Apr 13 20:09:07.884274 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 13 20:09:07.938732 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 13 20:09:07.954338 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 13 20:09:07.956489 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 13 20:09:07.977724 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d
Apr 13 20:09:07.977762 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:09:07.980998 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 13 20:09:07.984508 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 13 20:09:07.989342 kernel: BTRFS info (device dm-0): using free space tree
Apr 13 20:09:07.998269 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 13 20:09:08.001121 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 13 20:09:08.002739 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 13 20:09:08.011558 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 13 20:09:08.015364 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 13 20:09:08.038898 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:09:08.038931 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:09:08.038947 kernel: BTRFS info (device sda6): using free space tree
Apr 13 20:09:08.046278 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 20:09:08.046305 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 20:09:08.067018 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:09:08.066744 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 13 20:09:08.074168 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 13 20:09:08.081388 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 13 20:09:08.139982 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 20:09:08.156383 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 20:09:08.171408 ignition[711]: Ignition 2.19.0
Apr 13 20:09:08.171420 ignition[711]: Stage: fetch-offline
Apr 13 20:09:08.171471 ignition[711]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:09:08.171486 ignition[711]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 13 20:09:08.171593 ignition[711]: parsed url from cmdline: ""
Apr 13 20:09:08.176128 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 20:09:08.171597 ignition[711]: no config URL provided
Apr 13 20:09:08.171603 ignition[711]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 20:09:08.171613 ignition[711]: no config at "/usr/lib/ignition/user.ign"
Apr 13 20:09:08.171619 ignition[711]: failed to fetch config: resource requires networking
Apr 13 20:09:08.171830 ignition[711]: Ignition finished successfully
Apr 13 20:09:08.184707 systemd-networkd[772]: lo: Link UP
Apr 13 20:09:08.184723 systemd-networkd[772]: lo: Gained carrier
Apr 13 20:09:08.187093 systemd-networkd[772]: Enumeration completed
Apr 13 20:09:08.187355 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 20:09:08.187889 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:09:08.187894 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 20:09:08.188685 systemd[1]: Reached target network.target - Network.
Apr 13 20:09:08.190060 systemd-networkd[772]: eth0: Link UP
Apr 13 20:09:08.190065 systemd-networkd[772]: eth0: Gained carrier
Apr 13 20:09:08.190073 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:09:08.197661 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 13 20:09:08.210584 ignition[780]: Ignition 2.19.0
Apr 13 20:09:08.210605 ignition[780]: Stage: fetch
Apr 13 20:09:08.210753 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:09:08.210765 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 13 20:09:08.210841 ignition[780]: parsed url from cmdline: ""
Apr 13 20:09:08.210845 ignition[780]: no config URL provided
Apr 13 20:09:08.210850 ignition[780]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 20:09:08.210859 ignition[780]: no config at "/usr/lib/ignition/user.ign"
Apr 13 20:09:08.210877 ignition[780]: PUT http://169.254.169.254/v1/token: attempt #1
Apr 13 20:09:08.211008 ignition[780]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 13 20:09:08.411208 ignition[780]: PUT http://169.254.169.254/v1/token: attempt #2
Apr 13 20:09:08.411383 ignition[780]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 13 20:09:08.811731 ignition[780]: PUT http://169.254.169.254/v1/token: attempt #3
Apr 13 20:09:08.811883 ignition[780]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 13 20:09:08.915321 systemd-networkd[772]: eth0: DHCPv4 address 172.239.193.191/24, gateway 172.239.193.1 acquired from 23.213.15.243
Apr 13 20:09:09.612942 ignition[780]: PUT http://169.254.169.254/v1/token: attempt #4
Apr 13 20:09:09.711318 ignition[780]: PUT result: OK
Apr 13 20:09:09.711388 ignition[780]: GET http://169.254.169.254/v1/user-data: attempt #1
Apr 13 20:09:09.777155 systemd-networkd[772]: eth0: Gained IPv6LL
Apr 13 20:09:09.820497 ignition[780]: GET result: OK
Apr 13 20:09:09.820595 ignition[780]: parsing config with SHA512: aaab54332f55be3fe2219d1dd6339326cc0f25ae06fd0eadc55870bd019766c49f632d0cfb1cb6f88464260ea2256b03176204127253ff8b03a6e7d1597da8c4
Apr 13 20:09:09.825410 unknown[780]: fetched base config from "system"
Apr 13 20:09:09.825676 ignition[780]: fetch: fetch complete
Apr 13 20:09:09.825421 unknown[780]: fetched base config from "system"
Apr 13 20:09:09.825682 ignition[780]: fetch: fetch passed
Apr 13 20:09:09.825427 unknown[780]: fetched user config from "akamai"
Apr 13 20:09:09.825723 ignition[780]: Ignition finished successfully
Apr 13 20:09:09.829539 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 13 20:09:09.835809 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 20:09:09.860339 ignition[788]: Ignition 2.19.0
Apr 13 20:09:09.860354 ignition[788]: Stage: kargs
Apr 13 20:09:09.860527 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:09:09.864509 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 20:09:09.860540 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 13 20:09:09.861616 ignition[788]: kargs: kargs passed
Apr 13 20:09:09.861660 ignition[788]: Ignition finished successfully
Apr 13 20:09:09.872813 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 13 20:09:09.888603 ignition[794]: Ignition 2.19.0
Apr 13 20:09:09.889395 ignition[794]: Stage: disks
Apr 13 20:09:09.889817 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:09:09.903617 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 20:09:09.889839 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 13 20:09:09.913970 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 20:09:09.891681 ignition[794]: disks: disks passed
Apr 13 20:09:09.915677 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 20:09:09.891758 ignition[794]: Ignition finished successfully
Apr 13 20:09:09.917616 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 20:09:09.919451 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 20:09:09.921300 systemd[1]: Reached target basic.target - Basic System.
Apr 13 20:09:09.929582 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 13 20:09:09.952398 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 13 20:09:09.956997 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 13 20:09:09.966396 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 13 20:09:10.056274 kernel: EXT4-fs (sda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none.
Apr 13 20:09:10.057394 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 13 20:09:10.058690 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 13 20:09:10.070342 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 20:09:10.074332 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 13 20:09:10.076397 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 13 20:09:10.076452 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 13 20:09:10.076477 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 20:09:10.088931 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 13 20:09:10.105444 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (810)
Apr 13 20:09:10.105479 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:09:10.105492 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:09:10.105506 kernel: BTRFS info (device sda6): using free space tree
Apr 13 20:09:10.105525 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 20:09:10.105544 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 20:09:10.107007 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 20:09:10.113574 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 13 20:09:10.168175 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Apr 13 20:09:10.174679 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Apr 13 20:09:10.181579 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Apr 13 20:09:10.188782 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 13 20:09:10.286887 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 13 20:09:10.299334 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 13 20:09:10.304375 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 13 20:09:10.310001 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 13 20:09:10.313201 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:09:10.346119 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 13 20:09:10.350838 ignition[926]: INFO : Ignition 2.19.0 Apr 13 20:09:10.350838 ignition[926]: INFO : Stage: mount Apr 13 20:09:10.352932 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:09:10.352932 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 13 20:09:10.352932 ignition[926]: INFO : mount: mount passed Apr 13 20:09:10.352932 ignition[926]: INFO : Ignition finished successfully Apr 13 20:09:10.354126 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 13 20:09:10.361387 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 13 20:09:11.063392 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 20:09:11.080510 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (939) Apr 13 20:09:11.080557 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:09:11.084716 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:09:11.087540 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:09:11.096968 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:09:11.097038 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:09:11.100095 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 13 20:09:11.126161 ignition[956]: INFO : Ignition 2.19.0 Apr 13 20:09:11.127349 ignition[956]: INFO : Stage: files Apr 13 20:09:11.128084 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:09:11.128084 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 13 20:09:11.130231 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Apr 13 20:09:11.131294 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 13 20:09:11.131294 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 13 20:09:11.133994 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 13 20:09:11.135352 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 13 20:09:11.136434 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 13 20:09:11.135398 unknown[956]: wrote ssh authorized keys file for user: core Apr 13 20:09:11.138549 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 13 20:09:11.138549 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 13 20:09:11.437086 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 13 20:09:11.556380 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Apr 13 20:09:12.076014 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 13 20:09:12.894100 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 13 20:09:12.894100 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 13 20:09:12.897864 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 20:09:12.897864 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 20:09:12.897864 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 13 20:09:12.897864 ignition[956]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 13 20:09:12.897864 ignition[956]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 13 20:09:12.897864 ignition[956]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 13 20:09:12.897864 ignition[956]: INFO : files: op(d): 
[finished] processing unit "coreos-metadata.service" Apr 13 20:09:12.897864 ignition[956]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Apr 13 20:09:12.897864 ignition[956]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Apr 13 20:09:12.897864 ignition[956]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 13 20:09:12.897864 ignition[956]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 13 20:09:12.897864 ignition[956]: INFO : files: files passed Apr 13 20:09:12.897864 ignition[956]: INFO : Ignition finished successfully Apr 13 20:09:12.904643 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 13 20:09:12.931758 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 13 20:09:12.937676 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 13 20:09:12.949626 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 13 20:09:12.949794 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 13 20:09:12.964137 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 20:09:12.964137 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 13 20:09:12.967276 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 20:09:12.969404 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 20:09:12.971812 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 13 20:09:12.977427 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 13 20:09:13.019489 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 13 20:09:13.020602 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 13 20:09:13.022918 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 13 20:09:13.023927 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 13 20:09:13.025787 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 13 20:09:13.032430 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 13 20:09:13.052318 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 20:09:13.070609 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 13 20:09:13.084534 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 13 20:09:13.086913 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 20:09:13.088040 systemd[1]: Stopped target timers.target - Timer Units. Apr 13 20:09:13.090107 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 13 20:09:13.090221 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 20:09:13.094366 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 13 20:09:13.096117 systemd[1]: Stopped target basic.target - Basic System. Apr 13 20:09:13.096963 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
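
The files stage above writes the user-visible payload: the helm tarball under /opt, the YAML manifests under /home/core, /etc/flatcar/update.conf, and a systemd-sysext link /etc/extensions/kubernetes.raw pointing at the downloaded kubernetes-v1.34.4-x86-64.raw image. A short Python sketch of that link step follows; the link and target paths are copied from the log, while the standalone-script framing and the SYSROOT constant are purely illustrative.

import os

SYSROOT = "/sysroot"  # Ignition writes into the still-unpivoted root

# /etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw
target = "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
link = os.path.join(SYSROOT, "etc/extensions/kubernetes.raw")

os.makedirs(os.path.dirname(link), exist_ok=True)
if not os.path.islink(link):
    os.symlink(target, link)

# At the next boot, systemd-sysext merges every image linked under
# /etc/extensions over /usr, which is how the Kubernetes tooling lands
# on an otherwise read-only Flatcar /usr.
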
Apr 13 20:09:13.098694 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 20:09:13.100836 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 13 20:09:13.102710 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 13 20:09:13.104532 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 20:09:13.106757 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 13 20:09:13.108603 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 13 20:09:13.110287 systemd[1]: Stopped target swap.target - Swaps. Apr 13 20:09:13.112068 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 13 20:09:13.112205 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 13 20:09:13.114264 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 13 20:09:13.115801 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 20:09:13.117863 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 13 20:09:13.118299 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 20:09:13.119580 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 13 20:09:13.119693 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 13 20:09:13.122343 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 13 20:09:13.122564 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 20:09:13.124088 systemd[1]: ignition-files.service: Deactivated successfully. Apr 13 20:09:13.124300 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 13 20:09:13.131641 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 13 20:09:13.134379 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 13 20:09:13.136441 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 13 20:09:13.136787 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 20:09:13.137799 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 13 20:09:13.137938 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 13 20:09:13.149312 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 13 20:09:13.149461 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 13 20:09:13.161256 ignition[1009]: INFO : Ignition 2.19.0 Apr 13 20:09:13.161256 ignition[1009]: INFO : Stage: umount Apr 13 20:09:13.161256 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:09:13.161256 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 13 20:09:13.171315 ignition[1009]: INFO : umount: umount passed Apr 13 20:09:13.171315 ignition[1009]: INFO : Ignition finished successfully Apr 13 20:09:13.163140 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 13 20:09:13.163292 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 13 20:09:13.164681 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 13 20:09:13.164772 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 13 20:09:13.170364 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 13 20:09:13.170603 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Apr 13 20:09:13.173494 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 13 20:09:13.173745 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 13 20:09:13.175095 systemd[1]: Stopped target network.target - Network. Apr 13 20:09:13.175799 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 13 20:09:13.175857 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 20:09:13.178349 systemd[1]: Stopped target paths.target - Path Units. Apr 13 20:09:13.179279 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 13 20:09:13.183421 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 20:09:13.184579 systemd[1]: Stopped target slices.target - Slice Units. Apr 13 20:09:13.208204 systemd[1]: Stopped target sockets.target - Socket Units. Apr 13 20:09:13.210126 systemd[1]: iscsid.socket: Deactivated successfully. Apr 13 20:09:13.210176 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 20:09:13.212278 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 13 20:09:13.212323 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 20:09:13.213923 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 13 20:09:13.213975 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 13 20:09:13.215756 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 13 20:09:13.215806 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 13 20:09:13.218301 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 13 20:09:13.220355 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 13 20:09:13.223608 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 13 20:09:13.224400 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 13 20:09:13.224431 systemd-networkd[772]: eth0: DHCPv6 lease lost Apr 13 20:09:13.224509 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 13 20:09:13.227595 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 13 20:09:13.227785 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 13 20:09:13.232887 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 13 20:09:13.233037 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 13 20:09:13.238854 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 13 20:09:13.238918 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 13 20:09:13.240675 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 13 20:09:13.240743 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 13 20:09:13.249951 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 13 20:09:13.251324 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 13 20:09:13.251381 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 20:09:13.253339 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 13 20:09:13.253389 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 13 20:09:13.255602 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 13 20:09:13.255655 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Apr 13 20:09:13.257006 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 13 20:09:13.257055 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 20:09:13.258876 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 20:09:13.273899 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 13 20:09:13.274031 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 13 20:09:13.285017 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 13 20:09:13.285220 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 20:09:13.287274 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 13 20:09:13.287347 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 13 20:09:13.289095 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 13 20:09:13.289139 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 20:09:13.291069 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 13 20:09:13.291123 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 13 20:09:13.294041 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 13 20:09:13.294110 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 13 20:09:13.295936 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 20:09:13.295989 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:09:13.307858 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 13 20:09:13.309756 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 13 20:09:13.309818 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 20:09:13.312592 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 13 20:09:13.312647 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 20:09:13.313798 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 13 20:09:13.313866 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 20:09:13.314652 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 20:09:13.314703 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:09:13.317857 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 13 20:09:13.317988 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 13 20:09:13.319865 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 13 20:09:13.328269 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 13 20:09:13.338829 systemd[1]: Switching root. 
Apr 13 20:09:13.378490 systemd-journald[178]: Journal stopped
Apr 13 20:09:06.731738 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 13 20:09:06.738444 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 13 20:09:06.746588 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 13 20:09:06.753858 disk-uuid[570]: Primary Header is updated. Apr 13 20:09:06.753858 disk-uuid[570]: Secondary Entries is updated. Apr 13 20:09:06.753858 disk-uuid[570]: Secondary Header is updated. Apr 13 20:09:06.761284 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:09:06.768567 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:09:06.775281 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:09:07.778286 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:09:07.779800 disk-uuid[571]: The operation has completed successfully. Apr 13 20:09:07.842477 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 13 20:09:07.842687 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 13 20:09:07.859460 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 13 20:09:07.867058 sh[588]: Success Apr 13 20:09:07.884274 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 13 20:09:07.938732 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 13 20:09:07.954338 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 13 20:09:07.956489 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 13 20:09:07.977724 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d Apr 13 20:09:07.977762 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:09:07.980998 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 13 20:09:07.984508 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 13 20:09:07.989342 kernel: BTRFS info (device dm-0): using free space tree Apr 13 20:09:07.998269 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 13 20:09:08.001121 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 13 20:09:08.002739 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 13 20:09:08.011558 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 13 20:09:08.015364 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 13 20:09:08.038898 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:09:08.038931 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:09:08.038947 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:09:08.046278 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:09:08.046305 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:09:08.067018 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:09:08.066744 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 13 20:09:08.074168 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 13 20:09:08.081388 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
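The GPT complaints above ("GPT:9289727 != 167739391") are the expected symptom of a fixed-size image deployed onto a larger disk: the backup GPT header still sits where the end of the original image was, while it belongs at the device's last LBA. That is what disk-uuid.service then repairs, per the "Secondary Entries is updated. Secondary Header is updated." lines. The arithmetic, using the sizes from the log and assuming the logged 512-byte sectors:

# Worked example of the mismatch the kernel logs before disk-uuid.service runs.
SECTOR = 512
total_sectors = 167_739_392          # "sd 0:0:0:0: [sda] 167739392 512-byte logical blocks"
expected_alt_lba = total_sectors - 1 # backup GPT header must live at the last LBA
recorded_alt_lba = 9_289_727         # what the on-disk (image-built) header claims

print(expected_alt_lba)              # 167739391
print(recorded_alt_lba)              # 9289727
print(f"{total_sectors * SECTOR / 1e9:.1f} GB / "
      f"{total_sectors * SECTOR / 2**30:.1f} GiB")            # 85.9 GB / 80.0 GiB, as logged
print(f"image built for ~{(recorded_alt_lba + 1) * SECTOR / 2**30:.1f} GiB")  # ~4.4 GiB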
Apr 13 20:09:08.139982 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 20:09:08.156383 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 20:09:08.171408 ignition[711]: Ignition 2.19.0 Apr 13 20:09:08.171420 ignition[711]: Stage: fetch-offline Apr 13 20:09:08.171471 ignition[711]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:09:08.171486 ignition[711]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 13 20:09:08.171593 ignition[711]: parsed url from cmdline: "" Apr 13 20:09:08.176128 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 20:09:08.171597 ignition[711]: no config URL provided Apr 13 20:09:08.171603 ignition[711]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 20:09:08.171613 ignition[711]: no config at "/usr/lib/ignition/user.ign" Apr 13 20:09:08.171619 ignition[711]: failed to fetch config: resource requires networking Apr 13 20:09:08.171830 ignition[711]: Ignition finished successfully Apr 13 20:09:08.184707 systemd-networkd[772]: lo: Link UP Apr 13 20:09:08.184723 systemd-networkd[772]: lo: Gained carrier Apr 13 20:09:08.187093 systemd-networkd[772]: Enumeration completed Apr 13 20:09:08.187355 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 20:09:08.187889 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:09:08.187894 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 20:09:08.188685 systemd[1]: Reached target network.target - Network. Apr 13 20:09:08.190060 systemd-networkd[772]: eth0: Link UP Apr 13 20:09:08.190065 systemd-networkd[772]: eth0: Gained carrier Apr 13 20:09:08.190073 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:09:08.197661 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 13 20:09:08.210584 ignition[780]: Ignition 2.19.0 Apr 13 20:09:08.210605 ignition[780]: Stage: fetch Apr 13 20:09:08.210753 ignition[780]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:09:08.210765 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 13 20:09:08.210841 ignition[780]: parsed url from cmdline: "" Apr 13 20:09:08.210845 ignition[780]: no config URL provided Apr 13 20:09:08.210850 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 20:09:08.210859 ignition[780]: no config at "/usr/lib/ignition/user.ign" Apr 13 20:09:08.210877 ignition[780]: PUT http://169.254.169.254/v1/token: attempt #1 Apr 13 20:09:08.211008 ignition[780]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 13 20:09:08.411208 ignition[780]: PUT http://169.254.169.254/v1/token: attempt #2 Apr 13 20:09:08.411383 ignition[780]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 13 20:09:08.811731 ignition[780]: PUT http://169.254.169.254/v1/token: attempt #3 Apr 13 20:09:08.811883 ignition[780]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 13 20:09:08.915321 systemd-networkd[772]: eth0: DHCPv4 address 172.239.193.191/24, gateway 172.239.193.1 acquired from 23.213.15.243 Apr 13 20:09:09.612942 ignition[780]: PUT http://169.254.169.254/v1/token: attempt #4 Apr 13 20:09:09.711318 ignition[780]: PUT result: OK Apr 13 20:09:09.711388 ignition[780]: GET http://169.254.169.254/v1/user-data: attempt #1 Apr 13 20:09:09.777155 systemd-networkd[772]: eth0: Gained IPv6LL Apr 13 20:09:09.820497 ignition[780]: GET result: OK Apr 13 20:09:09.820595 ignition[780]: parsing config with SHA512: aaab54332f55be3fe2219d1dd6339326cc0f25ae06fd0eadc55870bd019766c49f632d0cfb1cb6f88464260ea2256b03176204127253ff8b03a6e7d1597da8c4 Apr 13 20:09:09.825410 unknown[780]: fetched base config from "system" Apr 13 20:09:09.825676 ignition[780]: fetch: fetch complete Apr 13 20:09:09.825421 unknown[780]: fetched base config from "system" Apr 13 20:09:09.825682 ignition[780]: fetch: fetch passed Apr 13 20:09:09.825427 unknown[780]: fetched user config from "akamai" Apr 13 20:09:09.825723 ignition[780]: Ignition finished successfully Apr 13 20:09:09.829539 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 13 20:09:09.835809 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 13 20:09:09.860339 ignition[788]: Ignition 2.19.0 Apr 13 20:09:09.860354 ignition[788]: Stage: kargs Apr 13 20:09:09.860527 ignition[788]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:09:09.864509 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 13 20:09:09.860540 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 13 20:09:09.861616 ignition[788]: kargs: kargs passed Apr 13 20:09:09.861660 ignition[788]: Ignition finished successfully Apr 13 20:09:09.872813 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 13 20:09:09.888603 ignition[794]: Ignition 2.19.0 Apr 13 20:09:09.889395 ignition[794]: Stage: disks Apr 13 20:09:09.889817 ignition[794]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:09:09.903617 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
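The fetch stage above retries PUT http://169.254.169.254/v1/token at roughly 0.2 s, 0.4 s, and 0.8 s intervals until DHCP lands on eth0, then exchanges the token for GET /v1/user-data. A sketch of that token-then-fetch pattern with exponential backoff; the two endpoint paths are taken from the log, while the header name, timeouts, and attempt cap are assumptions for illustration:

# Sketch of the metadata flow visible in the log: PUT /v1/token until the
# network is up, then GET /v1/user-data using the token. Not Ignition's code.
import time
import urllib.request

BASE = "http://169.254.169.254/v1"

def fetch_user_data(max_attempts: int = 10) -> bytes:
    delay = 0.2                                    # gaps in the log: ~0.2s, 0.4s, 0.8s
    for attempt in range(1, max_attempts + 1):
        try:
            req = urllib.request.Request(f"{BASE}/token", method="PUT")
            token = urllib.request.urlopen(req, timeout=5).read().decode()
            req = urllib.request.Request(f"{BASE}/user-data",
                                         headers={"Metadata-Token": token})  # assumed header name
            return urllib.request.urlopen(req, timeout=5).read()
        except OSError:                            # covers "network is unreachable", timeouts
            time.sleep(delay)
            delay *= 2                             # exponential backoff
    raise RuntimeError("metadata service unreachable")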
Apr 13 20:09:09.889839 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 13 20:09:09.913970 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 13 20:09:09.891681 ignition[794]: disks: disks passed Apr 13 20:09:09.915677 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 13 20:09:09.891758 ignition[794]: Ignition finished successfully Apr 13 20:09:09.917616 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 20:09:09.919451 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 20:09:09.921300 systemd[1]: Reached target basic.target - Basic System. Apr 13 20:09:09.929582 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 13 20:09:09.952398 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 13 20:09:09.956997 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 13 20:09:09.966396 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 13 20:09:10.056274 kernel: EXT4-fs (sda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none. Apr 13 20:09:10.057394 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 13 20:09:10.058690 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 13 20:09:10.070342 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 20:09:10.074332 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 13 20:09:10.076397 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 13 20:09:10.076452 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 13 20:09:10.076477 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 20:09:10.088931 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 13 20:09:10.105444 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (810) Apr 13 20:09:10.105479 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:09:10.105492 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:09:10.105506 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:09:10.105525 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:09:10.105544 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:09:10.107007 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 13 20:09:10.113574 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 13 20:09:10.168175 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Apr 13 20:09:10.174679 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory Apr 13 20:09:10.181579 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory Apr 13 20:09:10.188782 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory Apr 13 20:09:10.286887 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 13 20:09:10.299334 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 13 20:09:10.304375 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Apr 13 20:09:10.310001 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 13 20:09:10.313201 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:09:10.346119 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 13 20:09:10.350838 ignition[926]: INFO : Ignition 2.19.0 Apr 13 20:09:10.350838 ignition[926]: INFO : Stage: mount Apr 13 20:09:10.352932 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:09:10.352932 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 13 20:09:10.352932 ignition[926]: INFO : mount: mount passed Apr 13 20:09:10.352932 ignition[926]: INFO : Ignition finished successfully Apr 13 20:09:10.354126 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 13 20:09:10.361387 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 13 20:09:11.063392 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 20:09:11.080510 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (939) Apr 13 20:09:11.080557 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:09:11.084716 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:09:11.087540 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:09:11.096968 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:09:11.097038 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:09:11.100095 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 13 20:09:11.126161 ignition[956]: INFO : Ignition 2.19.0 Apr 13 20:09:11.127349 ignition[956]: INFO : Stage: files Apr 13 20:09:11.128084 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:09:11.128084 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 13 20:09:11.130231 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Apr 13 20:09:11.131294 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 13 20:09:11.131294 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 13 20:09:11.133994 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 13 20:09:11.135352 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 13 20:09:11.136434 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 13 20:09:11.135398 unknown[956]: wrote ssh authorized keys file for user: core Apr 13 20:09:11.138549 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 13 20:09:11.138549 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 13 20:09:11.437086 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 13 20:09:11.556380 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 13 20:09:11.558346 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Apr 13 20:09:12.076014 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 13 20:09:12.894100 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 13 20:09:12.894100 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 13 20:09:12.897864 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 20:09:12.897864 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 20:09:12.897864 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 13 20:09:12.897864 ignition[956]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 13 20:09:12.897864 ignition[956]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 13 20:09:12.897864 ignition[956]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 13 20:09:12.897864 ignition[956]: INFO : files: op(d): 
[finished] processing unit "coreos-metadata.service" Apr 13 20:09:12.897864 ignition[956]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Apr 13 20:09:12.897864 ignition[956]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Apr 13 20:09:12.897864 ignition[956]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 13 20:09:12.897864 ignition[956]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 13 20:09:12.897864 ignition[956]: INFO : files: files passed Apr 13 20:09:12.897864 ignition[956]: INFO : Ignition finished successfully Apr 13 20:09:12.904643 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 13 20:09:12.931758 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 13 20:09:12.937676 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 13 20:09:12.949626 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 13 20:09:12.949794 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 13 20:09:12.964137 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 20:09:12.964137 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 13 20:09:12.967276 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 20:09:12.969404 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 20:09:12.971812 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 13 20:09:12.977427 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 13 20:09:13.019489 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 13 20:09:13.020602 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 13 20:09:13.022918 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 13 20:09:13.023927 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 13 20:09:13.025787 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 13 20:09:13.032430 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 13 20:09:13.052318 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 20:09:13.070609 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 13 20:09:13.084534 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 13 20:09:13.086913 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 20:09:13.088040 systemd[1]: Stopped target timers.target - Timer Units. Apr 13 20:09:13.090107 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 13 20:09:13.090221 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 20:09:13.094366 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 13 20:09:13.096117 systemd[1]: Stopped target basic.target - Basic System. Apr 13 20:09:13.096963 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
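The files stage that just completed wrote exactly the pieces a config of this shape declares: SSH keys for the core user, files fetched by URL, a link under /etc/extensions, a unit enabled via preset, and a drop-in for coreos-metadata.service. A hedged reconstruction of that shape as Ignition v3-style JSON; field names follow the Ignition v3 spec, paths and URLs are from the log, and the remaining values are placeholders, not this instance's actual config:

# Illustrative reconstruction of the config *shape* the files stage executed.
import json

config = {
    "ignition": {"version": "3.3.0"},
    "passwd": {"users": [
        {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"]},  # placeholder key
    ]},
    "storage": {
        "files": [{
            "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
            "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"},
        }],
        "links": [{
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw",
        }],
    },
    "systemd": {"units": [
        {"name": "prepare-helm.service", "enabled": True,
         "contents": "[Unit]\nDescription=placeholder\n"},
        {"name": "coreos-metadata.service",
         "dropins": [{"name": "00-custom-metadata.conf", "contents": "[Service]\n# placeholder\n"}]},
    ]},
}
print(json.dumps(config, indent=2))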
Apr 13 20:09:13.098694 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 20:09:13.100836 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 13 20:09:13.102710 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 13 20:09:13.104532 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 20:09:13.106757 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 13 20:09:13.108603 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 13 20:09:13.110287 systemd[1]: Stopped target swap.target - Swaps. Apr 13 20:09:13.112068 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 13 20:09:13.112205 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 13 20:09:13.114264 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 13 20:09:13.115801 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 20:09:13.117863 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 13 20:09:13.118299 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 20:09:13.119580 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 13 20:09:13.119693 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 13 20:09:13.122343 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 13 20:09:13.122564 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 20:09:13.124088 systemd[1]: ignition-files.service: Deactivated successfully. Apr 13 20:09:13.124300 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 13 20:09:13.131641 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 13 20:09:13.134379 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 13 20:09:13.136441 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 13 20:09:13.136787 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 20:09:13.137799 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 13 20:09:13.137938 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 13 20:09:13.149312 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 13 20:09:13.149461 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 13 20:09:13.161256 ignition[1009]: INFO : Ignition 2.19.0 Apr 13 20:09:13.161256 ignition[1009]: INFO : Stage: umount Apr 13 20:09:13.161256 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:09:13.161256 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 13 20:09:13.171315 ignition[1009]: INFO : umount: umount passed Apr 13 20:09:13.171315 ignition[1009]: INFO : Ignition finished successfully Apr 13 20:09:13.163140 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 13 20:09:13.163292 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 13 20:09:13.164681 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 13 20:09:13.164772 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 13 20:09:13.170364 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 13 20:09:13.170603 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Apr 13 20:09:13.173494 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 13 20:09:13.173745 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 13 20:09:13.175095 systemd[1]: Stopped target network.target - Network. Apr 13 20:09:13.175799 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 13 20:09:13.175857 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 20:09:13.178349 systemd[1]: Stopped target paths.target - Path Units. Apr 13 20:09:13.179279 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 13 20:09:13.183421 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 20:09:13.184579 systemd[1]: Stopped target slices.target - Slice Units. Apr 13 20:09:13.208204 systemd[1]: Stopped target sockets.target - Socket Units. Apr 13 20:09:13.210126 systemd[1]: iscsid.socket: Deactivated successfully. Apr 13 20:09:13.210176 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 20:09:13.212278 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 13 20:09:13.212323 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 20:09:13.213923 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 13 20:09:13.213975 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 13 20:09:13.215756 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 13 20:09:13.215806 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 13 20:09:13.218301 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 13 20:09:13.220355 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 13 20:09:13.223608 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 13 20:09:13.224400 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 13 20:09:13.224431 systemd-networkd[772]: eth0: DHCPv6 lease lost Apr 13 20:09:13.224509 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 13 20:09:13.227595 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 13 20:09:13.227785 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 13 20:09:13.232887 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 13 20:09:13.233037 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 13 20:09:13.238854 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 13 20:09:13.238918 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 13 20:09:13.240675 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 13 20:09:13.240743 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 13 20:09:13.249951 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 13 20:09:13.251324 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 13 20:09:13.251381 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 20:09:13.253339 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 13 20:09:13.253389 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 13 20:09:13.255602 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 13 20:09:13.255655 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Apr 13 20:09:13.257006 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 13 20:09:13.257055 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 20:09:13.258876 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 20:09:13.273899 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 13 20:09:13.274031 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 13 20:09:13.285017 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 13 20:09:13.285220 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 20:09:13.287274 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 13 20:09:13.287347 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 13 20:09:13.289095 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 13 20:09:13.289139 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 20:09:13.291069 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 13 20:09:13.291123 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 13 20:09:13.294041 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 13 20:09:13.294110 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 13 20:09:13.295936 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 20:09:13.295989 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:09:13.307858 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 13 20:09:13.309756 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 13 20:09:13.309818 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 20:09:13.312592 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 13 20:09:13.312647 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 20:09:13.313798 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 13 20:09:13.313866 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 20:09:13.314652 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 20:09:13.314703 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:09:13.317857 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 13 20:09:13.317988 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 13 20:09:13.319865 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 13 20:09:13.328269 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 13 20:09:13.338829 systemd[1]: Switching root. Apr 13 20:09:13.378490 systemd-journald[178]: Journal stopped Apr 13 20:09:14.629964 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). 
Apr 13 20:09:14.629992 kernel: SELinux: policy capability network_peer_controls=1 Apr 13 20:09:14.630005 kernel: SELinux: policy capability open_perms=1 Apr 13 20:09:14.630015 kernel: SELinux: policy capability extended_socket_class=1 Apr 13 20:09:14.630028 kernel: SELinux: policy capability always_check_network=0 Apr 13 20:09:14.630037 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 13 20:09:14.630048 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 13 20:09:14.630057 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 13 20:09:14.630066 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 13 20:09:14.630076 kernel: audit: type=1403 audit(1776110953.534:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 13 20:09:14.630086 systemd[1]: Successfully loaded SELinux policy in 61.525ms. Apr 13 20:09:14.630100 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.051ms. Apr 13 20:09:14.630111 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 13 20:09:14.630122 systemd[1]: Detected virtualization kvm. Apr 13 20:09:14.630132 systemd[1]: Detected architecture x86-64. Apr 13 20:09:14.630143 systemd[1]: Detected first boot. Apr 13 20:09:14.630156 systemd[1]: Initializing machine ID from random generator. Apr 13 20:09:14.630166 zram_generator::config[1052]: No configuration found. Apr 13 20:09:14.630177 systemd[1]: Populated /etc with preset unit settings. Apr 13 20:09:14.630186 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 13 20:09:14.630196 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 13 20:09:14.630206 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 13 20:09:14.630217 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 13 20:09:14.630230 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 13 20:09:14.630853 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 13 20:09:14.630876 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 13 20:09:14.630888 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 13 20:09:14.630899 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 13 20:09:14.630912 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 13 20:09:14.630922 systemd[1]: Created slice user.slice - User and Session Slice. Apr 13 20:09:14.630937 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 20:09:14.630948 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 20:09:14.630958 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 13 20:09:14.630969 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 13 20:09:14.630979 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
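"Initializing machine ID from random generator" means PID 1 found no committed /etc/machine-id on this first boot and minted one from random bits. A conceptual sketch, assuming the documented v4-UUID-style layout; systemd's sd_id128 code additionally handles container- and VM-provided IDs and the first-boot commit semantics:

# Conceptual sketch only: 128 random bits, coerced to a v4-UUID-style layout,
# rendered as 32 lowercase hex characters (the /etc/machine-id format).
import secrets

def random_machine_id() -> str:
    b = bytearray(secrets.token_bytes(16))
    b[6] = (b[6] & 0x0F) | 0x40   # version 4 nibble
    b[8] = (b[8] & 0x3F) | 0x80   # RFC 4122 variant bits
    return b.hex()                # machine-id: hex, no dashes

print(random_machine_id())        # compare the journal directory name logged shortly after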
Apr 13 20:09:14.630990 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 13 20:09:14.631000 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 13 20:09:14.631011 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 20:09:14.631031 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 13 20:09:14.631042 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 13 20:09:14.631056 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 13 20:09:14.631070 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 13 20:09:14.631089 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 20:09:14.631110 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 20:09:14.631121 systemd[1]: Reached target slices.target - Slice Units. Apr 13 20:09:14.631132 systemd[1]: Reached target swap.target - Swaps. Apr 13 20:09:14.631146 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 13 20:09:14.631157 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 13 20:09:14.631168 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 20:09:14.631180 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 20:09:14.631190 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 20:09:14.631204 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 13 20:09:14.631215 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 13 20:09:14.631225 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 13 20:09:14.631236 systemd[1]: Mounting media.mount - External Media Directory... Apr 13 20:09:14.631263 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 20:09:14.631274 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 13 20:09:14.631284 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 13 20:09:14.631295 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 13 20:09:14.631309 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 13 20:09:14.631320 systemd[1]: Reached target machines.target - Containers. Apr 13 20:09:14.631331 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 13 20:09:14.631341 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 20:09:14.631352 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 13 20:09:14.631362 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 13 20:09:14.631373 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 20:09:14.631384 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 13 20:09:14.631397 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 20:09:14.631408 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Apr 13 20:09:14.631418 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 20:09:14.631431 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 13 20:09:14.631441 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 13 20:09:14.631452 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 13 20:09:14.631462 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 13 20:09:14.631473 systemd[1]: Stopped systemd-fsck-usr.service. Apr 13 20:09:14.631486 kernel: loop: module loaded Apr 13 20:09:14.631496 kernel: fuse: init (API version 7.39) Apr 13 20:09:14.631506 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 13 20:09:14.631516 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 13 20:09:14.631527 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 13 20:09:14.631537 kernel: ACPI: bus type drm_connector registered Apr 13 20:09:14.631744 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 13 20:09:14.631754 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 20:09:14.631765 systemd[1]: verity-setup.service: Deactivated successfully. Apr 13 20:09:14.631778 systemd[1]: Stopped verity-setup.service. Apr 13 20:09:14.631791 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 20:09:14.631833 systemd-journald[1132]: Collecting audit messages is disabled. Apr 13 20:09:14.631856 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 13 20:09:14.631878 systemd-journald[1132]: Journal started Apr 13 20:09:14.631897 systemd-journald[1132]: Runtime Journal (/run/log/journal/4c155cbf2405418c8f986fc2008d8c39) is 8.0M, max 78.3M, 70.3M free. Apr 13 20:09:14.637110 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 13 20:09:14.196343 systemd[1]: Queued start job for default target multi-user.target. Apr 13 20:09:14.218896 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 13 20:09:14.219476 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 13 20:09:14.647190 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 20:09:14.644663 systemd[1]: Mounted media.mount - External Media Directory. Apr 13 20:09:14.645534 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 13 20:09:14.646404 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 13 20:09:14.647760 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 13 20:09:14.649022 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 13 20:09:14.650180 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 20:09:14.651602 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 13 20:09:14.652029 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 13 20:09:14.653575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 20:09:14.653795 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 20:09:14.655107 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Apr 13 20:09:14.655354 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 13 20:09:14.656504 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 20:09:14.656739 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 20:09:14.658183 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 13 20:09:14.658625 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 13 20:09:14.660186 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 20:09:14.660381 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 20:09:14.661640 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 20:09:14.685829 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 13 20:09:14.687203 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 13 20:09:14.705808 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 13 20:09:14.714799 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 13 20:09:14.721304 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 13 20:09:14.723138 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 13 20:09:14.723229 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 20:09:14.725231 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 13 20:09:14.732805 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 13 20:09:14.739430 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 13 20:09:14.740357 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 20:09:14.743777 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 13 20:09:14.752405 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 13 20:09:14.753222 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 20:09:14.754365 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 13 20:09:14.755752 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 20:09:14.759383 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 20:09:14.764409 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 13 20:09:14.770404 systemd-journald[1132]: Time spent on flushing to /var/log/journal/4c155cbf2405418c8f986fc2008d8c39 is 28.668ms for 974 entries. Apr 13 20:09:14.770404 systemd-journald[1132]: System Journal (/var/log/journal/4c155cbf2405418c8f986fc2008d8c39) is 8.0M, max 195.6M, 187.6M free. Apr 13 20:09:14.832460 systemd-journald[1132]: Received client request to flush runtime journal. Apr 13 20:09:14.775380 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 13 20:09:14.778989 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Apr 13 20:09:14.782278 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 13 20:09:14.785821 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 13 20:09:14.791386 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 20:09:14.801641 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 13 20:09:14.836692 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 13 20:09:14.838618 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 13 20:09:14.843986 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 13 20:09:14.868397 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 13 20:09:14.887985 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 20:09:14.900270 kernel: loop0: detected capacity change from 0 to 142488 Apr 13 20:09:14.902938 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 13 20:09:14.910474 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Apr 13 20:09:14.911218 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 13 20:09:14.911832 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 13 20:09:14.912356 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Apr 13 20:09:14.922148 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 20:09:14.935375 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 13 20:09:14.956269 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 13 20:09:14.988293 kernel: loop1: detected capacity change from 0 to 8 Apr 13 20:09:14.987419 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 13 20:09:14.996799 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 20:09:15.025263 kernel: loop2: detected capacity change from 0 to 219192 Apr 13 20:09:15.059176 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Apr 13 20:09:15.060068 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Apr 13 20:09:15.073104 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 20:09:15.079293 kernel: loop3: detected capacity change from 0 to 140768 Apr 13 20:09:15.128265 kernel: loop4: detected capacity change from 0 to 142488 Apr 13 20:09:15.153263 kernel: loop5: detected capacity change from 0 to 8 Apr 13 20:09:15.157278 kernel: loop6: detected capacity change from 0 to 219192 Apr 13 20:09:15.184271 kernel: loop7: detected capacity change from 0 to 140768 Apr 13 20:09:15.211203 (sd-merge)[1201]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Apr 13 20:09:15.212549 (sd-merge)[1201]: Merged extensions into '/usr'. Apr 13 20:09:15.220358 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... Apr 13 20:09:15.220373 systemd[1]: Reloading... Apr 13 20:09:15.331268 zram_generator::config[1227]: No configuration found. Apr 13 20:09:15.342285 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
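The (sd-merge) lines show systemd-sysext stacking the four extension images over /usr, with loop0–loop7 above as their backing devices. Conceptually, the merged view is a read-only overlayfs whose upper layers are the extensions' /usr trees and whose bottom layer is the base /usr. A sketch of that layer ordering; the staging paths are assumptions, and real systemd-sysext also validates each image's extension-release metadata before merging:

# Conceptual layer ordering for the sysext merge logged above. In overlayfs,
# the leftmost lowerdir has the highest precedence, so extensions come first
# and the base /usr goes last. Illustration only, not systemd-sysext's code.
extensions = ["containerd-flatcar", "docker-flatcar", "kubernetes", "oem-akamai"]

lowerdirs = [f"/run/extensions/{name}/usr" for name in extensions]  # assumed staging paths
lowerdirs.append("/usr")   # base layer, lowest precedence

opts = "lowerdir=" + ":".join(lowerdirs)
print(f"mount -t overlay overlay -o {opts} /usr   # read-only merge, no upperdir")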
Apr 13 20:09:15.471983 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:09:15.515975 systemd[1]: Reloading finished in 294 ms. Apr 13 20:09:15.549919 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 13 20:09:15.551192 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 13 20:09:15.552408 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 13 20:09:15.565446 systemd[1]: Starting ensure-sysext.service... Apr 13 20:09:15.567430 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 20:09:15.571801 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 20:09:15.576340 systemd[1]: Reloading requested from client PID 1271 ('systemctl') (unit ensure-sysext.service)... Apr 13 20:09:15.576361 systemd[1]: Reloading... Apr 13 20:09:15.617862 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 13 20:09:15.618210 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 13 20:09:15.621259 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 13 20:09:15.621526 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Apr 13 20:09:15.621606 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Apr 13 20:09:15.621719 systemd-udevd[1273]: Using default interface naming scheme 'v255'. Apr 13 20:09:15.628372 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 20:09:15.628384 systemd-tmpfiles[1272]: Skipping /boot Apr 13 20:09:15.643513 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 20:09:15.643563 systemd-tmpfiles[1272]: Skipping /boot Apr 13 20:09:15.676287 zram_generator::config[1298]: No configuration found. Apr 13 20:09:15.839299 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 13 20:09:15.856619 kernel: ACPI: button: Power Button [PWRF] Apr 13 20:09:15.856664 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 13 20:09:15.862931 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 13 20:09:15.863184 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 13 20:09:15.862911 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:09:15.922696 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 13 20:09:15.923966 systemd[1]: Reloading finished in 347 ms. Apr 13 20:09:15.939344 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 13 20:09:15.952949 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 20:09:15.954801 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Apr 13 20:09:15.979264 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1309)
Apr 13 20:09:15.998428 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 20:09:16.003925 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 13 20:09:16.010297 kernel: EDAC MC: Ver: 3.0.0
Apr 13 20:09:16.018490 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 13 20:09:16.023395 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 20:09:16.035423 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 20:09:16.043554 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 13 20:09:16.049295 kernel: mousedev: PS/2 mouse device common for all mice
Apr 13 20:09:16.076387 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:09:16.076753 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:09:16.083657 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:09:16.087018 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:09:16.095862 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:09:16.097517 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:09:16.101519 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 13 20:09:16.109507 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:09:16.110287 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:09:16.112041 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:09:16.112226 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:09:16.118230 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:09:16.118447 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:09:16.120907 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:09:16.121089 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:09:16.128564 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 13 20:09:16.133056 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 13 20:09:16.150037 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:09:16.150530 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:09:16.155515 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:09:16.159312 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:09:16.166472 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:09:16.167378 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:09:16.168555 augenrules[1406]: No rules
Apr 13 20:09:16.171491 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 13 20:09:16.178322 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 13 20:09:16.179064 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:09:16.181749 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 13 20:09:16.183250 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 20:09:16.187572 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 13 20:09:16.190647 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 13 20:09:16.193967 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:09:16.194126 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:09:16.195448 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:09:16.195602 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:09:16.197091 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:09:16.197301 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:09:16.215217 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:09:16.215637 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:09:16.222316 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 13 20:09:16.230329 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:09:16.237469 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 20:09:16.243945 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:09:16.251262 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 20:09:16.256625 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:09:16.257705 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:09:16.257836 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:09:16.260694 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 13 20:09:16.263671 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 13 20:09:16.265874 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 13 20:09:16.270853 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:09:16.271022 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:09:16.276939 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 13 20:09:16.278642 systemd[1]: Finished ensure-sysext.service.
Apr 13 20:09:16.279970 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 20:09:16.281330 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 20:09:16.282641 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:09:16.282810 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:09:16.284099 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:09:16.284476 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:09:16.300072 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:09:16.308661 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 13 20:09:16.311269 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 20:09:16.311344 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 20:09:16.320393 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 13 20:09:16.321162 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 20:09:16.350261 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 20:09:16.383317 systemd-networkd[1382]: lo: Link UP
Apr 13 20:09:16.383329 systemd-networkd[1382]: lo: Gained carrier
Apr 13 20:09:16.392312 systemd-networkd[1382]: Enumeration completed
Apr 13 20:09:16.392404 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 20:09:16.393007 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:09:16.393017 systemd-networkd[1382]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 20:09:16.397379 systemd-networkd[1382]: eth0: Link UP
Apr 13 20:09:16.397387 systemd-networkd[1382]: eth0: Gained carrier
Apr 13 20:09:16.397399 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:09:16.416208 systemd-resolved[1383]: Positive Trust Anchors:
Apr 13 20:09:16.416500 systemd-resolved[1383]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 20:09:16.416578 systemd-resolved[1383]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 20:09:16.436793 systemd-resolved[1383]: Defaulting to hostname 'linux'.
Apr 13 20:09:16.438202 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 13 20:09:16.440600 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:09:16.441924 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 20:09:16.444952 systemd[1]: Reached target network.target - Network.
Apr 13 20:09:16.445709 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:09:16.454433 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 13 20:09:16.455385 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 13 20:09:16.456486 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 20:09:16.457476 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 13 20:09:16.458316 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 13 20:09:16.459133 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 13 20:09:16.460208 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 13 20:09:16.460255 systemd[1]: Reached target paths.target - Path Units.
Apr 13 20:09:16.460959 systemd[1]: Reached target time-set.target - System Time Set.
Apr 13 20:09:16.461965 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 13 20:09:16.463028 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 13 20:09:16.463827 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 20:09:16.465163 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 13 20:09:16.467561 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 13 20:09:16.473408 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 13 20:09:16.475114 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 13 20:09:16.476030 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 20:09:16.477082 systemd[1]: Reached target basic.target - Basic System.
Apr 13 20:09:16.477880 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 13 20:09:16.477917 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 13 20:09:16.482331 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 13 20:09:16.484357 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 13 20:09:16.491443 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 13 20:09:16.494338 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 13 20:09:16.500396 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 13 20:09:16.503430 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 13 20:09:16.506431 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 13 20:09:16.510632 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 13 20:09:16.518437 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 13 20:09:16.523354 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 13 20:09:16.525487 jq[1456]: false
Apr 13 20:09:16.534440 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 13 20:09:16.536388 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 13 20:09:16.536909 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 13 20:09:16.540402 systemd[1]: Starting update-engine.service - Update Engine...
Apr 13 20:09:16.543386 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 13 20:09:16.552678 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 13 20:09:16.552880 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 13 20:09:16.566011 dbus-daemon[1455]: [system] SELinux support is enabled
Apr 13 20:09:16.570071 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 13 20:09:16.578530 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 13 20:09:16.578580 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 13 20:09:16.580618 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 13 20:09:16.580648 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 13 20:09:16.591849 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 13 20:09:16.592077 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 13 20:09:16.593824 (ntainerd)[1472]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 13 20:09:16.611555 jq[1466]: true
Apr 13 20:09:16.623271 tar[1468]: linux-amd64/LICENSE
Apr 13 20:09:16.623271 tar[1468]: linux-amd64/helm
Apr 13 20:09:16.623636 update_engine[1465]: I20260413 20:09:16.621757 1465 main.cc:92] Flatcar Update Engine starting
Apr 13 20:09:16.624087 systemd[1]: Started update-engine.service - Update Engine.
Apr 13 20:09:16.633428 update_engine[1465]: I20260413 20:09:16.624140 1465 update_check_scheduler.cc:74] Next update check in 5m59s
Apr 13 20:09:16.632430 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 13 20:09:16.652268 extend-filesystems[1457]: Found loop4
Apr 13 20:09:16.652268 extend-filesystems[1457]: Found loop5
Apr 13 20:09:16.652268 extend-filesystems[1457]: Found loop6
Apr 13 20:09:16.652268 extend-filesystems[1457]: Found loop7
Apr 13 20:09:16.652268 extend-filesystems[1457]: Found sda
Apr 13 20:09:16.652268 extend-filesystems[1457]: Found sda1
Apr 13 20:09:16.652268 extend-filesystems[1457]: Found sda2
Apr 13 20:09:16.652268 extend-filesystems[1457]: Found sda3
Apr 13 20:09:16.652268 extend-filesystems[1457]: Found usr
Apr 13 20:09:16.652268 extend-filesystems[1457]: Found sda4
Apr 13 20:09:16.652268 extend-filesystems[1457]: Found sda6
Apr 13 20:09:16.652268 extend-filesystems[1457]: Found sda7
Apr 13 20:09:16.652268 extend-filesystems[1457]: Found sda9
Apr 13 20:09:16.652268 extend-filesystems[1457]: Checking size of /dev/sda9
Apr 13 20:09:16.724443 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Apr 13 20:09:16.677756 systemd[1]: motdgen.service: Deactivated successfully.
Apr 13 20:09:16.724604 jq[1486]: true
Apr 13 20:09:16.724705 extend-filesystems[1457]: Resized partition /dev/sda9
Apr 13 20:09:16.729307 coreos-metadata[1454]: Apr 13 20:09:16.674 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Apr 13 20:09:16.678016 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 13 20:09:16.730839 extend-filesystems[1498]: resize2fs 1.47.1 (20-May-2024)
Apr 13 20:09:16.744050 systemd-logind[1464]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 13 20:09:16.744078 systemd-logind[1464]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 13 20:09:16.758109 systemd-logind[1464]: New seat seat0.
Apr 13 20:09:16.760576 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 13 20:09:16.805704 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1301)
Apr 13 20:09:16.853869 bash[1513]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 20:09:16.857866 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 13 20:09:16.887094 systemd[1]: Starting sshkeys.service...
Apr 13 20:09:16.924194 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 13 20:09:16.934568 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 13 20:09:17.012875 coreos-metadata[1523]: Apr 13 20:09:17.012 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Apr 13 20:09:17.024719 containerd[1472]: time="2026-04-13T20:09:17.024650600Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 13 20:09:17.035334 locksmithd[1488]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 13 20:09:17.049455 sshd_keygen[1490]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 13 20:09:17.067064 containerd[1472]: time="2026-04-13T20:09:17.066955070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:09:17.072466 containerd[1472]: time="2026-04-13T20:09:17.071929370Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:09:17.072466 containerd[1472]: time="2026-04-13T20:09:17.071960710Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 13 20:09:17.072466 containerd[1472]: time="2026-04-13T20:09:17.071983660Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 13 20:09:17.072466 containerd[1472]: time="2026-04-13T20:09:17.072195610Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 13 20:09:17.072466 containerd[1472]: time="2026-04-13T20:09:17.072227200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 13 20:09:17.072466 containerd[1472]: time="2026-04-13T20:09:17.072361720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:09:17.072466 containerd[1472]: time="2026-04-13T20:09:17.072391190Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:09:17.072941 containerd[1472]: time="2026-04-13T20:09:17.072919440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:09:17.074285 containerd[1472]: time="2026-04-13T20:09:17.073292740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 13 20:09:17.074285 containerd[1472]: time="2026-04-13T20:09:17.073315950Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:09:17.074285 containerd[1472]: time="2026-04-13T20:09:17.073328870Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 13 20:09:17.074285 containerd[1472]: time="2026-04-13T20:09:17.073453920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:09:17.074285 containerd[1472]: time="2026-04-13T20:09:17.073786730Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:09:17.074587 containerd[1472]: time="2026-04-13T20:09:17.074559170Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:09:17.075060 containerd[1472]: time="2026-04-13T20:09:17.075036300Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 13 20:09:17.075298 containerd[1472]: time="2026-04-13T20:09:17.075280360Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 13 20:09:17.075409 containerd[1472]: time="2026-04-13T20:09:17.075392700Z" level=info msg="metadata content store policy set" policy=shared
Apr 13 20:09:17.088649 containerd[1472]: time="2026-04-13T20:09:17.088625490Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 13 20:09:17.088741 containerd[1472]: time="2026-04-13T20:09:17.088726100Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 13 20:09:17.089254 containerd[1472]: time="2026-04-13T20:09:17.089210630Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 13 20:09:17.089340 containerd[1472]: time="2026-04-13T20:09:17.089318230Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 13 20:09:17.089447 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 13 20:09:17.090482 containerd[1472]: time="2026-04-13T20:09:17.090456500Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 13 20:09:17.090703 containerd[1472]: time="2026-04-13T20:09:17.090680510Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 13 20:09:17.093662 containerd[1472]: time="2026-04-13T20:09:17.091217560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 13 20:09:17.093883 containerd[1472]: time="2026-04-13T20:09:17.093859160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 13 20:09:17.093958 containerd[1472]: time="2026-04-13T20:09:17.093942050Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 13 20:09:17.094031 containerd[1472]: time="2026-04-13T20:09:17.094010390Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 13 20:09:17.094105 containerd[1472]: time="2026-04-13T20:09:17.094088690Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 13 20:09:17.094173 containerd[1472]: time="2026-04-13T20:09:17.094153140Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 13 20:09:17.096272 containerd[1472]: time="2026-04-13T20:09:17.094229600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 13 20:09:17.096373 containerd[1472]: time="2026-04-13T20:09:17.096346490Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 13 20:09:17.097312 containerd[1472]: time="2026-04-13T20:09:17.096424670Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 13 20:09:17.097312 containerd[1472]: time="2026-04-13T20:09:17.096453370Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 13 20:09:17.097312 containerd[1472]: time="2026-04-13T20:09:17.096475390Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 13 20:09:17.097312 containerd[1472]: time="2026-04-13T20:09:17.096491140Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 13 20:09:17.097312 containerd[1472]: time="2026-04-13T20:09:17.096521690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 13 20:09:17.097312 containerd[1472]: time="2026-04-13T20:09:17.096538140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 13 20:09:17.097312 containerd[1472]: time="2026-04-13T20:09:17.096549410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 13 20:09:17.097312 containerd[1472]: time="2026-04-13T20:09:17.096576240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 13 20:09:17.097312 containerd[1472]: time="2026-04-13T20:09:17.096596460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 13 20:09:17.097312 containerd[1472]: time="2026-04-13T20:09:17.096616230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 13 20:09:17.097312 containerd[1472]: time="2026-04-13T20:09:17.096633600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 13 20:09:17.097312 containerd[1472]: time="2026-04-13T20:09:17.096653850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 13 20:09:17.097312 containerd[1472]: time="2026-04-13T20:09:17.096675450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 13 20:09:17.097312 containerd[1472]: time="2026-04-13T20:09:17.096689190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 13 20:09:17.097638 containerd[1472]: time="2026-04-13T20:09:17.096700170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 13 20:09:17.097638 containerd[1472]: time="2026-04-13T20:09:17.096716990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 13 20:09:17.097638 containerd[1472]: time="2026-04-13T20:09:17.096734920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 13 20:09:17.097638 containerd[1472]: time="2026-04-13T20:09:17.096771730Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 13 20:09:17.097638 containerd[1472]: time="2026-04-13T20:09:17.096800190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 13 20:09:17.097638 containerd[1472]: time="2026-04-13T20:09:17.096812010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 13 20:09:17.097638 containerd[1472]: time="2026-04-13T20:09:17.096821980Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 13 20:09:17.097638 containerd[1472]: time="2026-04-13T20:09:17.096868560Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 13 20:09:17.097638 containerd[1472]: time="2026-04-13T20:09:17.096896020Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 13 20:09:17.097638 containerd[1472]: time="2026-04-13T20:09:17.096914110Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 13 20:09:17.097638 containerd[1472]: time="2026-04-13T20:09:17.096933160Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 13 20:09:17.097638 containerd[1472]: time="2026-04-13T20:09:17.096943580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 13 20:09:17.097638 containerd[1472]: time="2026-04-13T20:09:17.096954860Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 13 20:09:17.097638 containerd[1472]: time="2026-04-13T20:09:17.096964460Z" level=info msg="NRI interface is disabled by configuration."
Apr 13 20:09:17.097940 containerd[1472]: time="2026-04-13T20:09:17.096975380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 13 20:09:17.100439 containerd[1472]: time="2026-04-13T20:09:17.100375050Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 13 20:09:17.100654 containerd[1472]: time="2026-04-13T20:09:17.100632800Z" level=info msg="Connect containerd service"
Apr 13 20:09:17.101315 containerd[1472]: time="2026-04-13T20:09:17.100742080Z" level=info msg="using legacy CRI server"
Apr 13 20:09:17.101385 containerd[1472]: time="2026-04-13T20:09:17.101367500Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 13 20:09:17.101539 containerd[1472]: time="2026-04-13T20:09:17.101518220Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 13 20:09:17.133483 kernel: EXT4-fs (sda9): resized filesystem to 20360187
Apr 13 20:09:17.133532 containerd[1472]: time="2026-04-13T20:09:17.102375890Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 13 20:09:17.133532 containerd[1472]: time="2026-04-13T20:09:17.102695030Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 13 20:09:17.133532 containerd[1472]: time="2026-04-13T20:09:17.102766400Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 13 20:09:17.133532 containerd[1472]: time="2026-04-13T20:09:17.102841840Z" level=info msg="Start subscribing containerd event"
Apr 13 20:09:17.133532 containerd[1472]: time="2026-04-13T20:09:17.102898770Z" level=info msg="Start recovering state"
Apr 13 20:09:17.133532 containerd[1472]: time="2026-04-13T20:09:17.102970630Z" level=info msg="Start event monitor"
Apr 13 20:09:17.133532 containerd[1472]: time="2026-04-13T20:09:17.103017450Z" level=info msg="Start snapshots syncer"
Apr 13 20:09:17.133532 containerd[1472]: time="2026-04-13T20:09:17.103029990Z" level=info msg="Start cni network conf syncer for default"
Apr 13 20:09:17.133532 containerd[1472]: time="2026-04-13T20:09:17.103041750Z" level=info msg="Start streaming server"
Apr 13 20:09:17.133532 containerd[1472]: time="2026-04-13T20:09:17.103110700Z" level=info msg="containerd successfully booted in 0.081716s"
Apr 13 20:09:17.103404 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 13 20:09:17.104771 systemd[1]: Started containerd.service - containerd container runtime.
Apr 13 20:09:17.135918 extend-filesystems[1498]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 13 20:09:17.135918 extend-filesystems[1498]: old_desc_blocks = 1, new_desc_blocks = 10
Apr 13 20:09:17.135918 extend-filesystems[1498]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long.
Apr 13 20:09:17.145531 extend-filesystems[1457]: Resized filesystem in /dev/sda9
Apr 13 20:09:17.141591 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 13 20:09:17.141872 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 13 20:09:17.152002 systemd-networkd[1382]: eth0: DHCPv4 address 172.239.193.191/24, gateway 172.239.193.1 acquired from 23.213.15.243
Apr 13 20:09:17.152852 systemd[1]: issuegen.service: Deactivated successfully.
Apr 13 20:09:17.153196 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 13 20:09:17.156075 dbus-daemon[1455]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1382 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 13 20:09:17.158557 systemd-timesyncd[1445]: Network configuration changed, trying to establish connection.
Apr 13 20:09:17.162840 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 13 20:09:17.176288 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 13 20:09:17.183401 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 13 20:09:17.194120 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 13 20:09:17.198541 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 13 20:09:17.199529 systemd[1]: Reached target getty.target - Login Prompts.
Apr 13 20:09:17.271130 dbus-daemon[1455]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 13 20:09:17.271357 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 13 20:09:17.273024 dbus-daemon[1455]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1549 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 13 20:09:17.281612 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 13 20:09:17.292362 polkitd[1553]: Started polkitd version 121
Apr 13 20:09:17.295908 polkitd[1553]: Loading rules from directory /etc/polkit-1/rules.d
Apr 13 20:09:17.295962 polkitd[1553]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 13 20:09:17.298807 polkitd[1553]: Finished loading, compiling and executing 2 rules
Apr 13 20:09:17.299194 dbus-daemon[1455]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 13 20:09:17.299381 systemd[1]: Started polkit.service - Authorization Manager.
Apr 13 20:09:17.300307 polkitd[1553]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 13 20:09:17.312428 systemd-hostnamed[1549]: Hostname set to <172-239-193-191> (transient)
Apr 13 20:09:17.312901 systemd-resolved[1383]: System hostname changed to '172-239-193-191'.
Apr 13 20:09:18.736830 systemd-timesyncd[1445]: Contacted time server 99.28.14.242:123 (0.flatcar.pool.ntp.org).
Apr 13 20:09:18.737077 systemd-resolved[1383]: Clock change detected. Flushing caches.
Apr 13 20:09:18.737183 systemd-timesyncd[1445]: Initial clock synchronization to Mon 2026-04-13 20:09:18.736416 UTC.
Apr 13 20:09:18.812554 systemd-networkd[1382]: eth0: Gained IPv6LL
Apr 13 20:09:18.817460 tar[1468]: linux-amd64/README.md
Apr 13 20:09:18.817252 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 13 20:09:18.822515 systemd[1]: Reached target network-online.target - Network is Online.
Apr 13 20:09:18.830701 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:09:18.835691 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 13 20:09:18.837887 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 13 20:09:18.859294 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 13 20:09:19.043147 coreos-metadata[1454]: Apr 13 20:09:19.043 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Apr 13 20:09:19.140013 coreos-metadata[1454]: Apr 13 20:09:19.139 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
Apr 13 20:09:19.326592 coreos-metadata[1454]: Apr 13 20:09:19.326 INFO Fetch successful
Apr 13 20:09:19.326592 coreos-metadata[1454]: Apr 13 20:09:19.326 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
Apr 13 20:09:19.381131 coreos-metadata[1523]: Apr 13 20:09:19.380 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Apr 13 20:09:19.474312 coreos-metadata[1523]: Apr 13 20:09:19.474 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
Apr 13 20:09:19.586954 coreos-metadata[1454]: Apr 13 20:09:19.586 INFO Fetch successful
Apr 13 20:09:19.607328 coreos-metadata[1523]: Apr 13 20:09:19.606 INFO Fetch successful
Apr 13 20:09:19.631735 update-ssh-keys[1584]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 20:09:19.636684 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 13 20:09:19.647694 systemd[1]: Finished sshkeys.service.
Apr 13 20:09:19.710035 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 13 20:09:19.712088 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 13 20:09:19.798033 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:09:19.800741 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 13 20:09:19.805568 systemd[1]: Startup finished in 1.058s (kernel) + 8.797s (initrd) + 4.975s (userspace) = 14.831s.
Apr 13 20:09:19.843341 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:09:20.502649 kubelet[1608]: E0413 20:09:20.502565 1608 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:09:20.506574 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:09:20.506905 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:09:21.105026 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 13 20:09:21.111076 systemd[1]: Started sshd@0-172.239.193.191:22-50.85.169.122:47012.service - OpenSSH per-connection server daemon (50.85.169.122:47012).
Apr 13 20:09:21.827454 sshd[1620]: Accepted publickey for core from 50.85.169.122 port 47012 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:09:21.829343 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:09:21.840509 systemd-logind[1464]: New session 1 of user core.
Apr 13 20:09:21.841512 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 13 20:09:21.851297 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 13 20:09:21.865180 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 13 20:09:21.872719 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 13 20:09:21.885377 (systemd)[1624]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 13 20:09:21.995493 systemd[1624]: Queued start job for default target default.target.
Apr 13 20:09:22.001757 systemd[1624]: Created slice app.slice - User Application Slice.
Apr 13 20:09:22.001785 systemd[1624]: Reached target paths.target - Paths.
Apr 13 20:09:22.001800 systemd[1624]: Reached target timers.target - Timers.
Apr 13 20:09:22.003666 systemd[1624]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 13 20:09:22.025118 systemd[1624]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 13 20:09:22.025251 systemd[1624]: Reached target sockets.target - Sockets.
Apr 13 20:09:22.025267 systemd[1624]: Reached target basic.target - Basic System.
Apr 13 20:09:22.025308 systemd[1624]: Reached target default.target - Main User Target.
Apr 13 20:09:22.025346 systemd[1624]: Startup finished in 131ms.
Apr 13 20:09:22.025686 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 13 20:09:22.036680 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 13 20:09:22.550551 systemd[1]: Started sshd@1-172.239.193.191:22-50.85.169.122:47018.service - OpenSSH per-connection server daemon (50.85.169.122:47018).
Apr 13 20:09:23.269908 sshd[1635]: Accepted publickey for core from 50.85.169.122 port 47018 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:09:23.271484 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:09:23.276357 systemd-logind[1464]: New session 2 of user core.
Apr 13 20:09:23.291589 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 13 20:09:23.776561 sshd[1635]: pam_unix(sshd:session): session closed for user core
Apr 13 20:09:23.780124 systemd[1]: sshd@1-172.239.193.191:22-50.85.169.122:47018.service: Deactivated successfully.
Apr 13 20:09:23.782133 systemd[1]: session-2.scope: Deactivated successfully.
Apr 13 20:09:23.783603 systemd-logind[1464]: Session 2 logged out. Waiting for processes to exit.
Apr 13 20:09:23.785324 systemd-logind[1464]: Removed session 2.
Apr 13 20:09:23.903380 systemd[1]: Started sshd@2-172.239.193.191:22-50.85.169.122:47022.service - OpenSSH per-connection server daemon (50.85.169.122:47022).
Apr 13 20:09:24.608646 sshd[1642]: Accepted publickey for core from 50.85.169.122 port 47022 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:09:24.610573 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:09:24.615555 systemd-logind[1464]: New session 3 of user core.
Apr 13 20:09:24.620566 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 13 20:09:25.104604 sshd[1642]: pam_unix(sshd:session): session closed for user core
Apr 13 20:09:25.109413 systemd[1]: sshd@2-172.239.193.191:22-50.85.169.122:47022.service: Deactivated successfully.
Apr 13 20:09:25.114280 systemd[1]: session-3.scope: Deactivated successfully.
Apr 13 20:09:25.114890 systemd-logind[1464]: Session 3 logged out. Waiting for processes to exit.
Apr 13 20:09:25.115775 systemd-logind[1464]: Removed session 3.
Apr 13 20:09:25.229001 systemd[1]: Started sshd@3-172.239.193.191:22-50.85.169.122:47032.service - OpenSSH per-connection server daemon (50.85.169.122:47032).
Apr 13 20:09:25.939149 sshd[1649]: Accepted publickey for core from 50.85.169.122 port 47032 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:09:25.939941 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:09:25.945715 systemd-logind[1464]: New session 4 of user core.
Apr 13 20:09:25.951544 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 13 20:09:26.441119 sshd[1649]: pam_unix(sshd:session): session closed for user core
Apr 13 20:09:26.445119 systemd[1]: sshd@3-172.239.193.191:22-50.85.169.122:47032.service: Deactivated successfully.
Apr 13 20:09:26.447053 systemd[1]: session-4.scope: Deactivated successfully.
Apr 13 20:09:26.448367 systemd-logind[1464]: Session 4 logged out. Waiting for processes to exit.
Apr 13 20:09:26.449593 systemd-logind[1464]: Removed session 4.
Apr 13 20:09:26.565143 systemd[1]: Started sshd@4-172.239.193.191:22-50.85.169.122:47046.service - OpenSSH per-connection server daemon (50.85.169.122:47046).
Apr 13 20:09:27.271462 sshd[1656]: Accepted publickey for core from 50.85.169.122 port 47046 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:09:27.273162 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:09:27.278142 systemd-logind[1464]: New session 5 of user core.
Apr 13 20:09:27.284565 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 13 20:09:27.666932 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 13 20:09:27.667276 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:09:27.682408 sudo[1659]: pam_unix(sudo:session): session closed for user root
Apr 13 20:09:27.796961 sshd[1656]: pam_unix(sshd:session): session closed for user core
Apr 13 20:09:27.801466 systemd-logind[1464]: Session 5 logged out. Waiting for processes to exit.
Apr 13 20:09:27.802386 systemd[1]: sshd@4-172.239.193.191:22-50.85.169.122:47046.service: Deactivated successfully.
Apr 13 20:09:27.804389 systemd[1]: session-5.scope: Deactivated successfully.
Apr 13 20:09:27.805503 systemd-logind[1464]: Removed session 5.
Apr 13 20:09:27.924644 systemd[1]: Started sshd@5-172.239.193.191:22-50.85.169.122:47048.service - OpenSSH per-connection server daemon (50.85.169.122:47048).
Apr 13 20:09:28.628451 sshd[1664]: Accepted publickey for core from 50.85.169.122 port 47048 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:09:28.629773 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:09:28.636749 systemd-logind[1464]: New session 6 of user core.
Apr 13 20:09:28.642570 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 13 20:09:29.015489 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 13 20:09:29.015851 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:09:29.019879 sudo[1668]: pam_unix(sudo:session): session closed for user root
Apr 13 20:09:29.025854 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 13 20:09:29.026197 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:09:29.045638 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 13 20:09:29.047619 auditctl[1671]: No rules
Apr 13 20:09:29.049312 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 13 20:09:29.049634 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 13 20:09:29.051918 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 20:09:29.088929 augenrules[1689]: No rules
Apr 13 20:09:29.090556 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 20:09:29.091970 sudo[1667]: pam_unix(sudo:session): session closed for user root
Apr 13 20:09:29.206531 sshd[1664]: pam_unix(sshd:session): session closed for user core
Apr 13 20:09:29.212111 systemd[1]: sshd@5-172.239.193.191:22-50.85.169.122:47048.service: Deactivated successfully.
Apr 13 20:09:29.214763 systemd[1]: session-6.scope: Deactivated successfully.
Apr 13 20:09:29.215562 systemd-logind[1464]: Session 6 logged out. Waiting for processes to exit.
Apr 13 20:09:29.216886 systemd-logind[1464]: Removed session 6.
Apr 13 20:09:29.331344 systemd[1]: Started sshd@6-172.239.193.191:22-50.85.169.122:47058.service - OpenSSH per-connection server daemon (50.85.169.122:47058).
Apr 13 20:09:30.042732 sshd[1697]: Accepted publickey for core from 50.85.169.122 port 47058 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:09:30.043336 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:09:30.048527 systemd-logind[1464]: New session 7 of user core.
Apr 13 20:09:30.055576 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 13 20:09:30.431825 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 13 20:09:30.432491 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:09:30.711543 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 13 20:09:30.717615 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 13 20:09:30.720248 (dockerd)[1716]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 13 20:09:30.720575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:09:30.891601 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:09:30.903119 (kubelet)[1728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:09:30.957708 kubelet[1728]: E0413 20:09:30.957671 1728 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:09:30.965164 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:09:30.965357 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:09:31.007475 dockerd[1716]: time="2026-04-13T20:09:31.007232689Z" level=info msg="Starting up"
Apr 13 20:09:31.080328 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1032138483-merged.mount: Deactivated successfully.
Apr 13 20:09:31.088275 systemd[1]: var-lib-docker-metacopy\x2dcheck2036643927-merged.mount: Deactivated successfully.
Apr 13 20:09:31.110322 dockerd[1716]: time="2026-04-13T20:09:31.110293949Z" level=info msg="Loading containers: start."
Apr 13 20:09:31.221494 kernel: Initializing XFRM netlink socket
Apr 13 20:09:31.303444 systemd-networkd[1382]: docker0: Link UP
Apr 13 20:09:31.314864 dockerd[1716]: time="2026-04-13T20:09:31.314835749Z" level=info msg="Loading containers: done."
Apr 13 20:09:31.328990 dockerd[1716]: time="2026-04-13T20:09:31.328952459Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 13 20:09:31.329135 dockerd[1716]: time="2026-04-13T20:09:31.329033809Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 13 20:09:31.329190 dockerd[1716]: time="2026-04-13T20:09:31.329140619Z" level=info msg="Daemon has completed initialization"
Apr 13 20:09:31.358978 dockerd[1716]: time="2026-04-13T20:09:31.358935899Z" level=info msg="API listen on /run/docker.sock"
Apr 13 20:09:31.359312 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 13 20:09:31.829229 containerd[1472]: time="2026-04-13T20:09:31.829146759Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\""
Apr 13 20:09:32.075224 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2488298452-merged.mount: Deactivated successfully.
Apr 13 20:09:32.512811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3379609801.mount: Deactivated successfully.
Apr 13 20:09:33.655113 containerd[1472]: time="2026-04-13T20:09:33.655068999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:33.656353 containerd[1472]: time="2026-04-13T20:09:33.656321669Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.6: active requests=0, bytes read=26947748"
Apr 13 20:09:33.658442 containerd[1472]: time="2026-04-13T20:09:33.657003309Z" level=info msg="ImageCreate event name:\"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:33.660013 containerd[1472]: time="2026-04-13T20:09:33.659983299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:33.665116 containerd[1472]: time="2026-04-13T20:09:33.664795169Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.6\" with image id \"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\", size \"26944341\" in 1.83556021s"
Apr 13 20:09:33.665748 containerd[1472]: time="2026-04-13T20:09:33.665586059Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\" returns image reference \"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\""
Apr 13 20:09:33.666594 containerd[1472]: time="2026-04-13T20:09:33.666573249Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\""
Apr 13 20:09:34.946957 containerd[1472]: time="2026-04-13T20:09:34.946919449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:34.948046 containerd[1472]: time="2026-04-13T20:09:34.947987779Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.6: active requests=0, bytes read=21165818"
Apr 13 20:09:34.948483 containerd[1472]: time="2026-04-13T20:09:34.948444169Z" level=info msg="ImageCreate event name:\"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:34.953443 containerd[1472]: time="2026-04-13T20:09:34.951831039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:34.956592 containerd[1472]: time="2026-04-13T20:09:34.956554649Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.6\" with image id \"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\", size \"22695997\" in 1.28994998s"
Apr 13 20:09:34.956718 containerd[1472]: time="2026-04-13T20:09:34.956697519Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\" returns image reference \"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\""
Apr 13 20:09:34.957474 containerd[1472]: time="2026-04-13T20:09:34.957408249Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\""
Apr 13 20:09:36.058490 containerd[1472]: time="2026-04-13T20:09:36.057094469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:36.059149 containerd[1472]: time="2026-04-13T20:09:36.059099199Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.6: active requests=0, bytes read=15729853"
Apr 13 20:09:36.059648 containerd[1472]: time="2026-04-13T20:09:36.059605589Z" level=info msg="ImageCreate event name:\"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:36.063267 containerd[1472]: time="2026-04-13T20:09:36.063223969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:36.064762 containerd[1472]: time="2026-04-13T20:09:36.064734049Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.6\" with image id \"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\", size \"17260050\" in 1.10728069s"
Apr 13 20:09:36.064830 containerd[1472]: time="2026-04-13T20:09:36.064814829Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\" returns image reference \"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\""
Apr 13 20:09:36.066665 containerd[1472]: time="2026-04-13T20:09:36.066638019Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\""
Apr 13
20:09:37.132675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3503943595.mount: Deactivated successfully. Apr 13 20:09:37.462849 containerd[1472]: time="2026-04-13T20:09:37.462728029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:37.463973 containerd[1472]: time="2026-04-13T20:09:37.463937459Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.6: active requests=0, bytes read=25861780" Apr 13 20:09:37.464661 containerd[1472]: time="2026-04-13T20:09:37.464627959Z" level=info msg="ImageCreate event name:\"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:37.466634 containerd[1472]: time="2026-04-13T20:09:37.466611409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:37.467546 containerd[1472]: time="2026-04-13T20:09:37.467522359Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.6\" with image id \"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\", size \"25860793\" in 1.40078787s" Apr 13 20:09:37.467625 containerd[1472]: time="2026-04-13T20:09:37.467608509Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\" returns image reference \"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\"" Apr 13 20:09:37.468414 containerd[1472]: time="2026-04-13T20:09:37.468382249Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Apr 13 20:09:38.066741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3710624477.mount: Deactivated successfully. 
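Mount unit names like var-lib-containerd-tmpmounts-containerd\x2dmount3710624477.mount above follow systemd's path-escaping rules: '/' becomes '-', and bytes outside [a-zA-Z0-9:_.] (including a literal '-') are emitted as \xNN escapes. A simplified sketch of that transformation (see systemd.unit(5) for the exact rules around leading dots):

    def systemd_escape_path(path: str) -> str:
        # '/' separates components and is rendered as '-';
        # other disallowed characters become \xNN (so '-' -> \x2d)
        parts = [p for p in path.split("/") if p]
        escaped = []
        for part in parts:
            chunk = []
            for ch in part:
                if ch.isalnum() or ch in ":_.":
                    chunk.append(ch)
                else:
                    chunk.append(f"\\x{ord(ch):02x}")
            escaped.append("".join(chunk))
        return "-".join(escaped)

    assert systemd_escape_path(
        "/var/lib/containerd/tmpmounts/containerd-mount3710624477"
    ) == "var-lib-containerd-tmpmounts-containerd\\x2dmount3710624477"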
Apr 13 20:09:38.879159 containerd[1472]: time="2026-04-13T20:09:38.877367149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:38.879159 containerd[1472]: time="2026-04-13T20:09:38.879032159Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388013" Apr 13 20:09:38.879159 containerd[1472]: time="2026-04-13T20:09:38.879108259Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:38.883225 containerd[1472]: time="2026-04-13T20:09:38.883187919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:38.884795 containerd[1472]: time="2026-04-13T20:09:38.884762729Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.41620899s" Apr 13 20:09:38.884909 containerd[1472]: time="2026-04-13T20:09:38.884885379Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Apr 13 20:09:38.886283 containerd[1472]: time="2026-04-13T20:09:38.886255009Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 13 20:09:39.452476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2169131780.mount: Deactivated successfully. 
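As a side note, the "bytes read" and duration fields in the pull entries give a rough effective pull rate; for the coredns image above that works out to about 15.8 MB/s:

    # 22388013 bytes read in 1.41620899s, from the coredns entry above
    bytes_read = 22_388_013
    seconds = 1.41620899
    print(f"{bytes_read / seconds / 1e6:.1f} MB/s")  # ~15.8 MB/s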
Apr 13 20:09:39.456510 containerd[1472]: time="2026-04-13T20:09:39.456468229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:39.457501 containerd[1472]: time="2026-04-13T20:09:39.457459369Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321224" Apr 13 20:09:39.458135 containerd[1472]: time="2026-04-13T20:09:39.458075869Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:39.462045 containerd[1472]: time="2026-04-13T20:09:39.461998619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:39.463053 containerd[1472]: time="2026-04-13T20:09:39.462741279Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 576.45335ms" Apr 13 20:09:39.463053 containerd[1472]: time="2026-04-13T20:09:39.462770399Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 13 20:09:39.463792 containerd[1472]: time="2026-04-13T20:09:39.463768029Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Apr 13 20:09:40.066690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1179431222.mount: Deactivated successfully. Apr 13 20:09:40.790011 containerd[1472]: time="2026-04-13T20:09:40.789948339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:40.792567 containerd[1472]: time="2026-04-13T20:09:40.792470829Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874237" Apr 13 20:09:40.793593 containerd[1472]: time="2026-04-13T20:09:40.793177089Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:40.796731 containerd[1472]: time="2026-04-13T20:09:40.796699389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:40.799252 containerd[1472]: time="2026-04-13T20:09:40.799220319Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.3354218s" Apr 13 20:09:40.799311 containerd[1472]: time="2026-04-13T20:09:40.799256199Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Apr 13 20:09:40.967618 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
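The restart counter reaching 2 fits the timestamps: the previous kubelet attempt failed at 20:09:30.965 and the next restart job was scheduled at 20:09:40.967, almost exactly ten seconds later, consistent with a Restart=on-failure unit using RestartSec=10 (an assumption; the unit file itself is not shown in this log):

    from datetime import datetime

    t_fail  = datetime.strptime("20:09:30.965164", "%H:%M:%S.%f")
    t_sched = datetime.strptime("20:09:40.967618", "%H:%M:%S.%f")
    print((t_sched - t_fail).total_seconds())  # ~10.00s between failure and restart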
Apr 13 20:09:40.978510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:09:41.148576 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:09:41.157888 (kubelet)[2090]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 20:09:41.194394 kubelet[2090]: E0413 20:09:41.194352 2090 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 20:09:41.198567 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 20:09:41.198764 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 20:09:43.737591 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:09:43.744632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:09:43.772501 systemd[1]: Reloading requested from client PID 2104 ('systemctl') (unit session-7.scope)... Apr 13 20:09:43.772518 systemd[1]: Reloading... Apr 13 20:09:43.924448 zram_generator::config[2150]: No configuration found. Apr 13 20:09:44.038412 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:09:44.114029 systemd[1]: Reloading finished in 341 ms. Apr 13 20:09:44.171804 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 13 20:09:44.172099 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 13 20:09:44.172398 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:09:44.175672 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:09:44.347900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:09:44.353869 (kubelet)[2198]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 20:09:44.390024 kubelet[2198]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 20:09:44.390024 kubelet[2198]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 20:09:44.390648 kubelet[2198]: I0413 20:09:44.390087 2198 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 20:09:44.774151 kubelet[2198]: I0413 20:09:44.774061 2198 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 13 20:09:44.774374 kubelet[2198]: I0413 20:09:44.774254 2198 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 20:09:44.776066 kubelet[2198]: I0413 20:09:44.776039 2198 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 13 20:09:44.776066 kubelet[2198]: I0413 20:09:44.776062 2198 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
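The nodeConfig dump that follows includes kubelet's default hard-eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, the inodesFree signals at 5%). A sketch of how such LessThan thresholds are evaluated, illustrative only (the real logic is kubelet's Go eviction manager):

    THRESHOLDS = {
        "memory.available":   ("quantity", 100 * 1024**2),  # 100Mi
        "nodefs.available":   ("percentage", 0.10),
        "nodefs.inodesFree":  ("percentage", 0.05),
        "imagefs.available":  ("percentage", 0.15),
        "imagefs.inodesFree": ("percentage", 0.05),
    }

    def breached(signal: str, observed: float, capacity: float) -> bool:
        kind, value = THRESHOLDS[signal]
        limit = value if kind == "quantity" else value * capacity
        return observed < limit  # Operator: LessThan in the dump below

    # e.g. 80Mi of free memory breaches the 100Mi threshold on any node size
    print(breached("memory.available", 80 * 1024**2, 4 * 1024**3))  # True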
Apr 13 20:09:44.776310 kubelet[2198]: I0413 20:09:44.776284 2198 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 20:09:44.781216 kubelet[2198]: E0413 20:09:44.781181 2198 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.239.193.191:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.239.193.191:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 20:09:44.781665 kubelet[2198]: I0413 20:09:44.781646 2198 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 20:09:44.786503 kubelet[2198]: E0413 20:09:44.786481 2198 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 20:09:44.786566 kubelet[2198]: I0413 20:09:44.786559 2198 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 13 20:09:44.790921 kubelet[2198]: I0413 20:09:44.790901 2198 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 13 20:09:44.794287 kubelet[2198]: I0413 20:09:44.792859 2198 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 20:09:44.794287 kubelet[2198]: I0413 20:09:44.792884 2198 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-193-191","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 20:09:44.794287 kubelet[2198]: I0413 20:09:44.793098 2198 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 20:09:44.794287 kubelet[2198]: I0413 20:09:44.793108 2198 container_manager_linux.go:306] "Creating device plugin manager" Apr 13 20:09:44.794639 kubelet[2198]: I0413 20:09:44.793199 2198 container_manager_linux.go:315] "Creating Dynamic Resource 
Allocation (DRA) manager" Apr 13 20:09:44.795353 kubelet[2198]: I0413 20:09:44.795338 2198 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:09:44.795547 kubelet[2198]: I0413 20:09:44.795529 2198 kubelet.go:475] "Attempting to sync node with API server" Apr 13 20:09:44.795547 kubelet[2198]: I0413 20:09:44.795543 2198 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 20:09:44.795615 kubelet[2198]: I0413 20:09:44.795562 2198 kubelet.go:387] "Adding apiserver pod source" Apr 13 20:09:44.795615 kubelet[2198]: I0413 20:09:44.795575 2198 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 20:09:44.797525 kubelet[2198]: E0413 20:09:44.797490 2198 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.239.193.191:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.239.193.191:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 20:09:44.797761 kubelet[2198]: E0413 20:09:44.797720 2198 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.239.193.191:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-193-191&limit=500&resourceVersion=0\": dial tcp 172.239.193.191:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 20:09:44.799461 kubelet[2198]: I0413 20:09:44.797996 2198 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 20:09:44.799461 kubelet[2198]: I0413 20:09:44.798527 2198 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 20:09:44.799461 kubelet[2198]: I0413 20:09:44.798552 2198 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 13 20:09:44.799461 kubelet[2198]: W0413 20:09:44.798621 2198 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 13 20:09:44.801968 kubelet[2198]: I0413 20:09:44.801935 2198 server.go:1262] "Started kubelet" Apr 13 20:09:44.804443 kubelet[2198]: I0413 20:09:44.803363 2198 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 20:09:44.805734 kubelet[2198]: I0413 20:09:44.805707 2198 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 20:09:44.807336 kubelet[2198]: I0413 20:09:44.807313 2198 server.go:310] "Adding debug handlers to kubelet server" Apr 13 20:09:44.810770 kubelet[2198]: I0413 20:09:44.810707 2198 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 20:09:44.810770 kubelet[2198]: I0413 20:09:44.810755 2198 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 13 20:09:44.810955 kubelet[2198]: I0413 20:09:44.810935 2198 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 20:09:44.813041 kubelet[2198]: I0413 20:09:44.813017 2198 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 13 20:09:44.814073 kubelet[2198]: I0413 20:09:44.814057 2198 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 20:09:44.817559 kubelet[2198]: E0413 20:09:44.816500 2198 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.239.193.191:6443/api/v1/namespaces/default/events\": dial tcp 172.239.193.191:6443: connect: connection refused" event="&Event{ObjectMeta:{172-239-193-191.18a60387cac9974d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-239-193-191,UID:172-239-193-191,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-239-193-191,},FirstTimestamp:2026-04-13 20:09:44.801916749 +0000 UTC m=+0.443335181,LastTimestamp:2026-04-13 20:09:44.801916749 +0000 UTC m=+0.443335181,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-239-193-191,}" Apr 13 20:09:44.817661 kubelet[2198]: I0413 20:09:44.817640 2198 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 13 20:09:44.817753 kubelet[2198]: I0413 20:09:44.817654 2198 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 13 20:09:44.817801 kubelet[2198]: E0413 20:09:44.817757 2198 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-193-191\" not found" Apr 13 20:09:44.817871 kubelet[2198]: I0413 20:09:44.817860 2198 reconciler.go:29] "Reconciler: start to sync state" Apr 13 20:09:44.818165 kubelet[2198]: E0413 20:09:44.818145 2198 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.193.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-193-191?timeout=10s\": dial tcp 172.239.193.191:6443: connect: connection refused" interval="200ms" Apr 13 20:09:44.818645 kubelet[2198]: I0413 20:09:44.818451 2198 factory.go:223] Registration of the systemd container factory successfully Apr 13 20:09:44.818645 kubelet[2198]: I0413 20:09:44.818521 2198 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 20:09:44.820349 kubelet[2198]: E0413 20:09:44.820334 2198 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 20:09:44.820788 kubelet[2198]: I0413 20:09:44.820774 2198 factory.go:223] Registration of the containerd container factory successfully Apr 13 20:09:44.838407 kubelet[2198]: I0413 20:09:44.838369 2198 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 13 20:09:44.838407 kubelet[2198]: I0413 20:09:44.838396 2198 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 13 20:09:44.838407 kubelet[2198]: I0413 20:09:44.838414 2198 kubelet.go:2428] "Starting kubelet main sync loop" Apr 13 20:09:44.838603 kubelet[2198]: E0413 20:09:44.838505 2198 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 20:09:44.840950 kubelet[2198]: E0413 20:09:44.840847 2198 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.239.193.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.239.193.191:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 20:09:44.841020 kubelet[2198]: E0413 20:09:44.840998 2198 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.239.193.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.239.193.191:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 20:09:44.850296 kubelet[2198]: I0413 20:09:44.850281 2198 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 20:09:44.850373 kubelet[2198]: I0413 20:09:44.850362 2198 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 20:09:44.850445 kubelet[2198]: I0413 20:09:44.850417 2198 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:09:44.852490 kubelet[2198]: I0413 20:09:44.852471 2198 policy_none.go:49] "None policy: Start" Apr 13 20:09:44.852490 kubelet[2198]: I0413 20:09:44.852490 2198 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 13 20:09:44.852570 kubelet[2198]: I0413 20:09:44.852502 2198 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 13 20:09:44.853369 kubelet[2198]: I0413 20:09:44.853354 2198 policy_none.go:47] "Start" Apr 13 20:09:44.858791 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 13 20:09:44.876408 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 13 20:09:44.887338 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 13 20:09:44.888922 kubelet[2198]: E0413 20:09:44.888904 2198 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 20:09:44.889377 kubelet[2198]: I0413 20:09:44.889363 2198 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 20:09:44.889488 kubelet[2198]: I0413 20:09:44.889460 2198 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 20:09:44.889717 kubelet[2198]: I0413 20:09:44.889703 2198 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 20:09:44.891534 kubelet[2198]: E0413 20:09:44.891513 2198 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 13 20:09:44.891615 kubelet[2198]: E0413 20:09:44.891545 2198 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-239-193-191\" not found" Apr 13 20:09:44.949367 systemd[1]: Created slice kubepods-burstable-poda7cee3b909b3b9a186bd141c2b14ccbc.slice - libcontainer container kubepods-burstable-poda7cee3b909b3b9a186bd141c2b14ccbc.slice. Apr 13 20:09:44.963524 kubelet[2198]: E0413 20:09:44.963304 2198 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-191\" not found" node="172-239-193-191" Apr 13 20:09:44.967100 systemd[1]: Created slice kubepods-burstable-podf7c20ec7074c40bff147c2ea83ddc093.slice - libcontainer container kubepods-burstable-podf7c20ec7074c40bff147c2ea83ddc093.slice. Apr 13 20:09:44.968937 kubelet[2198]: E0413 20:09:44.968777 2198 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-191\" not found" node="172-239-193-191" Apr 13 20:09:44.970836 systemd[1]: Created slice kubepods-burstable-podb2bbfc1a4b128494f0ba34701a17b826.slice - libcontainer container kubepods-burstable-podb2bbfc1a4b128494f0ba34701a17b826.slice. Apr 13 20:09:44.972589 kubelet[2198]: E0413 20:09:44.972574 2198 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-191\" not found" node="172-239-193-191" Apr 13 20:09:44.991884 kubelet[2198]: I0413 20:09:44.991585 2198 kubelet_node_status.go:75] "Attempting to register node" node="172-239-193-191" Apr 13 20:09:44.991884 kubelet[2198]: E0413 20:09:44.991856 2198 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.193.191:6443/api/v1/nodes\": dial tcp 172.239.193.191:6443: connect: connection refused" node="172-239-193-191" Apr 13 20:09:45.019165 kubelet[2198]: I0413 20:09:45.019140 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a7cee3b909b3b9a186bd141c2b14ccbc-kubeconfig\") pod \"kube-scheduler-172-239-193-191\" (UID: \"a7cee3b909b3b9a186bd141c2b14ccbc\") " pod="kube-system/kube-scheduler-172-239-193-191" Apr 13 20:09:45.019247 kubelet[2198]: I0413 20:09:45.019201 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f7c20ec7074c40bff147c2ea83ddc093-ca-certs\") pod \"kube-apiserver-172-239-193-191\" (UID: \"f7c20ec7074c40bff147c2ea83ddc093\") " pod="kube-system/kube-apiserver-172-239-193-191" Apr 13 20:09:45.019279 kubelet[2198]: I0413 20:09:45.019253 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f7c20ec7074c40bff147c2ea83ddc093-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-193-191\" (UID: \"f7c20ec7074c40bff147c2ea83ddc093\") " pod="kube-system/kube-apiserver-172-239-193-191" Apr 13 20:09:45.019315 kubelet[2198]: I0413 20:09:45.019281 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b2bbfc1a4b128494f0ba34701a17b826-flexvolume-dir\") pod \"kube-controller-manager-172-239-193-191\" (UID: \"b2bbfc1a4b128494f0ba34701a17b826\") " 
pod="kube-system/kube-controller-manager-172-239-193-191" Apr 13 20:09:45.019315 kubelet[2198]: I0413 20:09:45.019301 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b2bbfc1a4b128494f0ba34701a17b826-k8s-certs\") pod \"kube-controller-manager-172-239-193-191\" (UID: \"b2bbfc1a4b128494f0ba34701a17b826\") " pod="kube-system/kube-controller-manager-172-239-193-191" Apr 13 20:09:45.019366 kubelet[2198]: I0413 20:09:45.019327 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b2bbfc1a4b128494f0ba34701a17b826-kubeconfig\") pod \"kube-controller-manager-172-239-193-191\" (UID: \"b2bbfc1a4b128494f0ba34701a17b826\") " pod="kube-system/kube-controller-manager-172-239-193-191" Apr 13 20:09:45.019366 kubelet[2198]: I0413 20:09:45.019347 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b2bbfc1a4b128494f0ba34701a17b826-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-193-191\" (UID: \"b2bbfc1a4b128494f0ba34701a17b826\") " pod="kube-system/kube-controller-manager-172-239-193-191" Apr 13 20:09:45.019413 kubelet[2198]: I0413 20:09:45.019364 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f7c20ec7074c40bff147c2ea83ddc093-k8s-certs\") pod \"kube-apiserver-172-239-193-191\" (UID: \"f7c20ec7074c40bff147c2ea83ddc093\") " pod="kube-system/kube-apiserver-172-239-193-191" Apr 13 20:09:45.019413 kubelet[2198]: I0413 20:09:45.019378 2198 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b2bbfc1a4b128494f0ba34701a17b826-ca-certs\") pod \"kube-controller-manager-172-239-193-191\" (UID: \"b2bbfc1a4b128494f0ba34701a17b826\") " pod="kube-system/kube-controller-manager-172-239-193-191" Apr 13 20:09:45.019760 kubelet[2198]: E0413 20:09:45.019721 2198 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.193.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-193-191?timeout=10s\": dial tcp 172.239.193.191:6443: connect: connection refused" interval="400ms" Apr 13 20:09:45.194459 kubelet[2198]: I0413 20:09:45.194285 2198 kubelet_node_status.go:75] "Attempting to register node" node="172-239-193-191" Apr 13 20:09:45.196450 kubelet[2198]: E0413 20:09:45.194589 2198 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.193.191:6443/api/v1/nodes\": dial tcp 172.239.193.191:6443: connect: connection refused" node="172-239-193-191" Apr 13 20:09:45.265819 kubelet[2198]: E0413 20:09:45.265788 2198 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:45.266789 containerd[1472]: time="2026-04-13T20:09:45.266759729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-193-191,Uid:a7cee3b909b3b9a186bd141c2b14ccbc,Namespace:kube-system,Attempt:0,}" Apr 13 20:09:45.270305 kubelet[2198]: E0413 20:09:45.270143 2198 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:45.270509 containerd[1472]: time="2026-04-13T20:09:45.270473649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-193-191,Uid:f7c20ec7074c40bff147c2ea83ddc093,Namespace:kube-system,Attempt:0,}" Apr 13 20:09:45.275755 kubelet[2198]: E0413 20:09:45.275576 2198 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:45.275912 containerd[1472]: time="2026-04-13T20:09:45.275890669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-193-191,Uid:b2bbfc1a4b128494f0ba34701a17b826,Namespace:kube-system,Attempt:0,}" Apr 13 20:09:45.420764 kubelet[2198]: E0413 20:09:45.420714 2198 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.193.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-193-191?timeout=10s\": dial tcp 172.239.193.191:6443: connect: connection refused" interval="800ms" Apr 13 20:09:45.596668 kubelet[2198]: I0413 20:09:45.596550 2198 kubelet_node_status.go:75] "Attempting to register node" node="172-239-193-191" Apr 13 20:09:45.596937 kubelet[2198]: E0413 20:09:45.596898 2198 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.193.191:6443/api/v1/nodes\": dial tcp 172.239.193.191:6443: connect: connection refused" node="172-239-193-191" Apr 13 20:09:45.794551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount33550136.mount: Deactivated successfully. Apr 13 20:09:45.799767 containerd[1472]: time="2026-04-13T20:09:45.799725109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:09:45.800902 containerd[1472]: time="2026-04-13T20:09:45.800815949Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:09:45.801738 containerd[1472]: time="2026-04-13T20:09:45.801683719Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312062" Apr 13 20:09:45.802797 containerd[1472]: time="2026-04-13T20:09:45.802708689Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:09:45.803615 containerd[1472]: time="2026-04-13T20:09:45.803481969Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:09:45.808730 containerd[1472]: time="2026-04-13T20:09:45.808687849Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:09:45.811007 containerd[1472]: time="2026-04-13T20:09:45.810971239Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:09:45.812179 containerd[1472]: time="2026-04-13T20:09:45.812116729Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:09:45.815104 containerd[1472]: time="2026-04-13T20:09:45.814730919Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 544.19454ms" Apr 13 20:09:45.816052 containerd[1472]: time="2026-04-13T20:09:45.816027669Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 540.09315ms" Apr 13 20:09:45.816408 containerd[1472]: time="2026-04-13T20:09:45.816379189Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 549.54595ms" Apr 13 20:09:45.935276 kubelet[2198]: E0413 20:09:45.934879 2198 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.239.193.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.239.193.191:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 20:09:45.943916 containerd[1472]: time="2026-04-13T20:09:45.943519799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:45.943916 containerd[1472]: time="2026-04-13T20:09:45.943566819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:45.943916 containerd[1472]: time="2026-04-13T20:09:45.943590539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:45.943916 containerd[1472]: time="2026-04-13T20:09:45.943682189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:45.946873 containerd[1472]: time="2026-04-13T20:09:45.945756669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:45.954763 containerd[1472]: time="2026-04-13T20:09:45.951717019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:45.954763 containerd[1472]: time="2026-04-13T20:09:45.951771459Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:45.954763 containerd[1472]: time="2026-04-13T20:09:45.951791859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:45.954763 containerd[1472]: time="2026-04-13T20:09:45.951878849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:45.954928 containerd[1472]: time="2026-04-13T20:09:45.950457629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:45.954928 containerd[1472]: time="2026-04-13T20:09:45.950477709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:45.954928 containerd[1472]: time="2026-04-13T20:09:45.950564419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:45.975505 systemd[1]: Started cri-containerd-7044b3a669d0f2892e27e9da931ecc91ead524dfd5c5201db6607ba41da33998.scope - libcontainer container 7044b3a669d0f2892e27e9da931ecc91ead524dfd5c5201db6607ba41da33998. Apr 13 20:09:46.001606 systemd[1]: Started cri-containerd-f4262fc40b6181dd00a8d8bf4918eee7ee55e82ada322f2c7111434a3dd964d4.scope - libcontainer container f4262fc40b6181dd00a8d8bf4918eee7ee55e82ada322f2c7111434a3dd964d4. Apr 13 20:09:46.007013 systemd[1]: Started cri-containerd-a1ea079a85460812baa2fead36d23a13b105a7f1d22d8d9af86d7e6d386ee100.scope - libcontainer container a1ea079a85460812baa2fead36d23a13b105a7f1d22d8d9af86d7e6d386ee100. Apr 13 20:09:46.009807 kubelet[2198]: E0413 20:09:46.008334 2198 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.239.193.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.239.193.191:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 20:09:46.058934 containerd[1472]: time="2026-04-13T20:09:46.058898009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-193-191,Uid:f7c20ec7074c40bff147c2ea83ddc093,Namespace:kube-system,Attempt:0,} returns sandbox id \"7044b3a669d0f2892e27e9da931ecc91ead524dfd5c5201db6607ba41da33998\"" Apr 13 20:09:46.060394 kubelet[2198]: E0413 20:09:46.060201 2198 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:46.065620 containerd[1472]: time="2026-04-13T20:09:46.065000039Z" level=info msg="CreateContainer within sandbox \"7044b3a669d0f2892e27e9da931ecc91ead524dfd5c5201db6607ba41da33998\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 20:09:46.082986 containerd[1472]: time="2026-04-13T20:09:46.082956389Z" level=info msg="CreateContainer within sandbox \"7044b3a669d0f2892e27e9da931ecc91ead524dfd5c5201db6607ba41da33998\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"15ce1386653aecabcf4d5d7686c892eb5e40494c472dcd797147d43a1ada58d7\"" Apr 13 20:09:46.084481 containerd[1472]: time="2026-04-13T20:09:46.083661059Z" level=info msg="StartContainer for \"15ce1386653aecabcf4d5d7686c892eb5e40494c472dcd797147d43a1ada58d7\"" Apr 13 20:09:46.087090 containerd[1472]: time="2026-04-13T20:09:46.087068479Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-172-239-193-191,Uid:b2bbfc1a4b128494f0ba34701a17b826,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1ea079a85460812baa2fead36d23a13b105a7f1d22d8d9af86d7e6d386ee100\"" Apr 13 20:09:46.088096 kubelet[2198]: E0413 20:09:46.088075 2198 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:46.091115 containerd[1472]: time="2026-04-13T20:09:46.091085519Z" level=info msg="CreateContainer within sandbox \"a1ea079a85460812baa2fead36d23a13b105a7f1d22d8d9af86d7e6d386ee100\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 20:09:46.101062 containerd[1472]: time="2026-04-13T20:09:46.101020609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-193-191,Uid:a7cee3b909b3b9a186bd141c2b14ccbc,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4262fc40b6181dd00a8d8bf4918eee7ee55e82ada322f2c7111434a3dd964d4\"" Apr 13 20:09:46.101775 kubelet[2198]: E0413 20:09:46.101747 2198 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:46.116722 containerd[1472]: time="2026-04-13T20:09:46.116689959Z" level=info msg="CreateContainer within sandbox \"f4262fc40b6181dd00a8d8bf4918eee7ee55e82ada322f2c7111434a3dd964d4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 20:09:46.117107 containerd[1472]: time="2026-04-13T20:09:46.117072959Z" level=info msg="CreateContainer within sandbox \"a1ea079a85460812baa2fead36d23a13b105a7f1d22d8d9af86d7e6d386ee100\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8f5e0e4fd91ae99f9c8ea2b3726dd47c92b6724c567434f57d417c9b5edda7e8\"" Apr 13 20:09:46.117535 containerd[1472]: time="2026-04-13T20:09:46.117506369Z" level=info msg="StartContainer for \"8f5e0e4fd91ae99f9c8ea2b3726dd47c92b6724c567434f57d417c9b5edda7e8\"" Apr 13 20:09:46.119567 systemd[1]: Started cri-containerd-15ce1386653aecabcf4d5d7686c892eb5e40494c472dcd797147d43a1ada58d7.scope - libcontainer container 15ce1386653aecabcf4d5d7686c892eb5e40494c472dcd797147d43a1ada58d7. Apr 13 20:09:46.140178 containerd[1472]: time="2026-04-13T20:09:46.140133469Z" level=info msg="CreateContainer within sandbox \"f4262fc40b6181dd00a8d8bf4918eee7ee55e82ada322f2c7111434a3dd964d4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"92d5eeab8b204183522a83a2d4450cce0bb3dfa76f864e2982636a4b2cc64028\"" Apr 13 20:09:46.141635 containerd[1472]: time="2026-04-13T20:09:46.140763799Z" level=info msg="StartContainer for \"92d5eeab8b204183522a83a2d4450cce0bb3dfa76f864e2982636a4b2cc64028\"" Apr 13 20:09:46.159402 kubelet[2198]: E0413 20:09:46.159375 2198 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.239.193.191:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-193-191&limit=500&resourceVersion=0\": dial tcp 172.239.193.191:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 20:09:46.159560 systemd[1]: Started cri-containerd-8f5e0e4fd91ae99f9c8ea2b3726dd47c92b6724c567434f57d417c9b5edda7e8.scope - libcontainer container 8f5e0e4fd91ae99f9c8ea2b3726dd47c92b6724c567434f57d417c9b5edda7e8. 
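The recurring dns.go "Nameserver limits exceeded" warnings reflect a glibc resolver limit: resolv.conf honors at most three nameserver entries (MAXNS), so kubelet applies only the first three and reports the rest as omitted. A sketch of the truncation it is describing (the fourth entry here is hypothetical; the log does not show which servers were dropped):

    MAXNS = 3  # glibc resolver limit on nameserver entries

    def applied_nameservers(resolv_conf: str) -> list[str]:
        servers = []
        for line in resolv_conf.splitlines():
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                servers.append(fields[1])
        return servers[:MAXNS]  # anything past the third is dropped

    print(applied_nameservers(
        "nameserver 172.232.0.16\nnameserver 172.232.0.21\n"
        "nameserver 172.232.0.13\nnameserver 192.0.2.1\n"  # hypothetical 4th
    ))  # ['172.232.0.16', '172.232.0.21', '172.232.0.13']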
Apr 13 20:09:46.185851 systemd[1]: Started cri-containerd-92d5eeab8b204183522a83a2d4450cce0bb3dfa76f864e2982636a4b2cc64028.scope - libcontainer container 92d5eeab8b204183522a83a2d4450cce0bb3dfa76f864e2982636a4b2cc64028. Apr 13 20:09:46.221787 kubelet[2198]: E0413 20:09:46.221731 2198 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.193.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-193-191?timeout=10s\": dial tcp 172.239.193.191:6443: connect: connection refused" interval="1.6s" Apr 13 20:09:46.222468 containerd[1472]: time="2026-04-13T20:09:46.222228569Z" level=info msg="StartContainer for \"15ce1386653aecabcf4d5d7686c892eb5e40494c472dcd797147d43a1ada58d7\" returns successfully" Apr 13 20:09:46.243158 containerd[1472]: time="2026-04-13T20:09:46.242030879Z" level=info msg="StartContainer for \"8f5e0e4fd91ae99f9c8ea2b3726dd47c92b6724c567434f57d417c9b5edda7e8\" returns successfully" Apr 13 20:09:46.280618 containerd[1472]: time="2026-04-13T20:09:46.280487819Z" level=info msg="StartContainer for \"92d5eeab8b204183522a83a2d4450cce0bb3dfa76f864e2982636a4b2cc64028\" returns successfully" Apr 13 20:09:46.294749 kubelet[2198]: E0413 20:09:46.294681 2198 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.239.193.191:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.239.193.191:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 20:09:46.399470 kubelet[2198]: I0413 20:09:46.398798 2198 kubelet_node_status.go:75] "Attempting to register node" node="172-239-193-191" Apr 13 20:09:46.860033 kubelet[2198]: E0413 20:09:46.859856 2198 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-191\" not found" node="172-239-193-191" Apr 13 20:09:46.864338 kubelet[2198]: E0413 20:09:46.863664 2198 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:46.866731 kubelet[2198]: E0413 20:09:46.866702 2198 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-191\" not found" node="172-239-193-191" Apr 13 20:09:46.867660 kubelet[2198]: E0413 20:09:46.867641 2198 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:46.871441 kubelet[2198]: E0413 20:09:46.870389 2198 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-191\" not found" node="172-239-193-191" Apr 13 20:09:46.871894 kubelet[2198]: E0413 20:09:46.871879 2198 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:47.868518 kubelet[2198]: E0413 20:09:47.866903 2198 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-191\" not found" node="172-239-193-191" Apr 13 20:09:47.868518 kubelet[2198]: E0413 20:09:47.867019 2198 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:47.870415 kubelet[2198]: E0413 20:09:47.869858 2198 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-191\" not found" node="172-239-193-191" Apr 13 20:09:47.870415 kubelet[2198]: E0413 20:09:47.870346 2198 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:47.973730 kubelet[2198]: E0413 20:09:47.973686 2198 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-239-193-191\" not found" node="172-239-193-191" Apr 13 20:09:48.074593 kubelet[2198]: I0413 20:09:48.074467 2198 kubelet_node_status.go:78] "Successfully registered node" node="172-239-193-191" Apr 13 20:09:48.074593 kubelet[2198]: E0413 20:09:48.074495 2198 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"172-239-193-191\": node \"172-239-193-191\" not found" Apr 13 20:09:48.089686 kubelet[2198]: E0413 20:09:48.089664 2198 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-193-191\" not found" Apr 13 20:09:48.133747 kubelet[2198]: E0413 20:09:48.133404 2198 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{172-239-193-191.18a60387cac9974d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-239-193-191,UID:172-239-193-191,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-239-193-191,},FirstTimestamp:2026-04-13 20:09:44.801916749 +0000 UTC m=+0.443335181,LastTimestamp:2026-04-13 20:09:44.801916749 +0000 UTC m=+0.443335181,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-239-193-191,}" Apr 13 20:09:48.190346 kubelet[2198]: E0413 20:09:48.190293 2198 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-193-191\" not found" Apr 13 20:09:48.290593 kubelet[2198]: E0413 20:09:48.290553 2198 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-193-191\" not found" Apr 13 20:09:48.391652 kubelet[2198]: E0413 20:09:48.391531 2198 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-193-191\" not found" Apr 13 20:09:48.492397 kubelet[2198]: E0413 20:09:48.492337 2198 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-193-191\" not found" Apr 13 20:09:48.593373 kubelet[2198]: E0413 20:09:48.593331 2198 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-193-191\" not found" Apr 13 20:09:48.694570 kubelet[2198]: E0413 20:09:48.694455 2198 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-193-191\" not found" Apr 13 20:09:48.703869 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
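The lease controller's retry intervals earlier in the log (200ms, then 400ms, 800ms, 1.6s) double on each failed attempt while the API server at 172.239.193.191:6443 is still refusing connections: standard exponential backoff. A minimal sketch of that interval sequence (not client-go's actual implementation):

    def backoff_intervals(base=0.2, factor=2.0, steps=4):
        interval = base
        for _ in range(steps):
            yield interval
            interval *= factor

    print([f"{d:g}s" for d in backoff_intervals()])
    # ['0.2s', '0.4s', '0.8s', '1.6s'] -- matching the intervals in the log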
Apr 13 20:09:48.718106 kubelet[2198]: I0413 20:09:48.718037 2198 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-193-191" Apr 13 20:09:48.726371 kubelet[2198]: E0413 20:09:48.726338 2198 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-193-191\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-239-193-191" Apr 13 20:09:48.726371 kubelet[2198]: I0413 20:09:48.726371 2198 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-193-191" Apr 13 20:09:48.730434 kubelet[2198]: E0413 20:09:48.729257 2198 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-239-193-191\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-239-193-191" Apr 13 20:09:48.730434 kubelet[2198]: I0413 20:09:48.729281 2198 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-193-191" Apr 13 20:09:48.731741 kubelet[2198]: E0413 20:09:48.731717 2198 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-193-191\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-239-193-191" Apr 13 20:09:48.798806 kubelet[2198]: I0413 20:09:48.798775 2198 apiserver.go:52] "Watching apiserver" Apr 13 20:09:48.818503 kubelet[2198]: I0413 20:09:48.818476 2198 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 20:09:49.808364 systemd[1]: Reloading requested from client PID 2489 ('systemctl') (unit session-7.scope)... Apr 13 20:09:49.808382 systemd[1]: Reloading... Apr 13 20:09:49.889448 zram_generator::config[2528]: No configuration found. Apr 13 20:09:50.025862 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:09:50.117012 systemd[1]: Reloading finished in 308 ms. Apr 13 20:09:50.165358 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:09:50.184674 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 20:09:50.184962 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:09:50.190680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:09:50.358051 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:09:50.368992 (kubelet)[2578]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 20:09:50.406990 kubelet[2578]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 20:09:50.407352 kubelet[2578]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
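Unlike the earlier run, whose bootstrap CSR POST failed with connection refused, the restarted kubelet below loads an existing client certificate from /var/lib/kubelet/pki/kubelet-client-current.pem and can skip the bootstrap flow. A sketch of that decision (the helper name is hypothetical; the real logic is kubelet's certificate store):

    import os

    CERT = "/var/lib/kubelet/pki/kubelet-client-current.pem"

    def needs_bootstrap() -> bool:
        # hypothetical helper: submit a bootstrap CSR only when no
        # client certificate has been issued yet
        return not os.path.exists(CERT)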
Apr 13 20:09:50.407509 kubelet[2578]: I0413 20:09:50.407481 2578 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 20:09:50.415251 kubelet[2578]: I0413 20:09:50.415231 2578 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 13 20:09:50.415354 kubelet[2578]: I0413 20:09:50.415343 2578 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 20:09:50.415418 kubelet[2578]: I0413 20:09:50.415409 2578 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 13 20:09:50.415497 kubelet[2578]: I0413 20:09:50.415486 2578 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 13 20:09:50.415691 kubelet[2578]: I0413 20:09:50.415678 2578 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 20:09:50.416729 kubelet[2578]: I0413 20:09:50.416714 2578 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 20:09:50.419446 kubelet[2578]: I0413 20:09:50.419414 2578 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 20:09:50.423726 kubelet[2578]: E0413 20:09:50.423697 2578 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 20:09:50.423829 kubelet[2578]: I0413 20:09:50.423746 2578 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 13 20:09:50.430550 kubelet[2578]: I0413 20:09:50.430503 2578 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 13 20:09:50.430761 kubelet[2578]: I0413 20:09:50.430724 2578 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 20:09:50.430916 kubelet[2578]: I0413 20:09:50.430758 2578 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-193-191","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 20:09:50.430991 kubelet[2578]: I0413 20:09:50.430915 2578 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 20:09:50.430991 kubelet[2578]: I0413 20:09:50.430924 2578 container_manager_linux.go:306] "Creating device plugin manager" Apr 13 20:09:50.430991 kubelet[2578]: I0413 20:09:50.430954 2578 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 13 20:09:50.431138 kubelet[2578]: I0413 20:09:50.431123 2578 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:09:50.431305 kubelet[2578]: I0413 20:09:50.431291 2578 kubelet.go:475] "Attempting to sync node with API server" Apr 13 20:09:50.431333 kubelet[2578]: I0413 20:09:50.431315 2578 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 20:09:50.431371 kubelet[2578]: I0413 20:09:50.431335 2578 kubelet.go:387] "Adding apiserver pod source" Apr 13 20:09:50.431371 kubelet[2578]: I0413 20:09:50.431358 2578 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 20:09:50.434514 kubelet[2578]: I0413 20:09:50.434484 2578 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 20:09:50.435174 kubelet[2578]: I0413 20:09:50.435153 2578 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 20:09:50.435266 kubelet[2578]: I0413 20:09:50.435254 2578 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 13 20:09:50.438557 
kubelet[2578]: I0413 20:09:50.438543 2578 server.go:1262] "Started kubelet" Apr 13 20:09:50.442071 kubelet[2578]: I0413 20:09:50.441289 2578 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 20:09:50.442311 kubelet[2578]: I0413 20:09:50.442277 2578 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 13 20:09:50.442657 kubelet[2578]: I0413 20:09:50.442636 2578 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 20:09:50.446275 kubelet[2578]: I0413 20:09:50.445587 2578 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 20:09:50.454442 kubelet[2578]: I0413 20:09:50.453480 2578 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 20:09:50.454523 kubelet[2578]: I0413 20:09:50.454473 2578 server.go:310] "Adding debug handlers to kubelet server" Apr 13 20:09:50.455227 kubelet[2578]: I0413 20:09:50.455207 2578 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 20:09:50.458807 kubelet[2578]: I0413 20:09:50.457175 2578 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 13 20:09:50.458807 kubelet[2578]: E0413 20:09:50.457296 2578 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-193-191\" not found" Apr 13 20:09:50.458807 kubelet[2578]: I0413 20:09:50.457806 2578 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 13 20:09:50.458807 kubelet[2578]: I0413 20:09:50.457978 2578 reconciler.go:29] "Reconciler: start to sync state" Apr 13 20:09:50.462237 kubelet[2578]: I0413 20:09:50.462182 2578 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 13 20:09:50.474831 kubelet[2578]: E0413 20:09:50.474781 2578 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 20:09:50.475543 kubelet[2578]: I0413 20:09:50.475495 2578 factory.go:223] Registration of the containerd container factory successfully Apr 13 20:09:50.475543 kubelet[2578]: I0413 20:09:50.475513 2578 factory.go:223] Registration of the systemd container factory successfully Apr 13 20:09:50.475622 kubelet[2578]: I0413 20:09:50.475578 2578 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 20:09:50.476882 kubelet[2578]: I0413 20:09:50.476670 2578 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 13 20:09:50.476882 kubelet[2578]: I0413 20:09:50.476687 2578 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 13 20:09:50.476882 kubelet[2578]: I0413 20:09:50.476731 2578 kubelet.go:2428] "Starting kubelet main sync loop" Apr 13 20:09:50.476882 kubelet[2578]: E0413 20:09:50.476772 2578 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 20:09:50.534671 kubelet[2578]: I0413 20:09:50.533489 2578 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 20:09:50.534671 kubelet[2578]: I0413 20:09:50.533506 2578 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 20:09:50.534671 kubelet[2578]: I0413 20:09:50.533522 2578 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:09:50.534671 kubelet[2578]: I0413 20:09:50.533636 2578 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 13 20:09:50.534671 kubelet[2578]: I0413 20:09:50.533644 2578 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 13 20:09:50.534671 kubelet[2578]: I0413 20:09:50.533660 2578 policy_none.go:49] "None policy: Start" Apr 13 20:09:50.534671 kubelet[2578]: I0413 20:09:50.533669 2578 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 13 20:09:50.534671 kubelet[2578]: I0413 20:09:50.533679 2578 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 13 20:09:50.534671 kubelet[2578]: I0413 20:09:50.533756 2578 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 13 20:09:50.534671 kubelet[2578]: I0413 20:09:50.533763 2578 policy_none.go:47] "Start" Apr 13 20:09:50.539459 kubelet[2578]: E0413 20:09:50.539443 2578 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 20:09:50.539689 kubelet[2578]: I0413 20:09:50.539679 2578 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 20:09:50.539755 kubelet[2578]: I0413 20:09:50.539733 2578 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 20:09:50.539976 kubelet[2578]: I0413 20:09:50.539963 2578 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 20:09:50.541060 kubelet[2578]: E0413 20:09:50.541044 2578 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 13 20:09:50.577752 kubelet[2578]: I0413 20:09:50.577713 2578 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-193-191" Apr 13 20:09:50.578053 kubelet[2578]: I0413 20:09:50.578019 2578 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-193-191" Apr 13 20:09:50.578461 kubelet[2578]: I0413 20:09:50.577728 2578 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-193-191" Apr 13 20:09:50.641847 kubelet[2578]: I0413 20:09:50.641756 2578 kubelet_node_status.go:75] "Attempting to register node" node="172-239-193-191" Apr 13 20:09:50.650283 kubelet[2578]: I0413 20:09:50.650212 2578 kubelet_node_status.go:124] "Node was previously registered" node="172-239-193-191" Apr 13 20:09:50.650820 kubelet[2578]: I0413 20:09:50.650326 2578 kubelet_node_status.go:78] "Successfully registered node" node="172-239-193-191" Apr 13 20:09:50.658275 kubelet[2578]: I0413 20:09:50.658213 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a7cee3b909b3b9a186bd141c2b14ccbc-kubeconfig\") pod \"kube-scheduler-172-239-193-191\" (UID: \"a7cee3b909b3b9a186bd141c2b14ccbc\") " pod="kube-system/kube-scheduler-172-239-193-191" Apr 13 20:09:50.658275 kubelet[2578]: I0413 20:09:50.658251 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f7c20ec7074c40bff147c2ea83ddc093-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-193-191\" (UID: \"f7c20ec7074c40bff147c2ea83ddc093\") " pod="kube-system/kube-apiserver-172-239-193-191" Apr 13 20:09:50.658275 kubelet[2578]: I0413 20:09:50.658265 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b2bbfc1a4b128494f0ba34701a17b826-flexvolume-dir\") pod \"kube-controller-manager-172-239-193-191\" (UID: \"b2bbfc1a4b128494f0ba34701a17b826\") " pod="kube-system/kube-controller-manager-172-239-193-191" Apr 13 20:09:50.658470 kubelet[2578]: I0413 20:09:50.658286 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b2bbfc1a4b128494f0ba34701a17b826-k8s-certs\") pod \"kube-controller-manager-172-239-193-191\" (UID: \"b2bbfc1a4b128494f0ba34701a17b826\") " pod="kube-system/kube-controller-manager-172-239-193-191" Apr 13 20:09:50.658470 kubelet[2578]: I0413 20:09:50.658307 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f7c20ec7074c40bff147c2ea83ddc093-ca-certs\") pod \"kube-apiserver-172-239-193-191\" (UID: \"f7c20ec7074c40bff147c2ea83ddc093\") " pod="kube-system/kube-apiserver-172-239-193-191" Apr 13 20:09:50.658470 kubelet[2578]: I0413 20:09:50.658324 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f7c20ec7074c40bff147c2ea83ddc093-k8s-certs\") pod \"kube-apiserver-172-239-193-191\" (UID: \"f7c20ec7074c40bff147c2ea83ddc093\") " pod="kube-system/kube-apiserver-172-239-193-191" Apr 13 20:09:50.658470 kubelet[2578]: I0413 20:09:50.658341 2578 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b2bbfc1a4b128494f0ba34701a17b826-ca-certs\") pod \"kube-controller-manager-172-239-193-191\" (UID: \"b2bbfc1a4b128494f0ba34701a17b826\") " pod="kube-system/kube-controller-manager-172-239-193-191" Apr 13 20:09:50.658470 kubelet[2578]: I0413 20:09:50.658356 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b2bbfc1a4b128494f0ba34701a17b826-kubeconfig\") pod \"kube-controller-manager-172-239-193-191\" (UID: \"b2bbfc1a4b128494f0ba34701a17b826\") " pod="kube-system/kube-controller-manager-172-239-193-191" Apr 13 20:09:50.658912 kubelet[2578]: I0413 20:09:50.658372 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b2bbfc1a4b128494f0ba34701a17b826-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-193-191\" (UID: \"b2bbfc1a4b128494f0ba34701a17b826\") " pod="kube-system/kube-controller-manager-172-239-193-191" Apr 13 20:09:50.886721 kubelet[2578]: E0413 20:09:50.885986 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:50.888437 kubelet[2578]: E0413 20:09:50.887644 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:50.888437 kubelet[2578]: E0413 20:09:50.887937 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:51.433393 kubelet[2578]: I0413 20:09:51.433325 2578 apiserver.go:52] "Watching apiserver" Apr 13 20:09:51.459365 kubelet[2578]: I0413 20:09:51.459324 2578 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 20:09:51.504481 kubelet[2578]: E0413 20:09:51.504374 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:51.505104 kubelet[2578]: E0413 20:09:51.505085 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:51.505709 kubelet[2578]: I0413 20:09:51.505398 2578 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-193-191" Apr 13 20:09:51.521440 kubelet[2578]: E0413 20:09:51.521330 2578 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-239-193-191\" already exists" pod="kube-system/kube-controller-manager-172-239-193-191" Apr 13 20:09:51.522797 kubelet[2578]: E0413 20:09:51.522633 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:51.552449 kubelet[2578]: I0413 20:09:51.551904 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-172-239-193-191" podStartSLOduration=1.551891972 podStartE2EDuration="1.551891972s" podCreationTimestamp="2026-04-13 20:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:09:51.542736226 +0000 UTC m=+1.167271028" watchObservedRunningTime="2026-04-13 20:09:51.551891972 +0000 UTC m=+1.176426774" Apr 13 20:09:51.562375 kubelet[2578]: I0413 20:09:51.562323 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-239-193-191" podStartSLOduration=1.562310834 podStartE2EDuration="1.562310834s" podCreationTimestamp="2026-04-13 20:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:09:51.553159038 +0000 UTC m=+1.177693840" watchObservedRunningTime="2026-04-13 20:09:51.562310834 +0000 UTC m=+1.186845626" Apr 13 20:09:52.506275 kubelet[2578]: E0413 20:09:52.505886 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:52.506275 kubelet[2578]: E0413 20:09:52.506021 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:52.506275 kubelet[2578]: E0413 20:09:52.506217 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:53.507434 kubelet[2578]: E0413 20:09:53.507392 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:56.268393 kubelet[2578]: E0413 20:09:56.268366 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:56.282299 kubelet[2578]: I0413 20:09:56.282019 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-239-193-191" podStartSLOduration=6.282005565 podStartE2EDuration="6.282005565s" podCreationTimestamp="2026-04-13 20:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:09:51.56358605 +0000 UTC m=+1.188120842" watchObservedRunningTime="2026-04-13 20:09:56.282005565 +0000 UTC m=+5.906540367" Apr 13 20:09:56.512281 kubelet[2578]: E0413 20:09:56.512251 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:56.791043 kubelet[2578]: E0413 20:09:56.790966 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:56.823337 kubelet[2578]: I0413 20:09:56.823311 2578 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 13 20:09:56.823620 containerd[1472]: 
time="2026-04-13T20:09:56.823582005Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 13 20:09:56.823984 kubelet[2578]: I0413 20:09:56.823728 2578 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 13 20:09:57.516117 kubelet[2578]: E0413 20:09:57.514593 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:57.516117 kubelet[2578]: E0413 20:09:57.515051 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:57.609910 systemd[1]: Created slice kubepods-besteffort-pod023f415e_1754_476b_b5f8_04c37efeeb4c.slice - libcontainer container kubepods-besteffort-pod023f415e_1754_476b_b5f8_04c37efeeb4c.slice. Apr 13 20:09:57.705229 kubelet[2578]: I0413 20:09:57.704879 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/023f415e-1754-476b-b5f8-04c37efeeb4c-kube-proxy\") pod \"kube-proxy-d9grb\" (UID: \"023f415e-1754-476b-b5f8-04c37efeeb4c\") " pod="kube-system/kube-proxy-d9grb" Apr 13 20:09:57.705229 kubelet[2578]: I0413 20:09:57.704935 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/023f415e-1754-476b-b5f8-04c37efeeb4c-xtables-lock\") pod \"kube-proxy-d9grb\" (UID: \"023f415e-1754-476b-b5f8-04c37efeeb4c\") " pod="kube-system/kube-proxy-d9grb" Apr 13 20:09:57.705229 kubelet[2578]: I0413 20:09:57.704985 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dtf2\" (UniqueName: \"kubernetes.io/projected/023f415e-1754-476b-b5f8-04c37efeeb4c-kube-api-access-2dtf2\") pod \"kube-proxy-d9grb\" (UID: \"023f415e-1754-476b-b5f8-04c37efeeb4c\") " pod="kube-system/kube-proxy-d9grb" Apr 13 20:09:57.705229 kubelet[2578]: I0413 20:09:57.705200 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/023f415e-1754-476b-b5f8-04c37efeeb4c-lib-modules\") pod \"kube-proxy-d9grb\" (UID: \"023f415e-1754-476b-b5f8-04c37efeeb4c\") " pod="kube-system/kube-proxy-d9grb" Apr 13 20:09:57.921096 kubelet[2578]: E0413 20:09:57.920780 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:57.921948 containerd[1472]: time="2026-04-13T20:09:57.921914744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d9grb,Uid:023f415e-1754-476b-b5f8-04c37efeeb4c,Namespace:kube-system,Attempt:0,}" Apr 13 20:09:57.943997 containerd[1472]: time="2026-04-13T20:09:57.943760688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:57.943997 containerd[1472]: time="2026-04-13T20:09:57.943822947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:57.943997 containerd[1472]: time="2026-04-13T20:09:57.943842857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:57.943997 containerd[1472]: time="2026-04-13T20:09:57.943917575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:57.968546 systemd[1]: Started cri-containerd-62ea6a456b9a6a55952faafadb33560646eca7fc5c552f3a33e3cf217b5144f0.scope - libcontainer container 62ea6a456b9a6a55952faafadb33560646eca7fc5c552f3a33e3cf217b5144f0. Apr 13 20:09:57.993450 containerd[1472]: time="2026-04-13T20:09:57.992684553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d9grb,Uid:023f415e-1754-476b-b5f8-04c37efeeb4c,Namespace:kube-system,Attempt:0,} returns sandbox id \"62ea6a456b9a6a55952faafadb33560646eca7fc5c552f3a33e3cf217b5144f0\"" Apr 13 20:09:57.993859 kubelet[2578]: E0413 20:09:57.993832 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:57.997808 containerd[1472]: time="2026-04-13T20:09:57.997782991Z" level=info msg="CreateContainer within sandbox \"62ea6a456b9a6a55952faafadb33560646eca7fc5c552f3a33e3cf217b5144f0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 13 20:09:58.020035 containerd[1472]: time="2026-04-13T20:09:58.019999821Z" level=info msg="CreateContainer within sandbox \"62ea6a456b9a6a55952faafadb33560646eca7fc5c552f3a33e3cf217b5144f0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e940eeb727c68ffac34b78f81fc7653d47d4c486d97a65ff045201e67b27607d\"" Apr 13 20:09:58.020694 containerd[1472]: time="2026-04-13T20:09:58.020674730Z" level=info msg="StartContainer for \"e940eeb727c68ffac34b78f81fc7653d47d4c486d97a65ff045201e67b27607d\"" Apr 13 20:09:58.061552 systemd[1]: Started cri-containerd-e940eeb727c68ffac34b78f81fc7653d47d4c486d97a65ff045201e67b27607d.scope - libcontainer container e940eeb727c68ffac34b78f81fc7653d47d4c486d97a65ff045201e67b27607d. Apr 13 20:09:58.062460 systemd[1]: Created slice kubepods-besteffort-pod53599f73_d6ac_4619_8945_c1536a9153f5.slice - libcontainer container kubepods-besteffort-pod53599f73_d6ac_4619_8945_c1536a9153f5.slice. 
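The containerd lines above trace the standard CRI sequence for the kube-proxy pod: RunPodSandbox returns the sandbox id 62ea6a45..., CreateContainer places kube-proxy inside it, and StartContainer launches it. A compressed Go sketch of those three calls against the CRI runtime service is below; the socket path is assumed, image and mount configuration are elided, and error handling is omitted, so this shows the shape of the flow only.

```go
// cri_flow.go: the RunPodSandbox -> CreateContainer -> StartContainer
// sequence visible in the containerd log lines, reduced to its skeleton.
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.TODO()

	// 1. RunPodSandbox: returns the sandbox id (62ea6a45... in the log).
	sb, _ := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name: "kube-proxy-d9grb", Namespace: "kube-system", Attempt: 0,
			},
		},
	})

	// 2. CreateContainer inside that sandbox (image spec elided here),
	// then 3. StartContainer, matching the "returns successfully" line.
	c, _ := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
		},
	})
	_, _ = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: c.ContainerId,
	})
}
```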
Apr 13 20:09:58.095722 containerd[1472]: time="2026-04-13T20:09:58.095682328Z" level=info msg="StartContainer for \"e940eeb727c68ffac34b78f81fc7653d47d4c486d97a65ff045201e67b27607d\" returns successfully" Apr 13 20:09:58.107091 kubelet[2578]: I0413 20:09:58.107049 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h28r\" (UniqueName: \"kubernetes.io/projected/53599f73-d6ac-4619-8945-c1536a9153f5-kube-api-access-5h28r\") pod \"tigera-operator-5588576f44-q8j4v\" (UID: \"53599f73-d6ac-4619-8945-c1536a9153f5\") " pod="tigera-operator/tigera-operator-5588576f44-q8j4v" Apr 13 20:09:58.107091 kubelet[2578]: I0413 20:09:58.107086 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/53599f73-d6ac-4619-8945-c1536a9153f5-var-lib-calico\") pod \"tigera-operator-5588576f44-q8j4v\" (UID: \"53599f73-d6ac-4619-8945-c1536a9153f5\") " pod="tigera-operator/tigera-operator-5588576f44-q8j4v" Apr 13 20:09:58.369415 containerd[1472]: time="2026-04-13T20:09:58.368596999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-q8j4v,Uid:53599f73-d6ac-4619-8945-c1536a9153f5,Namespace:tigera-operator,Attempt:0,}" Apr 13 20:09:58.388971 containerd[1472]: time="2026-04-13T20:09:58.388793167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:58.388971 containerd[1472]: time="2026-04-13T20:09:58.388838176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:58.388971 containerd[1472]: time="2026-04-13T20:09:58.388851806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:58.388971 containerd[1472]: time="2026-04-13T20:09:58.388914105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:58.410565 systemd[1]: Started cri-containerd-f90b779d522facd8e2a04f0bf51a5f295fb9c5b5eee632edb7bcc0394d8c3b9c.scope - libcontainer container f90b779d522facd8e2a04f0bf51a5f295fb9c5b5eee632edb7bcc0394d8c3b9c. Apr 13 20:09:58.450745 containerd[1472]: time="2026-04-13T20:09:58.450061018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-q8j4v,Uid:53599f73-d6ac-4619-8945-c1536a9153f5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f90b779d522facd8e2a04f0bf51a5f295fb9c5b5eee632edb7bcc0394d8c3b9c\"" Apr 13 20:09:58.453832 containerd[1472]: time="2026-04-13T20:09:58.453627368Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 13 20:09:58.535491 kubelet[2578]: E0413 20:09:58.534947 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:58.535847 kubelet[2578]: E0413 20:09:58.534987 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:09:59.123745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2547451548.mount: Deactivated successfully. 
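Most kubelet entries in this log carry structured key="value" pairs (pod="...", UID="...", err="..."). A small Go parser for pulling those pairs out of a line, handy when sifting entries like the reconciler_common ones above; it is illustrative, and real klog output has more edge cases than this regexp covers.

```go
// klog_kv.go: extracts key="value" pairs from klog-style log lines.
package main

import (
	"fmt"
	"regexp"
)

// Matches key="value", allowing backslash-escaped quotes inside the value.
var kv = regexp.MustCompile(`(\w+)="((?:[^"\\]|\\.)*)"`)

func parse(line string) map[string]string {
	out := map[string]string{}
	for _, m := range kv.FindAllStringSubmatch(line, -1) {
		out[m[1]] = m[2]
	}
	return out
}

func main() {
	line := `I0413 20:09:58.107049 2578 reconciler_common.go:251] ` +
		`"operationExecutor.VerifyControllerAttachedVolume started" ` +
		`pod="tigera-operator/tigera-operator-5588576f44-q8j4v"`
	for k, v := range parse(line) {
		fmt.Printf("%s => %s\n", k, v)
	}
}
```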
Apr 13 20:09:59.955874 containerd[1472]: time="2026-04-13T20:09:59.955801125Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:59.956805 containerd[1472]: time="2026-04-13T20:09:59.956773880Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 13 20:09:59.957335 containerd[1472]: time="2026-04-13T20:09:59.957283752Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:59.959063 containerd[1472]: time="2026-04-13T20:09:59.959025954Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:59.960209 containerd[1472]: time="2026-04-13T20:09:59.959761922Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 1.506108075s" Apr 13 20:09:59.960209 containerd[1472]: time="2026-04-13T20:09:59.959789452Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 13 20:09:59.963975 containerd[1472]: time="2026-04-13T20:09:59.963918866Z" level=info msg="CreateContainer within sandbox \"f90b779d522facd8e2a04f0bf51a5f295fb9c5b5eee632edb7bcc0394d8c3b9c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 13 20:09:59.987547 containerd[1472]: time="2026-04-13T20:09:59.987521581Z" level=info msg="CreateContainer within sandbox \"f90b779d522facd8e2a04f0bf51a5f295fb9c5b5eee632edb7bcc0394d8c3b9c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2b52882ddc1145ea5b2480d057c0115111cda11b8282197753f51ab731f51416\"" Apr 13 20:09:59.987965 containerd[1472]: time="2026-04-13T20:09:59.987924084Z" level=info msg="StartContainer for \"2b52882ddc1145ea5b2480d057c0115111cda11b8282197753f51ab731f51416\"" Apr 13 20:10:00.014236 systemd[1]: run-containerd-runc-k8s.io-2b52882ddc1145ea5b2480d057c0115111cda11b8282197753f51ab731f51416-runc.dgdSLC.mount: Deactivated successfully. Apr 13 20:10:00.021582 systemd[1]: Started cri-containerd-2b52882ddc1145ea5b2480d057c0115111cda11b8282197753f51ab731f51416.scope - libcontainer container 2b52882ddc1145ea5b2480d057c0115111cda11b8282197753f51ab731f51416. 
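The pull record above gives both the byte count (bytes read=40846156) and the wall time (1.506108075s), which pins the effective pull rate at roughly 26 MiB/s. The arithmetic, with both numbers copied from the log:

```go
// pull_rate.go: back-of-the-envelope throughput for the tigera/operator
// pull recorded above. Only the calculation is new; the inputs are the
// log's own figures.
package main

import (
	"fmt"
	"time"
)

func main() {
	bytesRead := 40846156.0 // "active requests=0, bytes read=40846156"
	dur, _ := time.ParseDuration("1.506108075s")
	mibps := bytesRead / dur.Seconds() / (1 << 20)
	fmt.Printf("pull rate ~ %.1f MiB/s\n", mibps) // ~ 25.9 MiB/s
}
```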
Apr 13 20:10:00.051063 containerd[1472]: time="2026-04-13T20:10:00.050948573Z" level=info msg="StartContainer for \"2b52882ddc1145ea5b2480d057c0115111cda11b8282197753f51ab731f51416\" returns successfully" Apr 13 20:10:00.546471 kubelet[2578]: I0413 20:10:00.546401 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d9grb" podStartSLOduration=3.5463879780000003 podStartE2EDuration="3.546387978s" podCreationTimestamp="2026-04-13 20:09:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:09:58.545413241 +0000 UTC m=+8.169948033" watchObservedRunningTime="2026-04-13 20:10:00.546387978 +0000 UTC m=+10.170922770" Apr 13 20:10:00.546912 kubelet[2578]: I0413 20:10:00.546519 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-q8j4v" podStartSLOduration=1.037311173 podStartE2EDuration="2.546514236s" podCreationTimestamp="2026-04-13 20:09:58 +0000 UTC" firstStartedPulling="2026-04-13 20:09:58.451624552 +0000 UTC m=+8.076159344" lastFinishedPulling="2026-04-13 20:09:59.960827615 +0000 UTC m=+9.585362407" observedRunningTime="2026-04-13 20:10:00.545876325 +0000 UTC m=+10.170411117" watchObservedRunningTime="2026-04-13 20:10:00.546514236 +0000 UTC m=+10.171049028" Apr 13 20:10:02.977746 update_engine[1465]: I20260413 20:10:02.977665 1465 update_attempter.cc:509] Updating boot flags... Apr 13 20:10:03.101456 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2965) Apr 13 20:10:03.237473 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2961) Apr 13 20:10:03.495649 kubelet[2578]: E0413 20:10:03.495599 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:10:05.677725 sudo[1700]: pam_unix(sudo:session): session closed for user root Apr 13 20:10:05.796534 sshd[1697]: pam_unix(sshd:session): session closed for user core Apr 13 20:10:05.801384 systemd[1]: sshd@6-172.239.193.191:22-50.85.169.122:47058.service: Deactivated successfully. Apr 13 20:10:05.805687 systemd[1]: session-7.scope: Deactivated successfully. Apr 13 20:10:05.806905 systemd[1]: session-7.scope: Consumed 5.067s CPU time, 157.6M memory peak, 0B memory swap peak. Apr 13 20:10:05.810834 systemd-logind[1464]: Session 7 logged out. Waiting for processes to exit. Apr 13 20:10:05.811997 systemd-logind[1464]: Removed session 7. Apr 13 20:10:08.235621 systemd[1]: Created slice kubepods-besteffort-pod7033d77a_c4a0_42a3_94e7_484ba423d29a.slice - libcontainer container kubepods-besteffort-pod7033d77a_c4a0_42a3_94e7_484ba423d29a.slice. 
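The pod_startup_latency_tracker entries above can be cross-checked: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to additionally subtract the image-pull window (lastFinishedPulling minus firstStartedPulling); the tigera-operator numbers reproduce exactly under that reading. A short Go check using the timestamps copied from the log:

```go
// slo_math.go: reproduces the tigera-operator startup durations from the
// log's own timestamps. The interpretation of podStartSLOduration as
// "E2E minus pull time" is inferred from the numbers, not from source.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-04-13 20:09:58 +0000 UTC")
	running := mustParse("2026-04-13 20:10:00.546514236 +0000 UTC")
	pullStart := mustParse("2026-04-13 20:09:58.451624552 +0000 UTC")
	pullEnd := mustParse("2026-04-13 20:09:59.960827615 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println("podStartE2EDuration:", e2e) // 2.546514236s, as logged
	fmt.Println("podStartSLOduration:", slo) // 1.037311173s, as logged
}
```

For kube-proxy, whose pull timestamps are the zero value (0001-01-01), the two durations coincide at 3.546387978s, which is consistent with the same formula.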
Apr 13 20:10:08.285947 kubelet[2578]: I0413 20:10:08.285819 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7033d77a-c4a0-42a3-94e7-484ba423d29a-tigera-ca-bundle\") pod \"calico-typha-8c95ffc4b-rtqw8\" (UID: \"7033d77a-c4a0-42a3-94e7-484ba423d29a\") " pod="calico-system/calico-typha-8c95ffc4b-rtqw8" Apr 13 20:10:08.287364 kubelet[2578]: I0413 20:10:08.286001 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl5bn\" (UniqueName: \"kubernetes.io/projected/7033d77a-c4a0-42a3-94e7-484ba423d29a-kube-api-access-sl5bn\") pod \"calico-typha-8c95ffc4b-rtqw8\" (UID: \"7033d77a-c4a0-42a3-94e7-484ba423d29a\") " pod="calico-system/calico-typha-8c95ffc4b-rtqw8" Apr 13 20:10:08.287364 kubelet[2578]: I0413 20:10:08.286168 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7033d77a-c4a0-42a3-94e7-484ba423d29a-typha-certs\") pod \"calico-typha-8c95ffc4b-rtqw8\" (UID: \"7033d77a-c4a0-42a3-94e7-484ba423d29a\") " pod="calico-system/calico-typha-8c95ffc4b-rtqw8" Apr 13 20:10:08.327321 systemd[1]: Created slice kubepods-besteffort-podce4eff57_8a87_40fc_9408_ee2668c17617.slice - libcontainer container kubepods-besteffort-podce4eff57_8a87_40fc_9408_ee2668c17617.slice. Apr 13 20:10:08.386490 kubelet[2578]: I0413 20:10:08.386307 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ce4eff57-8a87-40fc-9408-ee2668c17617-flexvol-driver-host\") pod \"calico-node-8sv8w\" (UID: \"ce4eff57-8a87-40fc-9408-ee2668c17617\") " pod="calico-system/calico-node-8sv8w" Apr 13 20:10:08.386490 kubelet[2578]: I0413 20:10:08.386401 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/ce4eff57-8a87-40fc-9408-ee2668c17617-nodeproc\") pod \"calico-node-8sv8w\" (UID: \"ce4eff57-8a87-40fc-9408-ee2668c17617\") " pod="calico-system/calico-node-8sv8w" Apr 13 20:10:08.386490 kubelet[2578]: I0413 20:10:08.386464 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ce4eff57-8a87-40fc-9408-ee2668c17617-policysync\") pod \"calico-node-8sv8w\" (UID: \"ce4eff57-8a87-40fc-9408-ee2668c17617\") " pod="calico-system/calico-node-8sv8w" Apr 13 20:10:08.386490 kubelet[2578]: I0413 20:10:08.386493 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ce4eff57-8a87-40fc-9408-ee2668c17617-var-lib-calico\") pod \"calico-node-8sv8w\" (UID: \"ce4eff57-8a87-40fc-9408-ee2668c17617\") " pod="calico-system/calico-node-8sv8w" Apr 13 20:10:08.386490 kubelet[2578]: I0413 20:10:08.386523 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ce4eff57-8a87-40fc-9408-ee2668c17617-cni-net-dir\") pod \"calico-node-8sv8w\" (UID: \"ce4eff57-8a87-40fc-9408-ee2668c17617\") " pod="calico-system/calico-node-8sv8w" Apr 13 20:10:08.387848 kubelet[2578]: I0413 20:10:08.387149 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: 
\"kubernetes.io/host-path/ce4eff57-8a87-40fc-9408-ee2668c17617-bpffs\") pod \"calico-node-8sv8w\" (UID: \"ce4eff57-8a87-40fc-9408-ee2668c17617\") " pod="calico-system/calico-node-8sv8w" Apr 13 20:10:08.387848 kubelet[2578]: I0413 20:10:08.387180 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ce4eff57-8a87-40fc-9408-ee2668c17617-cni-log-dir\") pod \"calico-node-8sv8w\" (UID: \"ce4eff57-8a87-40fc-9408-ee2668c17617\") " pod="calico-system/calico-node-8sv8w" Apr 13 20:10:08.387848 kubelet[2578]: I0413 20:10:08.387222 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce4eff57-8a87-40fc-9408-ee2668c17617-xtables-lock\") pod \"calico-node-8sv8w\" (UID: \"ce4eff57-8a87-40fc-9408-ee2668c17617\") " pod="calico-system/calico-node-8sv8w" Apr 13 20:10:08.387848 kubelet[2578]: I0413 20:10:08.387247 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgndg\" (UniqueName: \"kubernetes.io/projected/ce4eff57-8a87-40fc-9408-ee2668c17617-kube-api-access-cgndg\") pod \"calico-node-8sv8w\" (UID: \"ce4eff57-8a87-40fc-9408-ee2668c17617\") " pod="calico-system/calico-node-8sv8w" Apr 13 20:10:08.387848 kubelet[2578]: I0413 20:10:08.387272 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce4eff57-8a87-40fc-9408-ee2668c17617-lib-modules\") pod \"calico-node-8sv8w\" (UID: \"ce4eff57-8a87-40fc-9408-ee2668c17617\") " pod="calico-system/calico-node-8sv8w" Apr 13 20:10:08.388081 kubelet[2578]: I0413 20:10:08.387295 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/ce4eff57-8a87-40fc-9408-ee2668c17617-sys-fs\") pod \"calico-node-8sv8w\" (UID: \"ce4eff57-8a87-40fc-9408-ee2668c17617\") " pod="calico-system/calico-node-8sv8w" Apr 13 20:10:08.388081 kubelet[2578]: I0413 20:10:08.387316 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce4eff57-8a87-40fc-9408-ee2668c17617-tigera-ca-bundle\") pod \"calico-node-8sv8w\" (UID: \"ce4eff57-8a87-40fc-9408-ee2668c17617\") " pod="calico-system/calico-node-8sv8w" Apr 13 20:10:08.388081 kubelet[2578]: I0413 20:10:08.387341 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ce4eff57-8a87-40fc-9408-ee2668c17617-var-run-calico\") pod \"calico-node-8sv8w\" (UID: \"ce4eff57-8a87-40fc-9408-ee2668c17617\") " pod="calico-system/calico-node-8sv8w" Apr 13 20:10:08.388081 kubelet[2578]: I0413 20:10:08.387362 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ce4eff57-8a87-40fc-9408-ee2668c17617-cni-bin-dir\") pod \"calico-node-8sv8w\" (UID: \"ce4eff57-8a87-40fc-9408-ee2668c17617\") " pod="calico-system/calico-node-8sv8w" Apr 13 20:10:08.388081 kubelet[2578]: I0413 20:10:08.387396 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ce4eff57-8a87-40fc-9408-ee2668c17617-node-certs\") pod \"calico-node-8sv8w\" 
(UID: \"ce4eff57-8a87-40fc-9408-ee2668c17617\") " pod="calico-system/calico-node-8sv8w" Apr 13 20:10:08.455784 kubelet[2578]: E0413 20:10:08.455737 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4mzgf" podUID="b083f9b4-7da6-4a64-b37b-aa5d508c2e7f" Apr 13 20:10:08.489571 kubelet[2578]: I0413 20:10:08.488299 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b083f9b4-7da6-4a64-b37b-aa5d508c2e7f-kubelet-dir\") pod \"csi-node-driver-4mzgf\" (UID: \"b083f9b4-7da6-4a64-b37b-aa5d508c2e7f\") " pod="calico-system/csi-node-driver-4mzgf" Apr 13 20:10:08.489571 kubelet[2578]: I0413 20:10:08.488487 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b083f9b4-7da6-4a64-b37b-aa5d508c2e7f-registration-dir\") pod \"csi-node-driver-4mzgf\" (UID: \"b083f9b4-7da6-4a64-b37b-aa5d508c2e7f\") " pod="calico-system/csi-node-driver-4mzgf" Apr 13 20:10:08.489571 kubelet[2578]: I0413 20:10:08.488518 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b083f9b4-7da6-4a64-b37b-aa5d508c2e7f-socket-dir\") pod \"csi-node-driver-4mzgf\" (UID: \"b083f9b4-7da6-4a64-b37b-aa5d508c2e7f\") " pod="calico-system/csi-node-driver-4mzgf" Apr 13 20:10:08.489571 kubelet[2578]: I0413 20:10:08.488542 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b083f9b4-7da6-4a64-b37b-aa5d508c2e7f-varrun\") pod \"csi-node-driver-4mzgf\" (UID: \"b083f9b4-7da6-4a64-b37b-aa5d508c2e7f\") " pod="calico-system/csi-node-driver-4mzgf" Apr 13 20:10:08.489571 kubelet[2578]: I0413 20:10:08.488592 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npfp6\" (UniqueName: \"kubernetes.io/projected/b083f9b4-7da6-4a64-b37b-aa5d508c2e7f-kube-api-access-npfp6\") pod \"csi-node-driver-4mzgf\" (UID: \"b083f9b4-7da6-4a64-b37b-aa5d508c2e7f\") " pod="calico-system/csi-node-driver-4mzgf" Apr 13 20:10:08.503288 kubelet[2578]: E0413 20:10:08.503259 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:08.503571 kubelet[2578]: W0413 20:10:08.503484 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:08.504637 kubelet[2578]: E0413 20:10:08.504616 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:08.520756 kubelet[2578]: E0413 20:10:08.520703 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:08.520756 kubelet[2578]: W0413 20:10:08.520744 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:08.520877 kubelet[2578]: E0413 20:10:08.520776 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:08.546333 kubelet[2578]: E0413 20:10:08.546297 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:10:08.548102 containerd[1472]: time="2026-04-13T20:10:08.547917041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8c95ffc4b-rtqw8,Uid:7033d77a-c4a0-42a3-94e7-484ba423d29a,Namespace:calico-system,Attempt:0,}" Apr 13 20:10:08.572541 containerd[1472]: time="2026-04-13T20:10:08.571771389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:08.572541 containerd[1472]: time="2026-04-13T20:10:08.571834328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:08.572541 containerd[1472]: time="2026-04-13T20:10:08.571855528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:08.572541 containerd[1472]: time="2026-04-13T20:10:08.571956417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:08.590150 kubelet[2578]: E0413 20:10:08.590114 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:08.590150 kubelet[2578]: W0413 20:10:08.590142 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:08.590667 kubelet[2578]: E0413 20:10:08.590167 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:08.590667 kubelet[2578]: E0413 20:10:08.590587 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:08.590667 kubelet[2578]: W0413 20:10:08.590597 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:08.590667 kubelet[2578]: E0413 20:10:08.590608 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:08.591771 kubelet[2578]: E0413 20:10:08.590855 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:08.591771 kubelet[2578]: W0413 20:10:08.590864 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:08.591771 kubelet[2578]: E0413 20:10:08.590873 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:08.591771 kubelet[2578]: E0413 20:10:08.591077 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:08.591771 kubelet[2578]: W0413 20:10:08.591085 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:08.591771 kubelet[2578]: E0413 20:10:08.591094 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:08.591771 kubelet[2578]: E0413 20:10:08.591299 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:08.591771 kubelet[2578]: W0413 20:10:08.591308 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:08.591771 kubelet[2578]: E0413 20:10:08.591316 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:08.591771 kubelet[2578]: E0413 20:10:08.591609 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:08.591970 kubelet[2578]: W0413 20:10:08.591618 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:08.591970 kubelet[2578]: E0413 20:10:08.591628 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:08.591970 kubelet[2578]: E0413 20:10:08.591901 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:08.591970 kubelet[2578]: W0413 20:10:08.591910 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:08.591970 kubelet[2578]: E0413 20:10:08.591920 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:08.592405 kubelet[2578]: E0413 20:10:08.592390 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:08.592474 kubelet[2578]: W0413 20:10:08.592405 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:08.592474 kubelet[2578]: E0413 20:10:08.592444 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:08.596581 systemd[1]: Started cri-containerd-bf1fc0367e1154a021727d87fc1d2a1863ae947c23f7d06b0cec77991997bef0.scope - libcontainer container bf1fc0367e1154a021727d87fc1d2a1863ae947c23f7d06b0cec77991997bef0.
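The burst of kubelet errors above (collapsed here to a single representative triplet; it repeats with only timestamps changing through 20:10:08.613) is one pattern: the FlexVolume prober execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init, the binary does not exist yet, and the empty stdout fails JSON decoding ("unexpected end of JSON input"). A driver that did exist would have to answer init with a JSON status object. A minimal sketch of that handshake, assuming the documented FlexVolume response shape (the stub itself is illustrative, not Calico's actual driver):

```go
// flexvolume_stub.go - a minimal sketch of the handshake the kubelet's
// FlexVolume prober expects: the driver binary is invoked as "<driver> init"
// and must print a JSON status object on stdout. The empty output seen in
// this log is exactly what produces "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the documented FlexVolume response shape
// (status plus optional capabilities); the field set is illustrative.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any verb this stub does not implement is reported as not supported.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
```

Installing such a binary at the probed path is precisely the job of the flexvol-driver init container that appears later in this log.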
Apr 13 20:10:08.638330 containerd[1472]: time="2026-04-13T20:10:08.637948380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8sv8w,Uid:ce4eff57-8a87-40fc-9408-ee2668c17617,Namespace:calico-system,Attempt:0,}" Apr 13 20:10:08.657395 containerd[1472]: time="2026-04-13T20:10:08.657352838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8c95ffc4b-rtqw8,Uid:7033d77a-c4a0-42a3-94e7-484ba423d29a,Namespace:calico-system,Attempt:0,} returns sandbox id \"bf1fc0367e1154a021727d87fc1d2a1863ae947c23f7d06b0cec77991997bef0\"" Apr 13 20:10:08.659372 kubelet[2578]: E0413 20:10:08.659011 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:10:08.660502 containerd[1472]: time="2026-04-13T20:10:08.660477590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 13 20:10:08.678486 containerd[1472]: time="2026-04-13T20:10:08.678355701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:08.678699 containerd[1472]: time="2026-04-13T20:10:08.678653638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:08.678936 containerd[1472]: time="2026-04-13T20:10:08.678802737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:08.679222 containerd[1472]: time="2026-04-13T20:10:08.679141444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:08.704755 systemd[1]: Started cri-containerd-2b522c6d7e5a17eb29710984881d9de68a171667b6dbe8efad20f0e6a93d419c.scope - libcontainer container 2b522c6d7e5a17eb29710984881d9de68a171667b6dbe8efad20f0e6a93d419c. Apr 13 20:10:08.737437 containerd[1472]: time="2026-04-13T20:10:08.737389056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8sv8w,Uid:ce4eff57-8a87-40fc-9408-ee2668c17617,Namespace:calico-system,Attempt:0,} returns sandbox id \"2b522c6d7e5a17eb29710984881d9de68a171667b6dbe8efad20f0e6a93d419c\"" Apr 13 20:10:09.475254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3463794401.mount: Deactivated successfully.
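The dns.go:154 entry is the kubelet trimming the pod's resolver list: the classic libc resolver honors at most three nameserver entries, so the kubelet keeps the first three and logs the line it actually applied. A sketch of that clamp (the fourth resolver below is hypothetical, added only to trigger the trim; this is not the kubelet's code):

```go
// nameserver_clamp.go - illustrative reproduction of the behaviour behind
// the "Nameserver limits exceeded" warning: a pod's effective resolv.conf
// is capped at three nameservers, the historical glibc limit.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // the kubelet's validation limit

func clamp(nameservers []string) []string {
	if len(nameservers) > maxNameservers {
		return nameservers[:maxNameservers]
	}
	return nameservers
}

func main() {
	// The three applied resolvers come from the log; 172.232.0.99 is a
	// hypothetical extra upstream entry that would be dropped.
	ns := []string{"172.232.0.16", "172.232.0.21", "172.232.0.13", "172.232.0.99"}
	fmt.Println("applied nameserver line:", strings.Join(clamp(ns), " "))
}
```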
Apr 13 20:10:09.924460 containerd[1472]: time="2026-04-13T20:10:09.923853786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:09.924873 containerd[1472]: time="2026-04-13T20:10:09.924628540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 13 20:10:09.925810 containerd[1472]: time="2026-04-13T20:10:09.925789190Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:09.928487 containerd[1472]: time="2026-04-13T20:10:09.927453926Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:09.928487 containerd[1472]: time="2026-04-13T20:10:09.928368749Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.26773961s" Apr 13 20:10:09.928487 containerd[1472]: time="2026-04-13T20:10:09.928393358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 13 20:10:09.930543 containerd[1472]: time="2026-04-13T20:10:09.929627248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 13 20:10:09.954760 containerd[1472]: time="2026-04-13T20:10:09.954706059Z" level=info msg="CreateContainer within sandbox \"bf1fc0367e1154a021727d87fc1d2a1863ae947c23f7d06b0cec77991997bef0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 13 20:10:09.962807 containerd[1472]: time="2026-04-13T20:10:09.962782232Z" level=info msg="CreateContainer within sandbox \"bf1fc0367e1154a021727d87fc1d2a1863ae947c23f7d06b0cec77991997bef0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bc361a488f13bb1ea0d1984c96de171c6a9f8d55f8835b3b2f435efbc8dcc89a\"" Apr 13 20:10:09.963470 containerd[1472]: time="2026-04-13T20:10:09.963238358Z" level=info msg="StartContainer for \"bc361a488f13bb1ea0d1984c96de171c6a9f8d55f8835b3b2f435efbc8dcc89a\"" Apr 13 20:10:09.993592 systemd[1]: Started cri-containerd-bc361a488f13bb1ea0d1984c96de171c6a9f8d55f8835b3b2f435efbc8dcc89a.scope - libcontainer container bc361a488f13bb1ea0d1984c96de171c6a9f8d55f8835b3b2f435efbc8dcc89a. 
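As a sanity check on the typha pull above: 36,107,450 bytes in 1.26773961 s works out to roughly 28.5 MB/s of effective registry throughput. A one-liner to reproduce the arithmetic:

```go
// pull_rate.go - back-of-envelope check on the calico/typha:v3.31.4 pull
// reported by containerd above.
package main

import "fmt"

func main() {
	const bytesPulled = 36107450 // repo size logged for the pulled image
	const seconds = 1.26773961   // pull duration logged by containerd
	rate := bytesPulled / seconds
	fmt.Printf("effective pull rate: %.1f MB/s (%.1f MiB/s)\n",
		rate/1e6, rate/(1<<20)) // ~28.5 MB/s, ~27.2 MiB/s
}
```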
Apr 13 20:10:10.038736 containerd[1472]: time="2026-04-13T20:10:10.038698649Z" level=info msg="StartContainer for \"bc361a488f13bb1ea0d1984c96de171c6a9f8d55f8835b3b2f435efbc8dcc89a\" returns successfully" Apr 13 20:10:10.479120 kubelet[2578]: E0413 20:10:10.477549 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4mzgf" podUID="b083f9b4-7da6-4a64-b37b-aa5d508c2e7f" Apr 13 20:10:10.570174 kubelet[2578]: E0413 20:10:10.569686 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:10:10.586914 kubelet[2578]: I0413 20:10:10.585442 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8c95ffc4b-rtqw8" podStartSLOduration=1.316413036 podStartE2EDuration="2.585412495s" podCreationTimestamp="2026-04-13 20:10:08 +0000 UTC" firstStartedPulling="2026-04-13 20:10:08.660172613 +0000 UTC m=+18.284707405" lastFinishedPulling="2026-04-13 20:10:09.929172072 +0000 UTC m=+19.553706864" observedRunningTime="2026-04-13 20:10:10.585288626 +0000 UTC m=+20.209823428" watchObservedRunningTime="2026-04-13 20:10:10.585412495 +0000 UTC m=+20.209947287" Apr 13 20:10:10.591271 kubelet[2578]: E0413 20:10:10.591241 2578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:10.591543 kubelet[2578]: W0413 20:10:10.591400 2578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:10.591543 kubelet[2578]: E0413 20:10:10.591474 2578 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
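The pod_startup_latency_tracker entry above decomposes cleanly: lastFinishedPulling minus firstStartedPulling gives a 1.268999459 s image-pull window, and subtracting that from podStartE2EDuration (2.585412495 s) yields exactly the reported podStartSLOduration of 1.316413036 s, i.e. the SLO metric is end-to-end startup time excluding image pulls:

```go
// startup_latency.go - reproduces the arithmetic of the
// pod_startup_latency_tracker entry for calico-typha-8c95ffc4b-rtqw8.
package main

import "fmt"

func main() {
	const (
		e2e          = 2.585412495 // podStartE2EDuration, seconds
		firstPull    = 8.660172613 // firstStartedPulling, seconds past 20:10:00
		finishedPull = 9.929172072 // lastFinishedPulling, seconds past 20:10:00
	)
	pullWindow := finishedPull - firstPull
	slo := e2e - pullWindow
	// Prints ~1.268999459s and ~1.316413036s, matching the logged values.
	fmt.Printf("pull window: %.9fs, SLO duration: %.9fs\n", pullWindow, slo)
}
```

(The FlexVolume probe burst then resumes, again one identical triplet repeating through 20:10:10.618; it is collapsed here to the single instance above.)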
Apr 13 20:10:10.668320 containerd[1472]: time="2026-04-13T20:10:10.668270597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:10.669317 containerd[1472]: time="2026-04-13T20:10:10.669278439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 13 20:10:10.669819 containerd[1472]: time="2026-04-13T20:10:10.669760835Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:10.673122 containerd[1472]: time="2026-04-13T20:10:10.672073537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:10.673122 containerd[1472]: time="2026-04-13T20:10:10.672924571Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 743.249463ms" Apr 13 20:10:10.673122 containerd[1472]: time="2026-04-13T20:10:10.672947961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 13 20:10:10.676184 containerd[1472]: time="2026-04-13T20:10:10.676138926Z" level=info msg="CreateContainer within sandbox \"2b522c6d7e5a17eb29710984881d9de68a171667b6dbe8efad20f0e6a93d419c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 13 20:10:10.700582 containerd[1472]: time="2026-04-13T20:10:10.700539165Z" level=info msg="CreateContainer within sandbox \"2b522c6d7e5a17eb29710984881d9de68a171667b6dbe8efad20f0e6a93d419c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b1315e8b932218542c41a3b4a58ba84ce161fa7600c738c5449fb0e1f4197b8d\"" Apr 13 20:10:10.701477 containerd[1472]: time="2026-04-13T20:10:10.701451008Z" level=info msg="StartContainer for \"b1315e8b932218542c41a3b4a58ba84ce161fa7600c738c5449fb0e1f4197b8d\"" Apr 13 20:10:10.741578 systemd[1]: Started cri-containerd-b1315e8b932218542c41a3b4a58ba84ce161fa7600c738c5449fb0e1f4197b8d.scope - libcontainer container b1315e8b932218542c41a3b4a58ba84ce161fa7600c738c5449fb0e1f4197b8d. Apr 13 20:10:10.778261 containerd[1472]: time="2026-04-13T20:10:10.778230057Z" level=info msg="StartContainer for \"b1315e8b932218542c41a3b4a58ba84ce161fa7600c738c5449fb0e1f4197b8d\" returns successfully" Apr 13 20:10:10.797095 systemd[1]: cri-containerd-b1315e8b932218542c41a3b4a58ba84ce161fa7600c738c5449fb0e1f4197b8d.scope: Deactivated successfully. Apr 13 20:10:10.824276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1315e8b932218542c41a3b4a58ba84ce161fa7600c738c5449fb0e1f4197b8d-rootfs.mount: Deactivated successfully.
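flexvol-driver is the first init container of the calico-node pod, and its whole job is to place a driver binary at the path the kubelet prober has been probing; it starts, copies, and exits, which is why its systemd scope is deactivated moments after the successful StartContainer. An illustrative sketch of that step (only the destination path comes from the log; the source path and copy logic are assumptions, not Calico's implementation):

```go
// install_flexvol.go - sketch of what the flexvol-driver init container
// accomplishes: install a driver binary at the exact path the kubelet's
// FlexVolume prober was failing to exec earlier in this log.
package main

import (
	"io"
	"log"
	"os"
	"path/filepath"
)

func main() {
	src := "/usr/local/bin/flexvol" // hypothetical source inside the image
	dst := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		log.Fatal(err)
	}
	in, err := os.Open(src)
	if err != nil {
		log.Fatal(err)
	}
	defer in.Close()
	// The binary must be executable for the prober's exec to succeed.
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	if _, err := io.Copy(out, in); err != nil {
		log.Fatal(err)
	}
}
```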
Apr 13 20:10:10.893207 containerd[1472]: time="2026-04-13T20:10:10.893114289Z" level=info msg="shim disconnected" id=b1315e8b932218542c41a3b4a58ba84ce161fa7600c738c5449fb0e1f4197b8d namespace=k8s.io Apr 13 20:10:10.893207 containerd[1472]: time="2026-04-13T20:10:10.893171819Z" level=warning msg="cleaning up after shim disconnected" id=b1315e8b932218542c41a3b4a58ba84ce161fa7600c738c5449fb0e1f4197b8d namespace=k8s.io Apr 13 20:10:10.893207 containerd[1472]: time="2026-04-13T20:10:10.893181439Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:10:11.572946 kubelet[2578]: I0413 20:10:11.572902 2578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:10:11.573478 kubelet[2578]: E0413 20:10:11.573269 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:10:11.575495 containerd[1472]: time="2026-04-13T20:10:11.575278308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 13 20:10:12.482130 kubelet[2578]: E0413 20:10:12.482038 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4mzgf" podUID="b083f9b4-7da6-4a64-b37b-aa5d508c2e7f" Apr 13 20:10:14.480303 kubelet[2578]: E0413 20:10:14.480029 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4mzgf" podUID="b083f9b4-7da6-4a64-b37b-aa5d508c2e7f" Apr 13 20:10:15.059949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3190603365.mount: Deactivated successfully. 
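The "shim disconnected" messages are containerd's runc v2 shim shutting down once the init container exits; the kubelet then observes the exit over the CRI and moves to the next init container. A standalone sketch of that observation path, assuming containerd's default socket, the flexvol-driver container ID from this log, and the k8s.io/cri-api and grpc-go modules (this is a debugging aid, not kubelet code):

```go
// container_exit.go - query the CRI for why a container such as the
// flexvol-driver init container stopped, the same API surface the kubelet
// drives containerd with. Error handling is deliberately minimal.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Container ID taken from the log above.
	resp, err := client.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{
		ContainerId: "b1315e8b932218542c41a3b4a58ba84ce161fa7600c738c5449fb0e1f4197b8d",
	})
	if err != nil {
		log.Fatal(err)
	}
	s := resp.GetStatus()
	fmt.Printf("state=%s exit=%d reason=%q\n", s.GetState(), s.GetExitCode(), s.GetReason())
}
```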
Apr 13 20:10:15.094846 containerd[1472]: time="2026-04-13T20:10:15.094787508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:15.095896 containerd[1472]: time="2026-04-13T20:10:15.095838692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 13 20:10:15.097448 containerd[1472]: time="2026-04-13T20:10:15.096239150Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:15.098312 containerd[1472]: time="2026-04-13T20:10:15.098283028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:15.099125 containerd[1472]: time="2026-04-13T20:10:15.099099464Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 3.523789156s" Apr 13 20:10:15.099205 containerd[1472]: time="2026-04-13T20:10:15.099189213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 13 20:10:15.103309 containerd[1472]: time="2026-04-13T20:10:15.103285250Z" level=info msg="CreateContainer within sandbox \"2b522c6d7e5a17eb29710984881d9de68a171667b6dbe8efad20f0e6a93d419c\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 13 20:10:15.118198 containerd[1472]: time="2026-04-13T20:10:15.118166556Z" level=info msg="CreateContainer within sandbox \"2b522c6d7e5a17eb29710984881d9de68a171667b6dbe8efad20f0e6a93d419c\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"87edff5fbf29958083383d16dd6ae61f528064d2ac73d98a5af5bb35e5faf46a\"" Apr 13 20:10:15.118757 containerd[1472]: time="2026-04-13T20:10:15.118720333Z" level=info msg="StartContainer for \"87edff5fbf29958083383d16dd6ae61f528064d2ac73d98a5af5bb35e5faf46a\"" Apr 13 20:10:15.151556 systemd[1]: Started cri-containerd-87edff5fbf29958083383d16dd6ae61f528064d2ac73d98a5af5bb35e5faf46a.scope - libcontainer container 87edff5fbf29958083383d16dd6ae61f528064d2ac73d98a5af5bb35e5faf46a. Apr 13 20:10:15.181171 containerd[1472]: time="2026-04-13T20:10:15.181121009Z" level=info msg="StartContainer for \"87edff5fbf29958083383d16dd6ae61f528064d2ac73d98a5af5bb35e5faf46a\" returns successfully" Apr 13 20:10:15.230840 systemd[1]: cri-containerd-87edff5fbf29958083383d16dd6ae61f528064d2ac73d98a5af5bb35e5faf46a.scope: Deactivated successfully. 
Apr 13 20:10:15.332342 containerd[1472]: time="2026-04-13T20:10:15.332207704Z" level=info msg="shim disconnected" id=87edff5fbf29958083383d16dd6ae61f528064d2ac73d98a5af5bb35e5faf46a namespace=k8s.io Apr 13 20:10:15.332342 containerd[1472]: time="2026-04-13T20:10:15.332255174Z" level=warning msg="cleaning up after shim disconnected" id=87edff5fbf29958083383d16dd6ae61f528064d2ac73d98a5af5bb35e5faf46a namespace=k8s.io Apr 13 20:10:15.332342 containerd[1472]: time="2026-04-13T20:10:15.332264614Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:10:15.585926 containerd[1472]: time="2026-04-13T20:10:15.584939354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 13 20:10:16.061032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87edff5fbf29958083383d16dd6ae61f528064d2ac73d98a5af5bb35e5faf46a-rootfs.mount: Deactivated successfully. Apr 13 20:10:16.477595 kubelet[2578]: E0413 20:10:16.477557 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4mzgf" podUID="b083f9b4-7da6-4a64-b37b-aa5d508c2e7f" Apr 13 20:10:17.385403 containerd[1472]: time="2026-04-13T20:10:17.385347279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:17.386529 containerd[1472]: time="2026-04-13T20:10:17.386325234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 13 20:10:17.388398 containerd[1472]: time="2026-04-13T20:10:17.386985971Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:17.389901 containerd[1472]: time="2026-04-13T20:10:17.389866757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:17.391041 containerd[1472]: time="2026-04-13T20:10:17.390782152Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 1.805787078s" Apr 13 20:10:17.391041 containerd[1472]: time="2026-04-13T20:10:17.390812092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 13 20:10:17.395502 containerd[1472]: time="2026-04-13T20:10:17.395458289Z" level=info msg="CreateContainer within sandbox \"2b522c6d7e5a17eb29710984881d9de68a171667b6dbe8efad20f0e6a93d419c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 13 20:10:17.410317 containerd[1472]: time="2026-04-13T20:10:17.410263005Z" level=info msg="CreateContainer within sandbox \"2b522c6d7e5a17eb29710984881d9de68a171667b6dbe8efad20f0e6a93d419c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c9195c93e141d9697bf2a85ea0cfd8653581e954dc1f44f076ab38a92a9ab537\"" Apr 13 20:10:17.413207 containerd[1472]: 
time="2026-04-13T20:10:17.413178201Z" level=info msg="StartContainer for \"c9195c93e141d9697bf2a85ea0cfd8653581e954dc1f44f076ab38a92a9ab537\"" Apr 13 20:10:17.447600 systemd[1]: run-containerd-runc-k8s.io-c9195c93e141d9697bf2a85ea0cfd8653581e954dc1f44f076ab38a92a9ab537-runc.4MS6Dv.mount: Deactivated successfully. Apr 13 20:10:17.460567 systemd[1]: Started cri-containerd-c9195c93e141d9697bf2a85ea0cfd8653581e954dc1f44f076ab38a92a9ab537.scope - libcontainer container c9195c93e141d9697bf2a85ea0cfd8653581e954dc1f44f076ab38a92a9ab537. Apr 13 20:10:17.493914 containerd[1472]: time="2026-04-13T20:10:17.493866149Z" level=info msg="StartContainer for \"c9195c93e141d9697bf2a85ea0cfd8653581e954dc1f44f076ab38a92a9ab537\" returns successfully" Apr 13 20:10:17.547709 kubelet[2578]: I0413 20:10:17.546956 2578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:10:17.547709 kubelet[2578]: E0413 20:10:17.547303 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:10:17.597090 kubelet[2578]: E0413 20:10:17.596963 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:10:18.080027 containerd[1472]: time="2026-04-13T20:10:18.079969088Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 20:10:18.084156 systemd[1]: cri-containerd-c9195c93e141d9697bf2a85ea0cfd8653581e954dc1f44f076ab38a92a9ab537.scope: Deactivated successfully. Apr 13 20:10:18.109879 kubelet[2578]: I0413 20:10:18.109765 2578 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 13 20:10:18.112025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9195c93e141d9697bf2a85ea0cfd8653581e954dc1f44f076ab38a92a9ab537-rootfs.mount: Deactivated successfully. Apr 13 20:10:18.144401 containerd[1472]: time="2026-04-13T20:10:18.144311668Z" level=info msg="shim disconnected" id=c9195c93e141d9697bf2a85ea0cfd8653581e954dc1f44f076ab38a92a9ab537 namespace=k8s.io Apr 13 20:10:18.144401 containerd[1472]: time="2026-04-13T20:10:18.144388197Z" level=warning msg="cleaning up after shim disconnected" id=c9195c93e141d9697bf2a85ea0cfd8653581e954dc1f44f076ab38a92a9ab537 namespace=k8s.io Apr 13 20:10:18.144401 containerd[1472]: time="2026-04-13T20:10:18.144397787Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:10:18.162802 containerd[1472]: time="2026-04-13T20:10:18.161819196Z" level=warning msg="cleanup warnings time=\"2026-04-13T20:10:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 13 20:10:18.184267 systemd[1]: Created slice kubepods-burstable-pod49c0b7cf_67f2_43e0_b1b8_972c29e78e65.slice - libcontainer container kubepods-burstable-pod49c0b7cf_67f2_43e0_b1b8_972c29e78e65.slice. Apr 13 20:10:18.204907 systemd[1]: Created slice kubepods-besteffort-pod92586f17_ea3a_4af3_aa4e_c720d02f8e41.slice - libcontainer container kubepods-besteffort-pod92586f17_ea3a_4af3_aa4e_c720d02f8e41.slice. 
Apr 13 20:10:18.213015 systemd[1]: Created slice kubepods-besteffort-podab7d1268_0475_4d90_b5c2_1c8713e6aafb.slice - libcontainer container kubepods-besteffort-podab7d1268_0475_4d90_b5c2_1c8713e6aafb.slice. Apr 13 20:10:18.223915 systemd[1]: Created slice kubepods-burstable-pod5778e305_4fb3_40cf_9eb5_2894d58c2771.slice - libcontainer container kubepods-burstable-pod5778e305_4fb3_40cf_9eb5_2894d58c2771.slice. Apr 13 20:10:18.233404 systemd[1]: Created slice kubepods-besteffort-pod3d4eb2d5_db0e_4d66_8113_637f0e2427c6.slice - libcontainer container kubepods-besteffort-pod3d4eb2d5_db0e_4d66_8113_637f0e2427c6.slice. Apr 13 20:10:18.246725 systemd[1]: Created slice kubepods-besteffort-pod41a53f62_b7f4_40f3_882b_8cc9702c76d5.slice - libcontainer container kubepods-besteffort-pod41a53f62_b7f4_40f3_882b_8cc9702c76d5.slice. Apr 13 20:10:18.248435 systemd[1]: Created slice kubepods-besteffort-pod46975966_bb29_4145_9c1a_fe60aed66e16.slice - libcontainer container kubepods-besteffort-pod46975966_bb29_4145_9c1a_fe60aed66e16.slice. Apr 13 20:10:18.268725 kubelet[2578]: I0413 20:10:18.268574 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49c0b7cf-67f2-43e0-b1b8-972c29e78e65-config-volume\") pod \"coredns-66bc5c9577-glg4w\" (UID: \"49c0b7cf-67f2-43e0-b1b8-972c29e78e65\") " pod="kube-system/coredns-66bc5c9577-glg4w" Apr 13 20:10:18.268725 kubelet[2578]: I0413 20:10:18.268610 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vl7d\" (UniqueName: \"kubernetes.io/projected/5778e305-4fb3-40cf-9eb5-2894d58c2771-kube-api-access-8vl7d\") pod \"coredns-66bc5c9577-vdbdt\" (UID: \"5778e305-4fb3-40cf-9eb5-2894d58c2771\") " pod="kube-system/coredns-66bc5c9577-vdbdt" Apr 13 20:10:18.268725 kubelet[2578]: I0413 20:10:18.268630 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85w8b\" (UniqueName: \"kubernetes.io/projected/41a53f62-b7f4-40f3-882b-8cc9702c76d5-kube-api-access-85w8b\") pod \"calico-apiserver-6df68c9d4f-jpz9w\" (UID: \"41a53f62-b7f4-40f3-882b-8cc9702c76d5\") " pod="calico-system/calico-apiserver-6df68c9d4f-jpz9w" Apr 13 20:10:18.268725 kubelet[2578]: I0413 20:10:18.268646 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7ccz\" (UniqueName: \"kubernetes.io/projected/49c0b7cf-67f2-43e0-b1b8-972c29e78e65-kube-api-access-r7ccz\") pod \"coredns-66bc5c9577-glg4w\" (UID: \"49c0b7cf-67f2-43e0-b1b8-972c29e78e65\") " pod="kube-system/coredns-66bc5c9577-glg4w" Apr 13 20:10:18.268725 kubelet[2578]: I0413 20:10:18.268661 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/46975966-bb29-4145-9c1a-fe60aed66e16-calico-apiserver-certs\") pod \"calico-apiserver-6df68c9d4f-lx9f8\" (UID: \"46975966-bb29-4145-9c1a-fe60aed66e16\") " pod="calico-system/calico-apiserver-6df68c9d4f-lx9f8" Apr 13 20:10:18.269020 kubelet[2578]: I0413 20:10:18.268680 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpq68\" (UniqueName: \"kubernetes.io/projected/46975966-bb29-4145-9c1a-fe60aed66e16-kube-api-access-vpq68\") pod \"calico-apiserver-6df68c9d4f-lx9f8\" (UID: \"46975966-bb29-4145-9c1a-fe60aed66e16\") " pod="calico-system/calico-apiserver-6df68c9d4f-lx9f8" 
Apr 13 20:10:18.269020 kubelet[2578]: I0413 20:10:18.268723 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/92586f17-ea3a-4af3-aa4e-c720d02f8e41-whisker-backend-key-pair\") pod \"whisker-75b5998949-fr9s5\" (UID: \"92586f17-ea3a-4af3-aa4e-c720d02f8e41\") " pod="calico-system/whisker-75b5998949-fr9s5" Apr 13 20:10:18.269020 kubelet[2578]: I0413 20:10:18.268752 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92586f17-ea3a-4af3-aa4e-c720d02f8e41-whisker-ca-bundle\") pod \"whisker-75b5998949-fr9s5\" (UID: \"92586f17-ea3a-4af3-aa4e-c720d02f8e41\") " pod="calico-system/whisker-75b5998949-fr9s5" Apr 13 20:10:18.269020 kubelet[2578]: I0413 20:10:18.268775 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3d4eb2d5-db0e-4d66-8113-637f0e2427c6-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-dvmjq\" (UID: \"3d4eb2d5-db0e-4d66-8113-637f0e2427c6\") " pod="calico-system/goldmane-cccfbd5cf-dvmjq" Apr 13 20:10:18.269020 kubelet[2578]: I0413 20:10:18.268799 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4gdb\" (UniqueName: \"kubernetes.io/projected/92586f17-ea3a-4af3-aa4e-c720d02f8e41-kube-api-access-g4gdb\") pod \"whisker-75b5998949-fr9s5\" (UID: \"92586f17-ea3a-4af3-aa4e-c720d02f8e41\") " pod="calico-system/whisker-75b5998949-fr9s5" Apr 13 20:10:18.269138 kubelet[2578]: I0413 20:10:18.268815 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v99qv\" (UniqueName: \"kubernetes.io/projected/ab7d1268-0475-4d90-b5c2-1c8713e6aafb-kube-api-access-v99qv\") pod \"calico-kube-controllers-7c7c48779c-gk7jr\" (UID: \"ab7d1268-0475-4d90-b5c2-1c8713e6aafb\") " pod="calico-system/calico-kube-controllers-7c7c48779c-gk7jr" Apr 13 20:10:18.269138 kubelet[2578]: I0413 20:10:18.268861 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d4eb2d5-db0e-4d66-8113-637f0e2427c6-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-dvmjq\" (UID: \"3d4eb2d5-db0e-4d66-8113-637f0e2427c6\") " pod="calico-system/goldmane-cccfbd5cf-dvmjq" Apr 13 20:10:18.269138 kubelet[2578]: I0413 20:10:18.268884 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/41a53f62-b7f4-40f3-882b-8cc9702c76d5-calico-apiserver-certs\") pod \"calico-apiserver-6df68c9d4f-jpz9w\" (UID: \"41a53f62-b7f4-40f3-882b-8cc9702c76d5\") " pod="calico-system/calico-apiserver-6df68c9d4f-jpz9w" Apr 13 20:10:18.269138 kubelet[2578]: I0413 20:10:18.268919 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5778e305-4fb3-40cf-9eb5-2894d58c2771-config-volume\") pod \"coredns-66bc5c9577-vdbdt\" (UID: \"5778e305-4fb3-40cf-9eb5-2894d58c2771\") " pod="kube-system/coredns-66bc5c9577-vdbdt" Apr 13 20:10:18.269138 kubelet[2578]: I0413 20:10:18.268939 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mtmg\" (UniqueName: 
\"kubernetes.io/projected/3d4eb2d5-db0e-4d66-8113-637f0e2427c6-kube-api-access-4mtmg\") pod \"goldmane-cccfbd5cf-dvmjq\" (UID: \"3d4eb2d5-db0e-4d66-8113-637f0e2427c6\") " pod="calico-system/goldmane-cccfbd5cf-dvmjq" Apr 13 20:10:18.269252 kubelet[2578]: I0413 20:10:18.268997 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/92586f17-ea3a-4af3-aa4e-c720d02f8e41-nginx-config\") pod \"whisker-75b5998949-fr9s5\" (UID: \"92586f17-ea3a-4af3-aa4e-c720d02f8e41\") " pod="calico-system/whisker-75b5998949-fr9s5" Apr 13 20:10:18.269252 kubelet[2578]: I0413 20:10:18.269022 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab7d1268-0475-4d90-b5c2-1c8713e6aafb-tigera-ca-bundle\") pod \"calico-kube-controllers-7c7c48779c-gk7jr\" (UID: \"ab7d1268-0475-4d90-b5c2-1c8713e6aafb\") " pod="calico-system/calico-kube-controllers-7c7c48779c-gk7jr" Apr 13 20:10:18.269252 kubelet[2578]: I0413 20:10:18.269044 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d4eb2d5-db0e-4d66-8113-637f0e2427c6-config\") pod \"goldmane-cccfbd5cf-dvmjq\" (UID: \"3d4eb2d5-db0e-4d66-8113-637f0e2427c6\") " pod="calico-system/goldmane-cccfbd5cf-dvmjq" Apr 13 20:10:18.487962 systemd[1]: Created slice kubepods-besteffort-podb083f9b4_7da6_4a64_b37b_aa5d508c2e7f.slice - libcontainer container kubepods-besteffort-podb083f9b4_7da6_4a64_b37b_aa5d508c2e7f.slice. Apr 13 20:10:18.493206 containerd[1472]: time="2026-04-13T20:10:18.493176390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4mzgf,Uid:b083f9b4-7da6-4a64-b37b-aa5d508c2e7f,Namespace:calico-system,Attempt:0,}" Apr 13 20:10:18.495777 kubelet[2578]: E0413 20:10:18.495354 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:10:18.497130 containerd[1472]: time="2026-04-13T20:10:18.497075762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-glg4w,Uid:49c0b7cf-67f2-43e0-b1b8-972c29e78e65,Namespace:kube-system,Attempt:0,}" Apr 13 20:10:18.516898 containerd[1472]: time="2026-04-13T20:10:18.516705541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75b5998949-fr9s5,Uid:92586f17-ea3a-4af3-aa4e-c720d02f8e41,Namespace:calico-system,Attempt:0,}" Apr 13 20:10:18.525540 containerd[1472]: time="2026-04-13T20:10:18.525512039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c7c48779c-gk7jr,Uid:ab7d1268-0475-4d90-b5c2-1c8713e6aafb,Namespace:calico-system,Attempt:0,}" Apr 13 20:10:18.534564 kubelet[2578]: E0413 20:10:18.532959 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:10:18.536516 containerd[1472]: time="2026-04-13T20:10:18.536444348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vdbdt,Uid:5778e305-4fb3-40cf-9eb5-2894d58c2771,Namespace:kube-system,Attempt:0,}" Apr 13 20:10:18.539339 containerd[1472]: time="2026-04-13T20:10:18.539312275Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-cccfbd5cf-dvmjq,Uid:3d4eb2d5-db0e-4d66-8113-637f0e2427c6,Namespace:calico-system,Attempt:0,}" Apr 13 20:10:18.558175 containerd[1472]: time="2026-04-13T20:10:18.558046348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df68c9d4f-lx9f8,Uid:46975966-bb29-4145-9c1a-fe60aed66e16,Namespace:calico-system,Attempt:0,}" Apr 13 20:10:18.558859 containerd[1472]: time="2026-04-13T20:10:18.558807634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df68c9d4f-jpz9w,Uid:41a53f62-b7f4-40f3-882b-8cc9702c76d5,Namespace:calico-system,Attempt:0,}" Apr 13 20:10:18.668338 containerd[1472]: time="2026-04-13T20:10:18.668039785Z" level=info msg="CreateContainer within sandbox \"2b522c6d7e5a17eb29710984881d9de68a171667b6dbe8efad20f0e6a93d419c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 13 20:10:18.768555 containerd[1472]: time="2026-04-13T20:10:18.768037748Z" level=info msg="CreateContainer within sandbox \"2b522c6d7e5a17eb29710984881d9de68a171667b6dbe8efad20f0e6a93d419c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f32a3687dfe2845ae4d3ef400d0cc5bc47532248408a9102a23197c5359b17d2\"" Apr 13 20:10:18.769702 containerd[1472]: time="2026-04-13T20:10:18.769633281Z" level=info msg="StartContainer for \"f32a3687dfe2845ae4d3ef400d0cc5bc47532248408a9102a23197c5359b17d2\"" Apr 13 20:10:18.834249 containerd[1472]: time="2026-04-13T20:10:18.834202879Z" level=error msg="Failed to destroy network for sandbox \"2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.834824 containerd[1472]: time="2026-04-13T20:10:18.834797427Z" level=error msg="encountered an error cleaning up failed sandbox \"2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.835109 containerd[1472]: time="2026-04-13T20:10:18.835084335Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-glg4w,Uid:49c0b7cf-67f2-43e0-b1b8-972c29e78e65,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.836273 kubelet[2578]: E0413 20:10:18.835394 2578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.836273 kubelet[2578]: E0413 20:10:18.835495 2578 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-glg4w" Apr 13 20:10:18.836273 kubelet[2578]: E0413 20:10:18.835517 2578 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-glg4w" Apr 13 20:10:18.837596 kubelet[2578]: E0413 20:10:18.835562 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-glg4w_kube-system(49c0b7cf-67f2-43e0-b1b8-972c29e78e65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-glg4w_kube-system(49c0b7cf-67f2-43e0-b1b8-972c29e78e65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-glg4w" podUID="49c0b7cf-67f2-43e0-b1b8-972c29e78e65" Apr 13 20:10:18.852766 containerd[1472]: time="2026-04-13T20:10:18.852721803Z" level=error msg="Failed to destroy network for sandbox \"3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.853078 containerd[1472]: time="2026-04-13T20:10:18.853049082Z" level=error msg="encountered an error cleaning up failed sandbox \"3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.853113 containerd[1472]: time="2026-04-13T20:10:18.853098421Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c7c48779c-gk7jr,Uid:ab7d1268-0475-4d90-b5c2-1c8713e6aafb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.853927 kubelet[2578]: E0413 20:10:18.853255 2578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.853927 kubelet[2578]: E0413 20:10:18.853311 2578 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c7c48779c-gk7jr" Apr 13 20:10:18.853927 kubelet[2578]: E0413 20:10:18.853330 2578 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c7c48779c-gk7jr" Apr 13 20:10:18.854044 kubelet[2578]: E0413 20:10:18.853391 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c7c48779c-gk7jr_calico-system(ab7d1268-0475-4d90-b5c2-1c8713e6aafb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c7c48779c-gk7jr_calico-system(ab7d1268-0475-4d90-b5c2-1c8713e6aafb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c7c48779c-gk7jr" podUID="ab7d1268-0475-4d90-b5c2-1c8713e6aafb" Apr 13 20:10:18.863386 systemd[1]: Started cri-containerd-f32a3687dfe2845ae4d3ef400d0cc5bc47532248408a9102a23197c5359b17d2.scope - libcontainer container f32a3687dfe2845ae4d3ef400d0cc5bc47532248408a9102a23197c5359b17d2. 
Apr 13 20:10:18.895471 containerd[1472]: time="2026-04-13T20:10:18.895381034Z" level=error msg="Failed to destroy network for sandbox \"ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.896186 containerd[1472]: time="2026-04-13T20:10:18.896131801Z" level=error msg="encountered an error cleaning up failed sandbox \"ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.896332 containerd[1472]: time="2026-04-13T20:10:18.896308060Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df68c9d4f-jpz9w,Uid:41a53f62-b7f4-40f3-882b-8cc9702c76d5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.897010 kubelet[2578]: E0413 20:10:18.896947 2578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.897073 kubelet[2578]: E0413 20:10:18.897022 2578 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6df68c9d4f-jpz9w" Apr 13 20:10:18.897073 kubelet[2578]: E0413 20:10:18.897042 2578 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6df68c9d4f-jpz9w" Apr 13 20:10:18.897139 kubelet[2578]: E0413 20:10:18.897092 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6df68c9d4f-jpz9w_calico-system(41a53f62-b7f4-40f3-882b-8cc9702c76d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6df68c9d4f-jpz9w_calico-system(41a53f62-b7f4-40f3-882b-8cc9702c76d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-apiserver-6df68c9d4f-jpz9w" podUID="41a53f62-b7f4-40f3-882b-8cc9702c76d5" Apr 13 20:10:18.898256 containerd[1472]: time="2026-04-13T20:10:18.898153491Z" level=error msg="Failed to destroy network for sandbox \"35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.899136 containerd[1472]: time="2026-04-13T20:10:18.898629459Z" level=error msg="encountered an error cleaning up failed sandbox \"35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.899136 containerd[1472]: time="2026-04-13T20:10:18.898677899Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4mzgf,Uid:b083f9b4-7da6-4a64-b37b-aa5d508c2e7f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.899236 kubelet[2578]: E0413 20:10:18.898913 2578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.899236 kubelet[2578]: E0413 20:10:18.898946 2578 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4mzgf" Apr 13 20:10:18.899236 kubelet[2578]: E0413 20:10:18.898963 2578 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4mzgf" Apr 13 20:10:18.899328 kubelet[2578]: E0413 20:10:18.899001 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4mzgf_calico-system(b083f9b4-7da6-4a64-b37b-aa5d508c2e7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4mzgf_calico-system(b083f9b4-7da6-4a64-b37b-aa5d508c2e7f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4mzgf" podUID="b083f9b4-7da6-4a64-b37b-aa5d508c2e7f" Apr 13 20:10:18.927062 containerd[1472]: time="2026-04-13T20:10:18.926911987Z" level=error msg="Failed to destroy network for sandbox \"215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.927530 containerd[1472]: time="2026-04-13T20:10:18.927474324Z" level=error msg="Failed to destroy network for sandbox \"8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.927953 containerd[1472]: time="2026-04-13T20:10:18.927929232Z" level=error msg="encountered an error cleaning up failed sandbox \"8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.928080 containerd[1472]: time="2026-04-13T20:10:18.928059132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75b5998949-fr9s5,Uid:92586f17-ea3a-4af3-aa4e-c720d02f8e41,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.928522 kubelet[2578]: E0413 20:10:18.928479 2578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.928576 containerd[1472]: time="2026-04-13T20:10:18.928502960Z" level=error msg="encountered an error cleaning up failed sandbox \"215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.928576 containerd[1472]: time="2026-04-13T20:10:18.928544529Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-dvmjq,Uid:3d4eb2d5-db0e-4d66-8113-637f0e2427c6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.928768 kubelet[2578]: E0413 20:10:18.928705 2578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.929044 kubelet[2578]: E0413 20:10:18.928728 2578 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75b5998949-fr9s5" Apr 13 20:10:18.929044 kubelet[2578]: E0413 20:10:18.928896 2578 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75b5998949-fr9s5" Apr 13 20:10:18.929044 kubelet[2578]: E0413 20:10:18.928737 2578 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-dvmjq" Apr 13 20:10:18.929044 kubelet[2578]: E0413 20:10:18.928968 2578 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-dvmjq" Apr 13 20:10:18.929159 kubelet[2578]: E0413 20:10:18.929000 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-dvmjq_calico-system(3d4eb2d5-db0e-4d66-8113-637f0e2427c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-dvmjq_calico-system(3d4eb2d5-db0e-4d66-8113-637f0e2427c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-dvmjq" podUID="3d4eb2d5-db0e-4d66-8113-637f0e2427c6" Apr 13 20:10:18.929773 kubelet[2578]: E0413 20:10:18.929601 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-75b5998949-fr9s5_calico-system(92586f17-ea3a-4af3-aa4e-c720d02f8e41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-75b5998949-fr9s5_calico-system(92586f17-ea3a-4af3-aa4e-c720d02f8e41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75b5998949-fr9s5" podUID="92586f17-ea3a-4af3-aa4e-c720d02f8e41" Apr 13 20:10:18.945116 containerd[1472]: time="2026-04-13T20:10:18.945083442Z" level=error msg="Failed to destroy network for sandbox \"1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.946297 containerd[1472]: time="2026-04-13T20:10:18.946177057Z" level=error msg="encountered an error cleaning up failed sandbox \"1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.946297 containerd[1472]: time="2026-04-13T20:10:18.946219557Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df68c9d4f-lx9f8,Uid:46975966-bb29-4145-9c1a-fe60aed66e16,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.946533 kubelet[2578]: E0413 20:10:18.946455 2578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.946533 kubelet[2578]: E0413 20:10:18.946497 2578 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6df68c9d4f-lx9f8" Apr 13 20:10:18.946533 kubelet[2578]: E0413 20:10:18.946514 2578 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6df68c9d4f-lx9f8" Apr 13 20:10:18.946854 kubelet[2578]: E0413 20:10:18.946562 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6df68c9d4f-lx9f8_calico-system(46975966-bb29-4145-9c1a-fe60aed66e16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-6df68c9d4f-lx9f8_calico-system(46975966-bb29-4145-9c1a-fe60aed66e16)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6df68c9d4f-lx9f8" podUID="46975966-bb29-4145-9c1a-fe60aed66e16" Apr 13 20:10:18.947552 containerd[1472]: time="2026-04-13T20:10:18.947530171Z" level=error msg="Failed to destroy network for sandbox \"45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.947870 containerd[1472]: time="2026-04-13T20:10:18.947848389Z" level=error msg="encountered an error cleaning up failed sandbox \"45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.947987 containerd[1472]: time="2026-04-13T20:10:18.947925889Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vdbdt,Uid:5778e305-4fb3-40cf-9eb5-2894d58c2771,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.948112 kubelet[2578]: E0413 20:10:18.948056 2578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:18.948147 kubelet[2578]: E0413 20:10:18.948124 2578 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-vdbdt" Apr 13 20:10:18.948147 kubelet[2578]: E0413 20:10:18.948140 2578 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-vdbdt" Apr 13 20:10:18.948231 kubelet[2578]: E0413 20:10:18.948171 2578 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-66bc5c9577-vdbdt_kube-system(5778e305-4fb3-40cf-9eb5-2894d58c2771)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-vdbdt_kube-system(5778e305-4fb3-40cf-9eb5-2894d58c2771)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-vdbdt" podUID="5778e305-4fb3-40cf-9eb5-2894d58c2771" Apr 13 20:10:18.973151 containerd[1472]: time="2026-04-13T20:10:18.973109942Z" level=info msg="StartContainer for \"f32a3687dfe2845ae4d3ef400d0cc5bc47532248408a9102a23197c5359b17d2\" returns successfully" Apr 13 20:10:19.413040 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6-shm.mount: Deactivated successfully. Apr 13 20:10:19.413172 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1-shm.mount: Deactivated successfully. Apr 13 20:10:19.618455 kubelet[2578]: I0413 20:10:19.616767 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" Apr 13 20:10:19.619386 containerd[1472]: time="2026-04-13T20:10:19.618663201Z" level=info msg="StopPodSandbox for \"ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c\"" Apr 13 20:10:19.619386 containerd[1472]: time="2026-04-13T20:10:19.618893800Z" level=info msg="Ensure that sandbox ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c in task-service has been cleanup successfully" Apr 13 20:10:19.622242 kubelet[2578]: I0413 20:10:19.622218 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" Apr 13 20:10:19.625402 containerd[1472]: time="2026-04-13T20:10:19.625362161Z" level=info msg="StopPodSandbox for \"1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2\"" Apr 13 20:10:19.625589 containerd[1472]: time="2026-04-13T20:10:19.625563831Z" level=info msg="Ensure that sandbox 1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2 in task-service has been cleanup successfully" Apr 13 20:10:19.626178 kubelet[2578]: I0413 20:10:19.626097 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" Apr 13 20:10:19.626974 containerd[1472]: time="2026-04-13T20:10:19.626550156Z" level=info msg="StopPodSandbox for \"45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c\"" Apr 13 20:10:19.626974 containerd[1472]: time="2026-04-13T20:10:19.626714746Z" level=info msg="Ensure that sandbox 45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c in task-service has been cleanup successfully" Apr 13 20:10:19.633236 kubelet[2578]: I0413 20:10:19.633213 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" Apr 13 20:10:19.635618 containerd[1472]: time="2026-04-13T20:10:19.635590427Z" level=info msg="StopPodSandbox for \"35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1\"" Apr 13 20:10:19.635784 containerd[1472]: 
time="2026-04-13T20:10:19.635762056Z" level=info msg="Ensure that sandbox 35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1 in task-service has been cleanup successfully" Apr 13 20:10:19.644172 kubelet[2578]: I0413 20:10:19.644128 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" Apr 13 20:10:19.650372 containerd[1472]: time="2026-04-13T20:10:19.648924428Z" level=info msg="StopPodSandbox for \"3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578\"" Apr 13 20:10:19.650372 containerd[1472]: time="2026-04-13T20:10:19.649119898Z" level=info msg="Ensure that sandbox 3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578 in task-service has been cleanup successfully" Apr 13 20:10:19.652282 kubelet[2578]: I0413 20:10:19.652023 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8sv8w" podStartSLOduration=2.998972142 podStartE2EDuration="11.652007375s" podCreationTimestamp="2026-04-13 20:10:08 +0000 UTC" firstStartedPulling="2026-04-13 20:10:08.738589675 +0000 UTC m=+18.363124467" lastFinishedPulling="2026-04-13 20:10:17.391624898 +0000 UTC m=+27.016159700" observedRunningTime="2026-04-13 20:10:19.651007849 +0000 UTC m=+29.275542641" watchObservedRunningTime="2026-04-13 20:10:19.652007375 +0000 UTC m=+29.276542187" Apr 13 20:10:19.653921 kubelet[2578]: I0413 20:10:19.653877 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" Apr 13 20:10:19.656205 containerd[1472]: time="2026-04-13T20:10:19.656180427Z" level=info msg="StopPodSandbox for \"215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30\"" Apr 13 20:10:19.656734 containerd[1472]: time="2026-04-13T20:10:19.656680614Z" level=info msg="Ensure that sandbox 215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30 in task-service has been cleanup successfully" Apr 13 20:10:19.664549 kubelet[2578]: I0413 20:10:19.664388 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" Apr 13 20:10:19.667821 containerd[1472]: time="2026-04-13T20:10:19.667334538Z" level=info msg="StopPodSandbox for \"8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d\"" Apr 13 20:10:19.667821 containerd[1472]: time="2026-04-13T20:10:19.667549717Z" level=info msg="Ensure that sandbox 8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d in task-service has been cleanup successfully" Apr 13 20:10:19.670409 kubelet[2578]: I0413 20:10:19.670390 2578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" Apr 13 20:10:19.674982 containerd[1472]: time="2026-04-13T20:10:19.674856765Z" level=info msg="StopPodSandbox for \"2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6\"" Apr 13 20:10:19.677307 containerd[1472]: time="2026-04-13T20:10:19.676807146Z" level=info msg="Ensure that sandbox 2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6 in task-service has been cleanup successfully" Apr 13 20:10:20.001372 containerd[1472]: 2026-04-13 20:10:19.813 [INFO][3748] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" Apr 13 20:10:20.001372 containerd[1472]: 2026-04-13 
20:10:19.814 [INFO][3748] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" iface="eth0" netns="/var/run/netns/cni-e8d239b0-d6c7-d735-9367-726c9dbcb721" Apr 13 20:10:20.001372 containerd[1472]: 2026-04-13 20:10:19.814 [INFO][3748] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" iface="eth0" netns="/var/run/netns/cni-e8d239b0-d6c7-d735-9367-726c9dbcb721" Apr 13 20:10:20.001372 containerd[1472]: 2026-04-13 20:10:19.814 [INFO][3748] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" iface="eth0" netns="/var/run/netns/cni-e8d239b0-d6c7-d735-9367-726c9dbcb721" Apr 13 20:10:20.001372 containerd[1472]: 2026-04-13 20:10:19.814 [INFO][3748] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" Apr 13 20:10:20.001372 containerd[1472]: 2026-04-13 20:10:19.814 [INFO][3748] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" Apr 13 20:10:20.001372 containerd[1472]: 2026-04-13 20:10:19.948 [INFO][3802] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" HandleID="k8s-pod-network.35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" Workload="172--239--193--191-k8s-csi--node--driver--4mzgf-eth0" Apr 13 20:10:20.001372 containerd[1472]: 2026-04-13 20:10:19.949 [INFO][3802] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:20.001372 containerd[1472]: 2026-04-13 20:10:19.949 [INFO][3802] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:20.001372 containerd[1472]: 2026-04-13 20:10:19.964 [WARNING][3802] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" HandleID="k8s-pod-network.35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" Workload="172--239--193--191-k8s-csi--node--driver--4mzgf-eth0" Apr 13 20:10:20.001372 containerd[1472]: 2026-04-13 20:10:19.965 [INFO][3802] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" HandleID="k8s-pod-network.35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" Workload="172--239--193--191-k8s-csi--node--driver--4mzgf-eth0" Apr 13 20:10:20.001372 containerd[1472]: 2026-04-13 20:10:19.972 [INFO][3802] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:20.001372 containerd[1472]: 2026-04-13 20:10:19.997 [INFO][3748] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" Apr 13 20:10:20.002006 containerd[1472]: time="2026-04-13T20:10:20.001981465Z" level=info msg="TearDown network for sandbox \"35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1\" successfully" Apr 13 20:10:20.002109 containerd[1472]: time="2026-04-13T20:10:20.002080074Z" level=info msg="StopPodSandbox for \"35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1\" returns successfully" Apr 13 20:10:20.006326 systemd[1]: run-netns-cni\x2de8d239b0\x2dd6c7\x2dd735\x2d9367\x2d726c9dbcb721.mount: Deactivated successfully. Apr 13 20:10:20.013887 containerd[1472]: time="2026-04-13T20:10:20.013594467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4mzgf,Uid:b083f9b4-7da6-4a64-b37b-aa5d508c2e7f,Namespace:calico-system,Attempt:1,}" Apr 13 20:10:20.040694 containerd[1472]: 2026-04-13 20:10:19.816 [INFO][3788] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" Apr 13 20:10:20.040694 containerd[1472]: 2026-04-13 20:10:19.816 [INFO][3788] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" iface="eth0" netns="/var/run/netns/cni-11ab330c-7fed-b29b-de47-6c00027d4592" Apr 13 20:10:20.040694 containerd[1472]: 2026-04-13 20:10:19.817 [INFO][3788] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" iface="eth0" netns="/var/run/netns/cni-11ab330c-7fed-b29b-de47-6c00027d4592" Apr 13 20:10:20.040694 containerd[1472]: 2026-04-13 20:10:19.817 [INFO][3788] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" iface="eth0" netns="/var/run/netns/cni-11ab330c-7fed-b29b-de47-6c00027d4592" Apr 13 20:10:20.040694 containerd[1472]: 2026-04-13 20:10:19.817 [INFO][3788] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" Apr 13 20:10:20.040694 containerd[1472]: 2026-04-13 20:10:19.817 [INFO][3788] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" Apr 13 20:10:20.040694 containerd[1472]: 2026-04-13 20:10:19.951 [INFO][3804] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" HandleID="k8s-pod-network.2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" Workload="172--239--193--191-k8s-coredns--66bc5c9577--glg4w-eth0" Apr 13 20:10:20.040694 containerd[1472]: 2026-04-13 20:10:19.951 [INFO][3804] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:20.040694 containerd[1472]: 2026-04-13 20:10:19.981 [INFO][3804] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:20.040694 containerd[1472]: 2026-04-13 20:10:20.000 [WARNING][3804] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" HandleID="k8s-pod-network.2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" Workload="172--239--193--191-k8s-coredns--66bc5c9577--glg4w-eth0" Apr 13 20:10:20.040694 containerd[1472]: 2026-04-13 20:10:20.000 [INFO][3804] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" HandleID="k8s-pod-network.2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" Workload="172--239--193--191-k8s-coredns--66bc5c9577--glg4w-eth0" Apr 13 20:10:20.040694 containerd[1472]: 2026-04-13 20:10:20.005 [INFO][3804] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:20.040694 containerd[1472]: 2026-04-13 20:10:20.022 [INFO][3788] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" Apr 13 20:10:20.043595 containerd[1472]: time="2026-04-13T20:10:20.041784512Z" level=info msg="TearDown network for sandbox \"2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6\" successfully" Apr 13 20:10:20.043595 containerd[1472]: time="2026-04-13T20:10:20.041817862Z" level=info msg="StopPodSandbox for \"2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6\" returns successfully" Apr 13 20:10:20.044819 kubelet[2578]: E0413 20:10:20.044801 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:10:20.049329 systemd[1]: run-netns-cni\x2d11ab330c\x2d7fed\x2db29b\x2dde47\x2d6c00027d4592.mount: Deactivated successfully. Apr 13 20:10:20.052727 containerd[1472]: time="2026-04-13T20:10:20.052584567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-glg4w,Uid:49c0b7cf-67f2-43e0-b1b8-972c29e78e65,Namespace:kube-system,Attempt:1,}" Apr 13 20:10:20.083915 containerd[1472]: 2026-04-13 20:10:19.931 [INFO][3776] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" Apr 13 20:10:20.083915 containerd[1472]: 2026-04-13 20:10:19.931 [INFO][3776] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" iface="eth0" netns="/var/run/netns/cni-e3f6193e-32d2-525f-dcda-44120b0c5680" Apr 13 20:10:20.083915 containerd[1472]: 2026-04-13 20:10:19.932 [INFO][3776] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" iface="eth0" netns="/var/run/netns/cni-e3f6193e-32d2-525f-dcda-44120b0c5680" Apr 13 20:10:20.083915 containerd[1472]: 2026-04-13 20:10:19.933 [INFO][3776] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" iface="eth0" netns="/var/run/netns/cni-e3f6193e-32d2-525f-dcda-44120b0c5680" Apr 13 20:10:20.083915 containerd[1472]: 2026-04-13 20:10:19.933 [INFO][3776] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" Apr 13 20:10:20.083915 containerd[1472]: 2026-04-13 20:10:19.933 [INFO][3776] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" Apr 13 20:10:20.083915 containerd[1472]: 2026-04-13 20:10:20.011 [INFO][3839] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" HandleID="k8s-pod-network.3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" Workload="172--239--193--191-k8s-calico--kube--controllers--7c7c48779c--gk7jr-eth0" Apr 13 20:10:20.083915 containerd[1472]: 2026-04-13 20:10:20.011 [INFO][3839] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:20.083915 containerd[1472]: 2026-04-13 20:10:20.011 [INFO][3839] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:20.083915 containerd[1472]: 2026-04-13 20:10:20.029 [WARNING][3839] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" HandleID="k8s-pod-network.3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" Workload="172--239--193--191-k8s-calico--kube--controllers--7c7c48779c--gk7jr-eth0" Apr 13 20:10:20.083915 containerd[1472]: 2026-04-13 20:10:20.029 [INFO][3839] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" HandleID="k8s-pod-network.3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" Workload="172--239--193--191-k8s-calico--kube--controllers--7c7c48779c--gk7jr-eth0" Apr 13 20:10:20.083915 containerd[1472]: 2026-04-13 20:10:20.034 [INFO][3839] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:20.083915 containerd[1472]: 2026-04-13 20:10:20.051 [INFO][3776] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" Apr 13 20:10:20.083915 containerd[1472]: time="2026-04-13T20:10:20.083779500Z" level=info msg="TearDown network for sandbox \"3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578\" successfully" Apr 13 20:10:20.083915 containerd[1472]: time="2026-04-13T20:10:20.083801809Z" level=info msg="StopPodSandbox for \"3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578\" returns successfully" Apr 13 20:10:20.089874 containerd[1472]: time="2026-04-13T20:10:20.089595496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c7c48779c-gk7jr,Uid:ab7d1268-0475-4d90-b5c2-1c8713e6aafb,Namespace:calico-system,Attempt:1,}" Apr 13 20:10:20.115469 containerd[1472]: 2026-04-13 20:10:19.869 [INFO][3731] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" Apr 13 20:10:20.115469 containerd[1472]: 2026-04-13 20:10:19.869 [INFO][3731] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" iface="eth0" netns="/var/run/netns/cni-9a64de01-b952-6ef0-3182-4d8ff6374ac3" Apr 13 20:10:20.115469 containerd[1472]: 2026-04-13 20:10:19.870 [INFO][3731] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" iface="eth0" netns="/var/run/netns/cni-9a64de01-b952-6ef0-3182-4d8ff6374ac3" Apr 13 20:10:20.115469 containerd[1472]: 2026-04-13 20:10:19.875 [INFO][3731] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" iface="eth0" netns="/var/run/netns/cni-9a64de01-b952-6ef0-3182-4d8ff6374ac3" Apr 13 20:10:20.115469 containerd[1472]: 2026-04-13 20:10:19.875 [INFO][3731] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" Apr 13 20:10:20.115469 containerd[1472]: 2026-04-13 20:10:19.875 [INFO][3731] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" Apr 13 20:10:20.115469 containerd[1472]: 2026-04-13 20:10:20.039 [INFO][3823] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" HandleID="k8s-pod-network.45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" Workload="172--239--193--191-k8s-coredns--66bc5c9577--vdbdt-eth0" Apr 13 20:10:20.115469 containerd[1472]: 2026-04-13 20:10:20.039 [INFO][3823] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:20.115469 containerd[1472]: 2026-04-13 20:10:20.039 [INFO][3823] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:20.115469 containerd[1472]: 2026-04-13 20:10:20.068 [WARNING][3823] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" HandleID="k8s-pod-network.45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" Workload="172--239--193--191-k8s-coredns--66bc5c9577--vdbdt-eth0" Apr 13 20:10:20.115469 containerd[1472]: 2026-04-13 20:10:20.068 [INFO][3823] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" HandleID="k8s-pod-network.45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" Workload="172--239--193--191-k8s-coredns--66bc5c9577--vdbdt-eth0" Apr 13 20:10:20.115469 containerd[1472]: 2026-04-13 20:10:20.072 [INFO][3823] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:20.115469 containerd[1472]: 2026-04-13 20:10:20.099 [INFO][3731] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" Apr 13 20:10:20.118257 containerd[1472]: time="2026-04-13T20:10:20.118153619Z" level=info msg="TearDown network for sandbox \"45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c\" successfully" Apr 13 20:10:20.118627 containerd[1472]: time="2026-04-13T20:10:20.118512917Z" level=info msg="StopPodSandbox for \"45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c\" returns successfully" Apr 13 20:10:20.120709 containerd[1472]: 2026-04-13 20:10:19.859 [INFO][3719] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" Apr 13 20:10:20.120709 containerd[1472]: 2026-04-13 20:10:19.867 [INFO][3719] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" iface="eth0" netns="/var/run/netns/cni-6e4511a5-18fa-2eff-b04c-0469729b032d" Apr 13 20:10:20.120709 containerd[1472]: 2026-04-13 20:10:19.868 [INFO][3719] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" iface="eth0" netns="/var/run/netns/cni-6e4511a5-18fa-2eff-b04c-0469729b032d" Apr 13 20:10:20.120709 containerd[1472]: 2026-04-13 20:10:19.873 [INFO][3719] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" iface="eth0" netns="/var/run/netns/cni-6e4511a5-18fa-2eff-b04c-0469729b032d" Apr 13 20:10:20.120709 containerd[1472]: 2026-04-13 20:10:19.873 [INFO][3719] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" Apr 13 20:10:20.120709 containerd[1472]: 2026-04-13 20:10:19.873 [INFO][3719] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" Apr 13 20:10:20.120709 containerd[1472]: 2026-04-13 20:10:20.047 [INFO][3822] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" HandleID="k8s-pod-network.ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" Workload="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--jpz9w-eth0" Apr 13 20:10:20.120709 containerd[1472]: 2026-04-13 20:10:20.047 [INFO][3822] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:20.120709 containerd[1472]: 2026-04-13 20:10:20.080 [INFO][3822] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:20.120709 containerd[1472]: 2026-04-13 20:10:20.099 [WARNING][3822] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" HandleID="k8s-pod-network.ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" Workload="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--jpz9w-eth0" Apr 13 20:10:20.120709 containerd[1472]: 2026-04-13 20:10:20.099 [INFO][3822] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" HandleID="k8s-pod-network.ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" Workload="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--jpz9w-eth0" Apr 13 20:10:20.120709 containerd[1472]: 2026-04-13 20:10:20.102 [INFO][3822] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:20.120709 containerd[1472]: 2026-04-13 20:10:20.111 [INFO][3719] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" Apr 13 20:10:20.120995 kubelet[2578]: E0413 20:10:20.120255 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:10:20.122577 containerd[1472]: time="2026-04-13T20:10:20.122070813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vdbdt,Uid:5778e305-4fb3-40cf-9eb5-2894d58c2771,Namespace:kube-system,Attempt:1,}" Apr 13 20:10:20.124995 containerd[1472]: time="2026-04-13T20:10:20.124972211Z" level=info msg="TearDown network for sandbox \"ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c\" successfully" Apr 13 20:10:20.125335 containerd[1472]: time="2026-04-13T20:10:20.125318779Z" level=info msg="StopPodSandbox for \"ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c\" returns successfully" Apr 13 20:10:20.128284 containerd[1472]: time="2026-04-13T20:10:20.128262667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df68c9d4f-jpz9w,Uid:41a53f62-b7f4-40f3-882b-8cc9702c76d5,Namespace:calico-system,Attempt:1,}" Apr 13 20:10:20.184501 containerd[1472]: 2026-04-13 20:10:19.930 [INFO][3772] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" Apr 13 20:10:20.184501 containerd[1472]: 2026-04-13 20:10:19.930 [INFO][3772] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" iface="eth0" netns="/var/run/netns/cni-b7c52e44-d21f-2d1e-2b61-01e888585c03" Apr 13 20:10:20.184501 containerd[1472]: 2026-04-13 20:10:19.931 [INFO][3772] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" iface="eth0" netns="/var/run/netns/cni-b7c52e44-d21f-2d1e-2b61-01e888585c03" Apr 13 20:10:20.184501 containerd[1472]: 2026-04-13 20:10:19.931 [INFO][3772] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" iface="eth0" netns="/var/run/netns/cni-b7c52e44-d21f-2d1e-2b61-01e888585c03" Apr 13 20:10:20.184501 containerd[1472]: 2026-04-13 20:10:19.931 [INFO][3772] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" Apr 13 20:10:20.184501 containerd[1472]: 2026-04-13 20:10:19.931 [INFO][3772] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" Apr 13 20:10:20.184501 containerd[1472]: 2026-04-13 20:10:20.131 [INFO][3843] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" HandleID="k8s-pod-network.215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" Workload="172--239--193--191-k8s-goldmane--cccfbd5cf--dvmjq-eth0" Apr 13 20:10:20.184501 containerd[1472]: 2026-04-13 20:10:20.131 [INFO][3843] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:20.184501 containerd[1472]: 2026-04-13 20:10:20.131 [INFO][3843] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:20.184501 containerd[1472]: 2026-04-13 20:10:20.144 [WARNING][3843] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" HandleID="k8s-pod-network.215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" Workload="172--239--193--191-k8s-goldmane--cccfbd5cf--dvmjq-eth0" Apr 13 20:10:20.184501 containerd[1472]: 2026-04-13 20:10:20.144 [INFO][3843] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" HandleID="k8s-pod-network.215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" Workload="172--239--193--191-k8s-goldmane--cccfbd5cf--dvmjq-eth0" Apr 13 20:10:20.184501 containerd[1472]: 2026-04-13 20:10:20.146 [INFO][3843] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:20.184501 containerd[1472]: 2026-04-13 20:10:20.165 [INFO][3772] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" Apr 13 20:10:20.185060 containerd[1472]: time="2026-04-13T20:10:20.185035294Z" level=info msg="TearDown network for sandbox \"215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30\" successfully" Apr 13 20:10:20.185115 containerd[1472]: time="2026-04-13T20:10:20.185102184Z" level=info msg="StopPodSandbox for \"215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30\" returns successfully" Apr 13 20:10:20.189002 containerd[1472]: time="2026-04-13T20:10:20.188979368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-dvmjq,Uid:3d4eb2d5-db0e-4d66-8113-637f0e2427c6,Namespace:calico-system,Attempt:1,}" Apr 13 20:10:20.197816 containerd[1472]: 2026-04-13 20:10:19.965 [INFO][3785] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" Apr 13 20:10:20.197816 containerd[1472]: 2026-04-13 20:10:19.966 [INFO][3785] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" iface="eth0" netns="/var/run/netns/cni-3240daa9-1757-c994-8978-7c066e2aa057" Apr 13 20:10:20.197816 containerd[1472]: 2026-04-13 20:10:19.967 [INFO][3785] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" iface="eth0" netns="/var/run/netns/cni-3240daa9-1757-c994-8978-7c066e2aa057" Apr 13 20:10:20.197816 containerd[1472]: 2026-04-13 20:10:19.968 [INFO][3785] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" iface="eth0" netns="/var/run/netns/cni-3240daa9-1757-c994-8978-7c066e2aa057" Apr 13 20:10:20.197816 containerd[1472]: 2026-04-13 20:10:19.969 [INFO][3785] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" Apr 13 20:10:20.197816 containerd[1472]: 2026-04-13 20:10:19.969 [INFO][3785] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" Apr 13 20:10:20.197816 containerd[1472]: 2026-04-13 20:10:20.142 [INFO][3852] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" HandleID="k8s-pod-network.8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" Workload="172--239--193--191-k8s-whisker--75b5998949--fr9s5-eth0" Apr 13 20:10:20.197816 containerd[1472]: 2026-04-13 20:10:20.142 [INFO][3852] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:20.197816 containerd[1472]: 2026-04-13 20:10:20.146 [INFO][3852] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:20.197816 containerd[1472]: 2026-04-13 20:10:20.160 [WARNING][3852] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" HandleID="k8s-pod-network.8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" Workload="172--239--193--191-k8s-whisker--75b5998949--fr9s5-eth0" Apr 13 20:10:20.197816 containerd[1472]: 2026-04-13 20:10:20.160 [INFO][3852] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" HandleID="k8s-pod-network.8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" Workload="172--239--193--191-k8s-whisker--75b5998949--fr9s5-eth0" Apr 13 20:10:20.197816 containerd[1472]: 2026-04-13 20:10:20.164 [INFO][3852] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:20.197816 containerd[1472]: 2026-04-13 20:10:20.184 [INFO][3785] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" Apr 13 20:10:20.198629 containerd[1472]: time="2026-04-13T20:10:20.198297510Z" level=info msg="TearDown network for sandbox \"8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d\" successfully" Apr 13 20:10:20.198629 containerd[1472]: time="2026-04-13T20:10:20.198321370Z" level=info msg="StopPodSandbox for \"8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d\" returns successfully" Apr 13 20:10:20.225369 containerd[1472]: 2026-04-13 20:10:19.909 [INFO][3741] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" Apr 13 20:10:20.225369 containerd[1472]: 2026-04-13 20:10:19.909 [INFO][3741] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" iface="eth0" netns="/var/run/netns/cni-8c95677d-3c93-f6b7-e36f-d5f61faf22a8" Apr 13 20:10:20.225369 containerd[1472]: 2026-04-13 20:10:19.910 [INFO][3741] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" iface="eth0" netns="/var/run/netns/cni-8c95677d-3c93-f6b7-e36f-d5f61faf22a8" Apr 13 20:10:20.225369 containerd[1472]: 2026-04-13 20:10:19.911 [INFO][3741] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" iface="eth0" netns="/var/run/netns/cni-8c95677d-3c93-f6b7-e36f-d5f61faf22a8" Apr 13 20:10:20.225369 containerd[1472]: 2026-04-13 20:10:19.911 [INFO][3741] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" Apr 13 20:10:20.225369 containerd[1472]: 2026-04-13 20:10:19.911 [INFO][3741] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" Apr 13 20:10:20.225369 containerd[1472]: 2026-04-13 20:10:20.131 [INFO][3832] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" HandleID="k8s-pod-network.1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" Workload="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--lx9f8-eth0" Apr 13 20:10:20.225369 containerd[1472]: 2026-04-13 20:10:20.145 [INFO][3832] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:20.225369 containerd[1472]: 2026-04-13 20:10:20.169 [INFO][3832] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:20.225369 containerd[1472]: 2026-04-13 20:10:20.183 [WARNING][3832] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" HandleID="k8s-pod-network.1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" Workload="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--lx9f8-eth0" Apr 13 20:10:20.225369 containerd[1472]: 2026-04-13 20:10:20.183 [INFO][3832] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" HandleID="k8s-pod-network.1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" Workload="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--lx9f8-eth0" Apr 13 20:10:20.225369 containerd[1472]: 2026-04-13 20:10:20.187 [INFO][3832] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:20.225369 containerd[1472]: 2026-04-13 20:10:20.200 [INFO][3741] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" Apr 13 20:10:20.226268 containerd[1472]: time="2026-04-13T20:10:20.225961777Z" level=info msg="TearDown network for sandbox \"1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2\" successfully" Apr 13 20:10:20.226268 containerd[1472]: time="2026-04-13T20:10:20.226004926Z" level=info msg="StopPodSandbox for \"1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2\" returns successfully" Apr 13 20:10:20.228400 containerd[1472]: time="2026-04-13T20:10:20.228371167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df68c9d4f-lx9f8,Uid:46975966-bb29-4145-9c1a-fe60aed66e16,Namespace:calico-system,Attempt:1,}" Apr 13 20:10:20.291826 kubelet[2578]: I0413 20:10:20.287844 2578 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4gdb\" (UniqueName: \"kubernetes.io/projected/92586f17-ea3a-4af3-aa4e-c720d02f8e41-kube-api-access-g4gdb\") pod \"92586f17-ea3a-4af3-aa4e-c720d02f8e41\" (UID: \"92586f17-ea3a-4af3-aa4e-c720d02f8e41\") " Apr 13 20:10:20.291826 kubelet[2578]: I0413 20:10:20.287896 2578 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92586f17-ea3a-4af3-aa4e-c720d02f8e41-whisker-ca-bundle\") pod \"92586f17-ea3a-4af3-aa4e-c720d02f8e41\" (UID: \"92586f17-ea3a-4af3-aa4e-c720d02f8e41\") " Apr 13 20:10:20.291826 kubelet[2578]: I0413 20:10:20.287918 2578 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/92586f17-ea3a-4af3-aa4e-c720d02f8e41-whisker-backend-key-pair\") pod \"92586f17-ea3a-4af3-aa4e-c720d02f8e41\" (UID: \"92586f17-ea3a-4af3-aa4e-c720d02f8e41\") " Apr 13 20:10:20.291826 kubelet[2578]: I0413 20:10:20.287952 2578 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/92586f17-ea3a-4af3-aa4e-c720d02f8e41-nginx-config\") pod \"92586f17-ea3a-4af3-aa4e-c720d02f8e41\" (UID: \"92586f17-ea3a-4af3-aa4e-c720d02f8e41\") " Apr 13 20:10:20.291826 kubelet[2578]: I0413 20:10:20.288512 2578 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92586f17-ea3a-4af3-aa4e-c720d02f8e41-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "92586f17-ea3a-4af3-aa4e-c720d02f8e41" (UID: "92586f17-ea3a-4af3-aa4e-c720d02f8e41"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 20:10:20.296782 kubelet[2578]: I0413 20:10:20.296750 2578 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92586f17-ea3a-4af3-aa4e-c720d02f8e41-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "92586f17-ea3a-4af3-aa4e-c720d02f8e41" (UID: "92586f17-ea3a-4af3-aa4e-c720d02f8e41"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 20:10:20.298505 kubelet[2578]: I0413 20:10:20.298475 2578 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92586f17-ea3a-4af3-aa4e-c720d02f8e41-kube-api-access-g4gdb" (OuterVolumeSpecName: "kube-api-access-g4gdb") pod "92586f17-ea3a-4af3-aa4e-c720d02f8e41" (UID: "92586f17-ea3a-4af3-aa4e-c720d02f8e41"). InnerVolumeSpecName "kube-api-access-g4gdb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 20:10:20.309816 kubelet[2578]: I0413 20:10:20.309779 2578 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92586f17-ea3a-4af3-aa4e-c720d02f8e41-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "92586f17-ea3a-4af3-aa4e-c720d02f8e41" (UID: "92586f17-ea3a-4af3-aa4e-c720d02f8e41"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 13 20:10:20.389508 kubelet[2578]: I0413 20:10:20.389268 2578 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/92586f17-ea3a-4af3-aa4e-c720d02f8e41-nginx-config\") on node \"172-239-193-191\" DevicePath \"\"" Apr 13 20:10:20.389508 kubelet[2578]: I0413 20:10:20.389301 2578 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g4gdb\" (UniqueName: \"kubernetes.io/projected/92586f17-ea3a-4af3-aa4e-c720d02f8e41-kube-api-access-g4gdb\") on node \"172-239-193-191\" DevicePath \"\"" Apr 13 20:10:20.389508 kubelet[2578]: I0413 20:10:20.389313 2578 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92586f17-ea3a-4af3-aa4e-c720d02f8e41-whisker-ca-bundle\") on node \"172-239-193-191\" DevicePath \"\"" Apr 13 20:10:20.389508 kubelet[2578]: I0413 20:10:20.389324 2578 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/92586f17-ea3a-4af3-aa4e-c720d02f8e41-whisker-backend-key-pair\") on node \"172-239-193-191\" DevicePath \"\"" Apr 13 20:10:20.422124 systemd[1]: run-netns-cni\x2d9a64de01\x2db952\x2d6ef0\x2d3182\x2d4d8ff6374ac3.mount: Deactivated successfully. Apr 13 20:10:20.422227 systemd[1]: run-netns-cni\x2d8c95677d\x2d3c93\x2df6b7\x2de36f\x2dd5f61faf22a8.mount: Deactivated successfully. Apr 13 20:10:20.422298 systemd[1]: run-netns-cni\x2db7c52e44\x2dd21f\x2d2d1e\x2d2b61\x2d01e888585c03.mount: Deactivated successfully. Apr 13 20:10:20.422369 systemd[1]: run-netns-cni\x2d6e4511a5\x2d18fa\x2d2eff\x2db04c\x2d0469729b032d.mount: Deactivated successfully. Apr 13 20:10:20.423145 systemd-networkd[1382]: cali5c1ba5c7073: Link UP Apr 13 20:10:20.425265 systemd[1]: run-netns-cni\x2de3f6193e\x2d32d2\x2d525f\x2ddcda\x2d44120b0c5680.mount: Deactivated successfully. Apr 13 20:10:20.425355 systemd[1]: run-netns-cni\x2d3240daa9\x2d1757\x2dc994\x2d8978\x2d7c066e2aa057.mount: Deactivated successfully. 
Apr 13 20:10:20.425705 systemd[1]: var-lib-kubelet-pods-92586f17\x2dea3a\x2d4af3\x2daa4e\x2dc720d02f8e41-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg4gdb.mount: Deactivated successfully.
Apr 13 20:10:20.425782 systemd[1]: var-lib-kubelet-pods-92586f17\x2dea3a\x2d4af3\x2daa4e\x2dc720d02f8e41-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Apr 13 20:10:20.431587 systemd-networkd[1382]: cali5c1ba5c7073: Gained carrier
Apr 13 20:10:20.518083 systemd[1]: Removed slice kubepods-besteffort-pod92586f17_ea3a_4af3_aa4e_c720d02f8e41.slice - libcontainer container kubepods-besteffort-pod92586f17_ea3a_4af3_aa4e_c720d02f8e41.slice.
Apr 13 20:10:20.548798 containerd[1472]: 2026-04-13 20:10:20.192 [ERROR][3868] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 13 20:10:20.548798 containerd[1472]: 2026-04-13 20:10:20.221 [INFO][3868] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--191-k8s-coredns--66bc5c9577--glg4w-eth0 coredns-66bc5c9577- kube-system 49c0b7cf-67f2-43e0-b1b8-972c29e78e65 930 0 2026-04-13 20:09:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-193-191 coredns-66bc5c9577-glg4w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5c1ba5c7073 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32" Namespace="kube-system" Pod="coredns-66bc5c9577-glg4w" WorkloadEndpoint="172--239--193--191-k8s-coredns--66bc5c9577--glg4w-"
Apr 13 20:10:20.548798 containerd[1472]: 2026-04-13 20:10:20.221 [INFO][3868] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32" Namespace="kube-system" Pod="coredns-66bc5c9577-glg4w" WorkloadEndpoint="172--239--193--191-k8s-coredns--66bc5c9577--glg4w-eth0"
Apr 13 20:10:20.548798 containerd[1472]: 2026-04-13 20:10:20.318 [INFO][3932] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32" HandleID="k8s-pod-network.f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32" Workload="172--239--193--191-k8s-coredns--66bc5c9577--glg4w-eth0"
Apr 13 20:10:20.548798 containerd[1472]: 2026-04-13 20:10:20.333 [INFO][3932] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32" HandleID="k8s-pod-network.f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32" Workload="172--239--193--191-k8s-coredns--66bc5c9577--glg4w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fea0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-193-191", "pod":"coredns-66bc5c9577-glg4w", "timestamp":"2026-04-13 20:10:20.318624617 +0000 UTC"}, Hostname:"172-239-193-191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003b2dc0)}
Apr 13 20:10:20.548798 containerd[1472]: 2026-04-13 20:10:20.333 [INFO][3932] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:10:20.548798 containerd[1472]: 2026-04-13 20:10:20.333 [INFO][3932] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:10:20.548798 containerd[1472]: 2026-04-13 20:10:20.333 [INFO][3932] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-191'
Apr 13 20:10:20.548798 containerd[1472]: 2026-04-13 20:10:20.340 [INFO][3932] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32" host="172-239-193-191"
Apr 13 20:10:20.548798 containerd[1472]: 2026-04-13 20:10:20.350 [INFO][3932] ipam/ipam.go 409: Looking up existing affinities for host host="172-239-193-191"
Apr 13 20:10:20.548798 containerd[1472]: 2026-04-13 20:10:20.356 [INFO][3932] ipam/ipam.go 526: Trying affinity for 192.168.91.128/26 host="172-239-193-191"
Apr 13 20:10:20.548798 containerd[1472]: 2026-04-13 20:10:20.358 [INFO][3932] ipam/ipam.go 160: Attempting to load block cidr=192.168.91.128/26 host="172-239-193-191"
Apr 13 20:10:20.548798 containerd[1472]: 2026-04-13 20:10:20.362 [INFO][3932] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="172-239-193-191"
Apr 13 20:10:20.548798 containerd[1472]: 2026-04-13 20:10:20.362 [INFO][3932] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32" host="172-239-193-191"
Apr 13 20:10:20.548798 containerd[1472]: 2026-04-13 20:10:20.364 [INFO][3932] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32
Apr 13 20:10:20.548798 containerd[1472]: 2026-04-13 20:10:20.371 [INFO][3932] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32" host="172-239-193-191"
Apr 13 20:10:20.548798 containerd[1472]: 2026-04-13 20:10:20.377 [INFO][3932] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.91.129/26] block=192.168.91.128/26 handle="k8s-pod-network.f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32" host="172-239-193-191"
Apr 13 20:10:20.548798 containerd[1472]: 2026-04-13 20:10:20.378 [INFO][3932] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.91.129/26] handle="k8s-pod-network.f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32" host="172-239-193-191"
Apr 13 20:10:20.548798 containerd[1472]: 2026-04-13 20:10:20.379 [INFO][3932] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:10:20.548798 containerd[1472]: 2026-04-13 20:10:20.379 [INFO][3932] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.91.129/26] IPv6=[] ContainerID="f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32" HandleID="k8s-pod-network.f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32" Workload="172--239--193--191-k8s-coredns--66bc5c9577--glg4w-eth0"
Apr 13 20:10:20.549335 containerd[1472]: 2026-04-13 20:10:20.388 [INFO][3868] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32" Namespace="kube-system" Pod="coredns-66bc5c9577-glg4w" WorkloadEndpoint="172--239--193--191-k8s-coredns--66bc5c9577--glg4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-coredns--66bc5c9577--glg4w-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"49c0b7cf-67f2-43e0-b1b8-972c29e78e65", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"", Pod:"coredns-66bc5c9577-glg4w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c1ba5c7073", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 13 20:10:20.549335 containerd[1472]: 2026-04-13 20:10:20.388 [INFO][3868] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.129/32] ContainerID="f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32" Namespace="kube-system" Pod="coredns-66bc5c9577-glg4w" WorkloadEndpoint="172--239--193--191-k8s-coredns--66bc5c9577--glg4w-eth0"
Apr 13 20:10:20.549335 containerd[1472]: 2026-04-13 20:10:20.388 [INFO][3868] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c1ba5c7073 ContainerID="f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32" Namespace="kube-system" Pod="coredns-66bc5c9577-glg4w" WorkloadEndpoint="172--239--193--191-k8s-coredns--66bc5c9577--glg4w-eth0"
Apr 13 20:10:20.549335 containerd[1472]: 2026-04-13 20:10:20.438 [INFO][3868] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32" Namespace="kube-system" Pod="coredns-66bc5c9577-glg4w" WorkloadEndpoint="172--239--193--191-k8s-coredns--66bc5c9577--glg4w-eth0"
Apr 13 20:10:20.549335 containerd[1472]: 2026-04-13 20:10:20.450 [INFO][3868] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32" Namespace="kube-system" Pod="coredns-66bc5c9577-glg4w" WorkloadEndpoint="172--239--193--191-k8s-coredns--66bc5c9577--glg4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-coredns--66bc5c9577--glg4w-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"49c0b7cf-67f2-43e0-b1b8-972c29e78e65", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32", Pod:"coredns-66bc5c9577-glg4w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c1ba5c7073", MAC:"c2:d6:aa:3a:e3:58", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 13 20:10:20.549335 containerd[1472]: 2026-04-13 20:10:20.501 [INFO][3868] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32" Namespace="kube-system" Pod="coredns-66bc5c9577-glg4w" WorkloadEndpoint="172--239--193--191-k8s-coredns--66bc5c9577--glg4w-eth0"
Apr 13 20:10:20.636710 containerd[1472]: time="2026-04-13T20:10:20.636087735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:10:20.636710 containerd[1472]: time="2026-04-13T20:10:20.636288734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:10:20.636710 containerd[1472]: time="2026-04-13T20:10:20.636306394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:10:20.636710 containerd[1472]: time="2026-04-13T20:10:20.636495753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:10:20.676831 systemd-networkd[1382]: cali4e6b0479f2b: Link UP
Apr 13 20:10:20.679519 systemd-networkd[1382]: cali4e6b0479f2b: Gained carrier
Apr 13 20:10:20.691449 kubelet[2578]: I0413 20:10:20.690631 2578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 13 20:10:20.751010 systemd[1]: Started cri-containerd-f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32.scope - libcontainer container f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32.
Apr 13 20:10:20.763650 containerd[1472]: 2026-04-13 20:10:20.228 [ERROR][3898] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 13 20:10:20.763650 containerd[1472]: 2026-04-13 20:10:20.251 [INFO][3898] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--191-k8s-calico--apiserver--6df68c9d4f--jpz9w-eth0 calico-apiserver-6df68c9d4f- calico-system 41a53f62-b7f4-40f3-882b-8cc9702c76d5 932 0 2026-04-13 20:10:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6df68c9d4f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-193-191 calico-apiserver-6df68c9d4f-jpz9w eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali4e6b0479f2b [] [] }} ContainerID="7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3" Namespace="calico-system" Pod="calico-apiserver-6df68c9d4f-jpz9w" WorkloadEndpoint="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--jpz9w-"
Apr 13 20:10:20.763650 containerd[1472]: 2026-04-13 20:10:20.251 [INFO][3898] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3" Namespace="calico-system" Pod="calico-apiserver-6df68c9d4f-jpz9w" WorkloadEndpoint="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--jpz9w-eth0"
Apr 13 20:10:20.763650 containerd[1472]: 2026-04-13 20:10:20.336 [INFO][3946] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3" HandleID="k8s-pod-network.7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3" Workload="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--jpz9w-eth0"
Apr 13 20:10:20.763650 containerd[1472]: 2026-04-13 20:10:20.350 [INFO][3946] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3" HandleID="k8s-pod-network.7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3" Workload="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--jpz9w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122320), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-193-191", "pod":"calico-apiserver-6df68c9d4f-jpz9w", "timestamp":"2026-04-13 20:10:20.336908752 +0000 UTC"}, Hostname:"172-239-193-191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004d8420)}
Apr 13 20:10:20.763650 containerd[1472]: 2026-04-13 20:10:20.350 [INFO][3946] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:10:20.763650 containerd[1472]: 2026-04-13 20:10:20.378 [INFO][3946] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:10:20.763650 containerd[1472]: 2026-04-13 20:10:20.379 [INFO][3946] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-191'
Apr 13 20:10:20.763650 containerd[1472]: 2026-04-13 20:10:20.450 [INFO][3946] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3" host="172-239-193-191"
Apr 13 20:10:20.763650 containerd[1472]: 2026-04-13 20:10:20.474 [INFO][3946] ipam/ipam.go 409: Looking up existing affinities for host host="172-239-193-191"
Apr 13 20:10:20.763650 containerd[1472]: 2026-04-13 20:10:20.501 [INFO][3946] ipam/ipam.go 526: Trying affinity for 192.168.91.128/26 host="172-239-193-191"
Apr 13 20:10:20.763650 containerd[1472]: 2026-04-13 20:10:20.537 [INFO][3946] ipam/ipam.go 160: Attempting to load block cidr=192.168.91.128/26 host="172-239-193-191"
Apr 13 20:10:20.763650 containerd[1472]: 2026-04-13 20:10:20.560 [INFO][3946] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="172-239-193-191"
Apr 13 20:10:20.763650 containerd[1472]: 2026-04-13 20:10:20.561 [INFO][3946] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3" host="172-239-193-191"
Apr 13 20:10:20.763650 containerd[1472]: 2026-04-13 20:10:20.569 [INFO][3946] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3
Apr 13 20:10:20.763650 containerd[1472]: 2026-04-13 20:10:20.575 [INFO][3946] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3" host="172-239-193-191"
Apr 13 20:10:20.763650 containerd[1472]: 2026-04-13 20:10:20.597 [INFO][3946] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.91.130/26] block=192.168.91.128/26 handle="k8s-pod-network.7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3" host="172-239-193-191"
Apr 13 20:10:20.763650 containerd[1472]: 2026-04-13 20:10:20.597 [INFO][3946] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.91.130/26] handle="k8s-pod-network.7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3" host="172-239-193-191"
Apr 13 20:10:20.763650 containerd[1472]: 2026-04-13 20:10:20.598 [INFO][3946] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:10:20.763650 containerd[1472]: 2026-04-13 20:10:20.598 [INFO][3946] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.91.130/26] IPv6=[] ContainerID="7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3" HandleID="k8s-pod-network.7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3" Workload="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--jpz9w-eth0"
Apr 13 20:10:20.766267 containerd[1472]: 2026-04-13 20:10:20.654 [INFO][3898] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3" Namespace="calico-system" Pod="calico-apiserver-6df68c9d4f-jpz9w" WorkloadEndpoint="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--jpz9w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-calico--apiserver--6df68c9d4f--jpz9w-eth0", GenerateName:"calico-apiserver-6df68c9d4f-", Namespace:"calico-system", SelfLink:"", UID:"41a53f62-b7f4-40f3-882b-8cc9702c76d5", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df68c9d4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"", Pod:"calico-apiserver-6df68c9d4f-jpz9w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4e6b0479f2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 13 20:10:20.766267 containerd[1472]: 2026-04-13 20:10:20.654 [INFO][3898] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.130/32] ContainerID="7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3" Namespace="calico-system" Pod="calico-apiserver-6df68c9d4f-jpz9w" WorkloadEndpoint="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--jpz9w-eth0"
Apr 13 20:10:20.766267 containerd[1472]: 2026-04-13 20:10:20.654 [INFO][3898] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e6b0479f2b ContainerID="7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3" Namespace="calico-system" Pod="calico-apiserver-6df68c9d4f-jpz9w" WorkloadEndpoint="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--jpz9w-eth0"
Apr 13 20:10:20.766267 containerd[1472]: 2026-04-13 20:10:20.680 [INFO][3898] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3" Namespace="calico-system" Pod="calico-apiserver-6df68c9d4f-jpz9w" WorkloadEndpoint="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--jpz9w-eth0"
Apr 13 20:10:20.766267 containerd[1472]: 2026-04-13 20:10:20.683 [INFO][3898] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3" Namespace="calico-system" Pod="calico-apiserver-6df68c9d4f-jpz9w" WorkloadEndpoint="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--jpz9w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-calico--apiserver--6df68c9d4f--jpz9w-eth0", GenerateName:"calico-apiserver-6df68c9d4f-", Namespace:"calico-system", SelfLink:"", UID:"41a53f62-b7f4-40f3-882b-8cc9702c76d5", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df68c9d4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3", Pod:"calico-apiserver-6df68c9d4f-jpz9w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4e6b0479f2b", MAC:"86:1a:0c:df:48:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 13 20:10:20.766267 containerd[1472]: 2026-04-13 20:10:20.714 [INFO][3898] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3" Namespace="calico-system" Pod="calico-apiserver-6df68c9d4f-jpz9w" WorkloadEndpoint="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--jpz9w-eth0"
Apr 13 20:10:20.835833 systemd[1]: Created slice kubepods-besteffort-podb6e5d9c6_9877_40bb_9431_db7ead187c0a.slice - libcontainer container kubepods-besteffort-podb6e5d9c6_9877_40bb_9431_db7ead187c0a.slice.
Apr 13 20:10:20.895617 kubelet[2578]: I0413 20:10:20.895578 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6e5d9c6-9877-40bb-9431-db7ead187c0a-whisker-ca-bundle\") pod \"whisker-f7c875498-2d6k7\" (UID: \"b6e5d9c6-9877-40bb-9431-db7ead187c0a\") " pod="calico-system/whisker-f7c875498-2d6k7"
Apr 13 20:10:20.895849 kubelet[2578]: I0413 20:10:20.895832 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbtqx\" (UniqueName: \"kubernetes.io/projected/b6e5d9c6-9877-40bb-9431-db7ead187c0a-kube-api-access-xbtqx\") pod \"whisker-f7c875498-2d6k7\" (UID: \"b6e5d9c6-9877-40bb-9431-db7ead187c0a\") " pod="calico-system/whisker-f7c875498-2d6k7"
Apr 13 20:10:20.895955 kubelet[2578]: I0413 20:10:20.895941 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/b6e5d9c6-9877-40bb-9431-db7ead187c0a-nginx-config\") pod \"whisker-f7c875498-2d6k7\" (UID: \"b6e5d9c6-9877-40bb-9431-db7ead187c0a\") " pod="calico-system/whisker-f7c875498-2d6k7"
Apr 13 20:10:20.896043 kubelet[2578]: I0413 20:10:20.896031 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b6e5d9c6-9877-40bb-9431-db7ead187c0a-whisker-backend-key-pair\") pod \"whisker-f7c875498-2d6k7\" (UID: \"b6e5d9c6-9877-40bb-9431-db7ead187c0a\") " pod="calico-system/whisker-f7c875498-2d6k7"
Apr 13 20:10:20.933897 containerd[1472]: time="2026-04-13T20:10:20.933824735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:10:20.934273 containerd[1472]: time="2026-04-13T20:10:20.934247893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:10:20.934498 containerd[1472]: time="2026-04-13T20:10:20.934341842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:10:20.935136 containerd[1472]: time="2026-04-13T20:10:20.934806830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:10:20.967316 containerd[1472]: time="2026-04-13T20:10:20.967245448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-glg4w,Uid:49c0b7cf-67f2-43e0-b1b8-972c29e78e65,Namespace:kube-system,Attempt:1,} returns sandbox id \"f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32\""
Apr 13 20:10:20.971350 kubelet[2578]: E0413 20:10:20.970805 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Apr 13 20:10:20.980047 containerd[1472]: time="2026-04-13T20:10:20.979998325Z" level=info msg="CreateContainer within sandbox \"f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 13 20:10:20.994868 containerd[1472]: time="2026-04-13T20:10:20.994838704Z" level=info msg="CreateContainer within sandbox \"f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5755ebf21a3a84e9c5e72382729b9413cdda9cb320d11459f5ddeabb033c58b5\""
Apr 13 20:10:21.001255 containerd[1472]: time="2026-04-13T20:10:21.000544771Z" level=info msg="StartContainer for \"5755ebf21a3a84e9c5e72382729b9413cdda9cb320d11459f5ddeabb033c58b5\""
Apr 13 20:10:21.028737 systemd-networkd[1382]: cali4c18fb6d362: Link UP
Apr 13 20:10:21.034032 systemd-networkd[1382]: cali4c18fb6d362: Gained carrier
Apr 13 20:10:21.046586 systemd[1]: Started cri-containerd-7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3.scope - libcontainer container 7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3.
Apr 13 20:10:21.065352 systemd[1]: Started cri-containerd-5755ebf21a3a84e9c5e72382729b9413cdda9cb320d11459f5ddeabb033c58b5.scope - libcontainer container 5755ebf21a3a84e9c5e72382729b9413cdda9cb320d11459f5ddeabb033c58b5.
Apr 13 20:10:21.090721 containerd[1472]: 2026-04-13 20:10:20.362 [ERROR][3920] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 13 20:10:21.090721 containerd[1472]: 2026-04-13 20:10:20.458 [INFO][3920] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--191-k8s-goldmane--cccfbd5cf--dvmjq-eth0 goldmane-cccfbd5cf- calico-system 3d4eb2d5-db0e-4d66-8113-637f0e2427c6 935 0 2026-04-13 20:10:07 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-239-193-191 goldmane-cccfbd5cf-dvmjq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4c18fb6d362 [] [] }} ContainerID="5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1" Namespace="calico-system" Pod="goldmane-cccfbd5cf-dvmjq" WorkloadEndpoint="172--239--193--191-k8s-goldmane--cccfbd5cf--dvmjq-"
Apr 13 20:10:21.090721 containerd[1472]: 2026-04-13 20:10:20.458 [INFO][3920] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1" Namespace="calico-system" Pod="goldmane-cccfbd5cf-dvmjq" WorkloadEndpoint="172--239--193--191-k8s-goldmane--cccfbd5cf--dvmjq-eth0"
Apr 13 20:10:21.090721 containerd[1472]: 2026-04-13 20:10:20.826 [INFO][4038] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1" HandleID="k8s-pod-network.5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1" Workload="172--239--193--191-k8s-goldmane--cccfbd5cf--dvmjq-eth0"
Apr 13 20:10:21.090721 containerd[1472]: 2026-04-13 20:10:20.885 [INFO][4038] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1" HandleID="k8s-pod-network.5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1" Workload="172--239--193--191-k8s-goldmane--cccfbd5cf--dvmjq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039cf80), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-193-191", "pod":"goldmane-cccfbd5cf-dvmjq", "timestamp":"2026-04-13 20:10:20.826744293 +0000 UTC"}, Hostname:"172-239-193-191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000474580)}
Apr 13 20:10:21.090721 containerd[1472]: 2026-04-13 20:10:20.885 [INFO][4038] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:10:21.090721 containerd[1472]: 2026-04-13 20:10:20.885 [INFO][4038] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:10:21.090721 containerd[1472]: 2026-04-13 20:10:20.885 [INFO][4038] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-191'
Apr 13 20:10:21.090721 containerd[1472]: 2026-04-13 20:10:20.896 [INFO][4038] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1" host="172-239-193-191"
Apr 13 20:10:21.090721 containerd[1472]: 2026-04-13 20:10:20.922 [INFO][4038] ipam/ipam.go 409: Looking up existing affinities for host host="172-239-193-191"
Apr 13 20:10:21.090721 containerd[1472]: 2026-04-13 20:10:20.938 [INFO][4038] ipam/ipam.go 526: Trying affinity for 192.168.91.128/26 host="172-239-193-191"
Apr 13 20:10:21.090721 containerd[1472]: 2026-04-13 20:10:20.940 [INFO][4038] ipam/ipam.go 160: Attempting to load block cidr=192.168.91.128/26 host="172-239-193-191"
Apr 13 20:10:21.090721 containerd[1472]: 2026-04-13 20:10:20.947 [INFO][4038] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="172-239-193-191"
Apr 13 20:10:21.090721 containerd[1472]: 2026-04-13 20:10:20.948 [INFO][4038] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1" host="172-239-193-191"
Apr 13 20:10:21.090721 containerd[1472]: 2026-04-13 20:10:20.952 [INFO][4038] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1
Apr 13 20:10:21.090721 containerd[1472]: 2026-04-13 20:10:20.958 [INFO][4038] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1" host="172-239-193-191"
Apr 13 20:10:21.090721 containerd[1472]: 2026-04-13 20:10:20.969 [INFO][4038] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.91.131/26] block=192.168.91.128/26 handle="k8s-pod-network.5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1" host="172-239-193-191"
Apr 13 20:10:21.090721 containerd[1472]: 2026-04-13 20:10:20.969 [INFO][4038] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.91.131/26] handle="k8s-pod-network.5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1" host="172-239-193-191"
Apr 13 20:10:21.090721 containerd[1472]: 2026-04-13 20:10:20.969 [INFO][4038] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:10:21.090721 containerd[1472]: 2026-04-13 20:10:20.969 [INFO][4038] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.91.131/26] IPv6=[] ContainerID="5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1" HandleID="k8s-pod-network.5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1" Workload="172--239--193--191-k8s-goldmane--cccfbd5cf--dvmjq-eth0" Apr 13 20:10:21.091507 containerd[1472]: 2026-04-13 20:10:20.985 [INFO][3920] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1" Namespace="calico-system" Pod="goldmane-cccfbd5cf-dvmjq" WorkloadEndpoint="172--239--193--191-k8s-goldmane--cccfbd5cf--dvmjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-goldmane--cccfbd5cf--dvmjq-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"3d4eb2d5-db0e-4d66-8113-637f0e2427c6", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"", Pod:"goldmane-cccfbd5cf-dvmjq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.91.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4c18fb6d362", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:21.091507 containerd[1472]: 2026-04-13 20:10:20.986 [INFO][3920] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.131/32] ContainerID="5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1" Namespace="calico-system" Pod="goldmane-cccfbd5cf-dvmjq" WorkloadEndpoint="172--239--193--191-k8s-goldmane--cccfbd5cf--dvmjq-eth0" Apr 13 20:10:21.091507 containerd[1472]: 2026-04-13 20:10:20.986 [INFO][3920] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c18fb6d362 ContainerID="5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1" Namespace="calico-system" Pod="goldmane-cccfbd5cf-dvmjq" WorkloadEndpoint="172--239--193--191-k8s-goldmane--cccfbd5cf--dvmjq-eth0" Apr 13 20:10:21.091507 containerd[1472]: 2026-04-13 20:10:21.043 [INFO][3920] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1" Namespace="calico-system" Pod="goldmane-cccfbd5cf-dvmjq" WorkloadEndpoint="172--239--193--191-k8s-goldmane--cccfbd5cf--dvmjq-eth0" Apr 13 20:10:21.091507 containerd[1472]: 2026-04-13 20:10:21.044 [INFO][3920] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1" Namespace="calico-system" Pod="goldmane-cccfbd5cf-dvmjq" 
WorkloadEndpoint="172--239--193--191-k8s-goldmane--cccfbd5cf--dvmjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-goldmane--cccfbd5cf--dvmjq-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"3d4eb2d5-db0e-4d66-8113-637f0e2427c6", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1", Pod:"goldmane-cccfbd5cf-dvmjq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.91.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4c18fb6d362", MAC:"16:30:5b:e7:bd:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:21.091507 containerd[1472]: 2026-04-13 20:10:21.085 [INFO][3920] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1" Namespace="calico-system" Pod="goldmane-cccfbd5cf-dvmjq" WorkloadEndpoint="172--239--193--191-k8s-goldmane--cccfbd5cf--dvmjq-eth0" Apr 13 20:10:21.147599 containerd[1472]: time="2026-04-13T20:10:21.147536026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f7c875498-2d6k7,Uid:b6e5d9c6-9877-40bb-9431-db7ead187c0a,Namespace:calico-system,Attempt:0,}" Apr 13 20:10:21.162488 containerd[1472]: time="2026-04-13T20:10:21.161377433Z" level=info msg="StartContainer for \"5755ebf21a3a84e9c5e72382729b9413cdda9cb320d11459f5ddeabb033c58b5\" returns successfully" Apr 13 20:10:21.167499 containerd[1472]: time="2026-04-13T20:10:21.167009641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:21.167499 containerd[1472]: time="2026-04-13T20:10:21.167065111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:21.167499 containerd[1472]: time="2026-04-13T20:10:21.167076481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:21.167499 containerd[1472]: time="2026-04-13T20:10:21.167165251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:21.200020 systemd-networkd[1382]: calie8206b42745: Link UP Apr 13 20:10:21.202683 systemd-networkd[1382]: calie8206b42745: Gained carrier Apr 13 20:10:21.243579 systemd[1]: Started cri-containerd-5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1.scope - libcontainer container 5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1. 
Apr 13 20:10:21.286385 containerd[1472]: 2026-04-13 20:10:20.264 [ERROR][3861] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:10:21.286385 containerd[1472]: 2026-04-13 20:10:20.357 [INFO][3861] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--191-k8s-csi--node--driver--4mzgf-eth0 csi-node-driver- calico-system b083f9b4-7da6-4a64-b37b-aa5d508c2e7f 929 0 2026-04-13 20:10:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-239-193-191 csi-node-driver-4mzgf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie8206b42745 [] [] }} ContainerID="573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530" Namespace="calico-system" Pod="csi-node-driver-4mzgf" WorkloadEndpoint="172--239--193--191-k8s-csi--node--driver--4mzgf-" Apr 13 20:10:21.286385 containerd[1472]: 2026-04-13 20:10:20.357 [INFO][3861] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530" Namespace="calico-system" Pod="csi-node-driver-4mzgf" WorkloadEndpoint="172--239--193--191-k8s-csi--node--driver--4mzgf-eth0" Apr 13 20:10:21.286385 containerd[1472]: 2026-04-13 20:10:20.843 [INFO][4001] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530" HandleID="k8s-pod-network.573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530" Workload="172--239--193--191-k8s-csi--node--driver--4mzgf-eth0" Apr 13 20:10:21.286385 containerd[1472]: 2026-04-13 20:10:20.891 [INFO][4001] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530" HandleID="k8s-pod-network.573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530" Workload="172--239--193--191-k8s-csi--node--driver--4mzgf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006318d0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-193-191", "pod":"csi-node-driver-4mzgf", "timestamp":"2026-04-13 20:10:20.843790304 +0000 UTC"}, Hostname:"172-239-193-191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00022edc0)} Apr 13 20:10:21.286385 containerd[1472]: 2026-04-13 20:10:20.891 [INFO][4001] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:21.286385 containerd[1472]: 2026-04-13 20:10:20.971 [INFO][4001] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:10:21.286385 containerd[1472]: 2026-04-13 20:10:20.972 [INFO][4001] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-191' Apr 13 20:10:21.286385 containerd[1472]: 2026-04-13 20:10:20.999 [INFO][4001] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530" host="172-239-193-191" Apr 13 20:10:21.286385 containerd[1472]: 2026-04-13 20:10:21.035 [INFO][4001] ipam/ipam.go 409: Looking up existing affinities for host host="172-239-193-191" Apr 13 20:10:21.286385 containerd[1472]: 2026-04-13 20:10:21.055 [INFO][4001] ipam/ipam.go 526: Trying affinity for 192.168.91.128/26 host="172-239-193-191" Apr 13 20:10:21.286385 containerd[1472]: 2026-04-13 20:10:21.068 [INFO][4001] ipam/ipam.go 160: Attempting to load block cidr=192.168.91.128/26 host="172-239-193-191" Apr 13 20:10:21.286385 containerd[1472]: 2026-04-13 20:10:21.078 [INFO][4001] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="172-239-193-191" Apr 13 20:10:21.286385 containerd[1472]: 2026-04-13 20:10:21.079 [INFO][4001] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530" host="172-239-193-191" Apr 13 20:10:21.286385 containerd[1472]: 2026-04-13 20:10:21.103 [INFO][4001] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530 Apr 13 20:10:21.286385 containerd[1472]: 2026-04-13 20:10:21.129 [INFO][4001] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530" host="172-239-193-191" Apr 13 20:10:21.286385 containerd[1472]: 2026-04-13 20:10:21.180 [INFO][4001] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.91.132/26] block=192.168.91.128/26 handle="k8s-pod-network.573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530" host="172-239-193-191" Apr 13 20:10:21.286385 containerd[1472]: 2026-04-13 20:10:21.180 [INFO][4001] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.91.132/26] handle="k8s-pod-network.573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530" host="172-239-193-191" Apr 13 20:10:21.286385 containerd[1472]: 2026-04-13 20:10:21.182 [INFO][4001] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
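All of these assignments draw from the single block 192.168.91.128/26, which spans .128 through .191 (64 addresses); the log shows the node handing out .131, .132, and so on sequentially. A quick check of that CIDR arithmetic:

```go
// Verify the /26 block size and that the claimed pod IPs fall inside it.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.91.128/26")
	fmt.Println("addresses in block:", 1<<(32-block.Bits())) // 64
	for _, s := range []string{"192.168.91.131", "192.168.91.132", "192.168.91.136"} {
		fmt.Println(s, "in block:", block.Contains(netip.MustParseAddr(s)))
	}
}
```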
Apr 13 20:10:21.286385 containerd[1472]: 2026-04-13 20:10:21.182 [INFO][4001] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.91.132/26] IPv6=[] ContainerID="573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530" HandleID="k8s-pod-network.573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530" Workload="172--239--193--191-k8s-csi--node--driver--4mzgf-eth0" Apr 13 20:10:21.287261 containerd[1472]: 2026-04-13 20:10:21.192 [INFO][3861] cni-plugin/k8s.go 418: Populated endpoint ContainerID="573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530" Namespace="calico-system" Pod="csi-node-driver-4mzgf" WorkloadEndpoint="172--239--193--191-k8s-csi--node--driver--4mzgf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-csi--node--driver--4mzgf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b083f9b4-7da6-4a64-b37b-aa5d508c2e7f", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"", Pod:"csi-node-driver-4mzgf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie8206b42745", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:21.287261 containerd[1472]: 2026-04-13 20:10:21.193 [INFO][3861] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.132/32] ContainerID="573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530" Namespace="calico-system" Pod="csi-node-driver-4mzgf" WorkloadEndpoint="172--239--193--191-k8s-csi--node--driver--4mzgf-eth0" Apr 13 20:10:21.287261 containerd[1472]: 2026-04-13 20:10:21.193 [INFO][3861] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8206b42745 ContainerID="573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530" Namespace="calico-system" Pod="csi-node-driver-4mzgf" WorkloadEndpoint="172--239--193--191-k8s-csi--node--driver--4mzgf-eth0" Apr 13 20:10:21.287261 containerd[1472]: 2026-04-13 20:10:21.204 [INFO][3861] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530" Namespace="calico-system" Pod="csi-node-driver-4mzgf" WorkloadEndpoint="172--239--193--191-k8s-csi--node--driver--4mzgf-eth0" Apr 13 20:10:21.287261 containerd[1472]: 2026-04-13 20:10:21.223 [INFO][3861] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530" 
Namespace="calico-system" Pod="csi-node-driver-4mzgf" WorkloadEndpoint="172--239--193--191-k8s-csi--node--driver--4mzgf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-csi--node--driver--4mzgf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b083f9b4-7da6-4a64-b37b-aa5d508c2e7f", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530", Pod:"csi-node-driver-4mzgf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie8206b42745", MAC:"42:af:0a:9c:5e:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:21.287261 containerd[1472]: 2026-04-13 20:10:21.276 [INFO][3861] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530" Namespace="calico-system" Pod="csi-node-driver-4mzgf" WorkloadEndpoint="172--239--193--191-k8s-csi--node--driver--4mzgf-eth0" Apr 13 20:10:21.347657 containerd[1472]: time="2026-04-13T20:10:21.345244086Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:21.347657 containerd[1472]: time="2026-04-13T20:10:21.345323156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:21.347657 containerd[1472]: time="2026-04-13T20:10:21.345338386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:21.347657 containerd[1472]: time="2026-04-13T20:10:21.345449705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:21.381250 systemd[1]: Started cri-containerd-573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530.scope - libcontainer container 573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530. 
Apr 13 20:10:21.391508 systemd-networkd[1382]: calib47cfe966c0: Link UP Apr 13 20:10:21.393757 systemd-networkd[1382]: calib47cfe966c0: Gained carrier Apr 13 20:10:21.453721 containerd[1472]: 2026-04-13 20:10:20.336 [ERROR][3885] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:10:21.453721 containerd[1472]: 2026-04-13 20:10:20.387 [INFO][3885] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--191-k8s-calico--kube--controllers--7c7c48779c--gk7jr-eth0 calico-kube-controllers-7c7c48779c- calico-system ab7d1268-0475-4d90-b5c2-1c8713e6aafb 936 0 2026-04-13 20:10:08 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c7c48779c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-239-193-191 calico-kube-controllers-7c7c48779c-gk7jr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib47cfe966c0 [] [] }} ContainerID="f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562" Namespace="calico-system" Pod="calico-kube-controllers-7c7c48779c-gk7jr" WorkloadEndpoint="172--239--193--191-k8s-calico--kube--controllers--7c7c48779c--gk7jr-" Apr 13 20:10:21.453721 containerd[1472]: 2026-04-13 20:10:20.387 [INFO][3885] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562" Namespace="calico-system" Pod="calico-kube-controllers-7c7c48779c-gk7jr" WorkloadEndpoint="172--239--193--191-k8s-calico--kube--controllers--7c7c48779c--gk7jr-eth0" Apr 13 20:10:21.453721 containerd[1472]: 2026-04-13 20:10:20.850 [INFO][4008] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562" HandleID="k8s-pod-network.f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562" Workload="172--239--193--191-k8s-calico--kube--controllers--7c7c48779c--gk7jr-eth0" Apr 13 20:10:21.453721 containerd[1472]: 2026-04-13 20:10:20.891 [INFO][4008] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562" HandleID="k8s-pod-network.f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562" Workload="172--239--193--191-k8s-calico--kube--controllers--7c7c48779c--gk7jr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000430440), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-193-191", "pod":"calico-kube-controllers-7c7c48779c-gk7jr", "timestamp":"2026-04-13 20:10:20.850404976 +0000 UTC"}, Hostname:"172-239-193-191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001629a0)} Apr 13 20:10:21.453721 containerd[1472]: 2026-04-13 20:10:20.892 [INFO][4008] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:21.453721 containerd[1472]: 2026-04-13 20:10:21.180 [INFO][4008] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:10:21.453721 containerd[1472]: 2026-04-13 20:10:21.180 [INFO][4008] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-191' Apr 13 20:10:21.453721 containerd[1472]: 2026-04-13 20:10:21.197 [INFO][4008] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562" host="172-239-193-191" Apr 13 20:10:21.453721 containerd[1472]: 2026-04-13 20:10:21.214 [INFO][4008] ipam/ipam.go 409: Looking up existing affinities for host host="172-239-193-191" Apr 13 20:10:21.453721 containerd[1472]: 2026-04-13 20:10:21.237 [INFO][4008] ipam/ipam.go 526: Trying affinity for 192.168.91.128/26 host="172-239-193-191" Apr 13 20:10:21.453721 containerd[1472]: 2026-04-13 20:10:21.245 [INFO][4008] ipam/ipam.go 160: Attempting to load block cidr=192.168.91.128/26 host="172-239-193-191" Apr 13 20:10:21.453721 containerd[1472]: 2026-04-13 20:10:21.277 [INFO][4008] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="172-239-193-191" Apr 13 20:10:21.453721 containerd[1472]: 2026-04-13 20:10:21.278 [INFO][4008] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562" host="172-239-193-191" Apr 13 20:10:21.453721 containerd[1472]: 2026-04-13 20:10:21.292 [INFO][4008] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562 Apr 13 20:10:21.453721 containerd[1472]: 2026-04-13 20:10:21.334 [INFO][4008] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562" host="172-239-193-191" Apr 13 20:10:21.453721 containerd[1472]: 2026-04-13 20:10:21.363 [INFO][4008] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.91.133/26] block=192.168.91.128/26 handle="k8s-pod-network.f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562" host="172-239-193-191" Apr 13 20:10:21.453721 containerd[1472]: 2026-04-13 20:10:21.363 [INFO][4008] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.91.133/26] handle="k8s-pod-network.f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562" host="172-239-193-191" Apr 13 20:10:21.453721 containerd[1472]: 2026-04-13 20:10:21.363 [INFO][4008] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
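The large endpoint=&v3.WorkloadEndpoint{...} dumps are full API objects. A trimmed-down view keeping only the spec fields these entries actually exercise; the real projectcalico.org/v3 type carries many more fields, so this is a reading aid, not the library definition.

```go
// Reduced view of the WorkloadEndpoint spec fields visible in the log.
package main

import "fmt"

type WorkloadEndpointSpec struct {
	Orchestrator  string   // "k8s"
	Node          string   // "172-239-193-191"
	ContainerID   string   // empty until the sandbox is wired up
	Pod           string
	Endpoint      string   // "eth0"
	IPNetworks    []string // per-pod /32, e.g. "192.168.91.133/32"
	Profiles      []string // namespace + service-account profiles
	InterfaceName string   // host-side veth, e.g. "calib47cfe966c0"
	MAC           string   // filled in by the second datastore write
}

func main() {
	ep := WorkloadEndpointSpec{
		Orchestrator:  "k8s",
		Node:          "172-239-193-191",
		Pod:           "calico-kube-controllers-7c7c48779c-gk7jr",
		Endpoint:      "eth0",
		IPNetworks:    []string{"192.168.91.133/32"},
		Profiles:      []string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"},
		InterfaceName: "calib47cfe966c0",
	}
	fmt.Printf("%+v\n", ep)
}
```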
Apr 13 20:10:21.453721 containerd[1472]: 2026-04-13 20:10:21.363 [INFO][4008] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.91.133/26] IPv6=[] ContainerID="f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562" HandleID="k8s-pod-network.f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562" Workload="172--239--193--191-k8s-calico--kube--controllers--7c7c48779c--gk7jr-eth0" Apr 13 20:10:21.454287 containerd[1472]: 2026-04-13 20:10:21.383 [INFO][3885] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562" Namespace="calico-system" Pod="calico-kube-controllers-7c7c48779c-gk7jr" WorkloadEndpoint="172--239--193--191-k8s-calico--kube--controllers--7c7c48779c--gk7jr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-calico--kube--controllers--7c7c48779c--gk7jr-eth0", GenerateName:"calico-kube-controllers-7c7c48779c-", Namespace:"calico-system", SelfLink:"", UID:"ab7d1268-0475-4d90-b5c2-1c8713e6aafb", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c7c48779c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"", Pod:"calico-kube-controllers-7c7c48779c-gk7jr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib47cfe966c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:21.454287 containerd[1472]: 2026-04-13 20:10:21.384 [INFO][3885] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.133/32] ContainerID="f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562" Namespace="calico-system" Pod="calico-kube-controllers-7c7c48779c-gk7jr" WorkloadEndpoint="172--239--193--191-k8s-calico--kube--controllers--7c7c48779c--gk7jr-eth0" Apr 13 20:10:21.454287 containerd[1472]: 2026-04-13 20:10:21.384 [INFO][3885] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib47cfe966c0 ContainerID="f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562" Namespace="calico-system" Pod="calico-kube-controllers-7c7c48779c-gk7jr" WorkloadEndpoint="172--239--193--191-k8s-calico--kube--controllers--7c7c48779c--gk7jr-eth0" Apr 13 20:10:21.454287 containerd[1472]: 2026-04-13 20:10:21.396 [INFO][3885] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562" Namespace="calico-system" Pod="calico-kube-controllers-7c7c48779c-gk7jr" WorkloadEndpoint="172--239--193--191-k8s-calico--kube--controllers--7c7c48779c--gk7jr-eth0" Apr 13 20:10:21.454287 containerd[1472]: 2026-04-13 
20:10:21.406 [INFO][3885] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562" Namespace="calico-system" Pod="calico-kube-controllers-7c7c48779c-gk7jr" WorkloadEndpoint="172--239--193--191-k8s-calico--kube--controllers--7c7c48779c--gk7jr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-calico--kube--controllers--7c7c48779c--gk7jr-eth0", GenerateName:"calico-kube-controllers-7c7c48779c-", Namespace:"calico-system", SelfLink:"", UID:"ab7d1268-0475-4d90-b5c2-1c8713e6aafb", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c7c48779c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562", Pod:"calico-kube-controllers-7c7c48779c-gk7jr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib47cfe966c0", MAC:"32:b8:a2:73:56:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:21.454287 containerd[1472]: 2026-04-13 20:10:21.437 [INFO][3885] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562" Namespace="calico-system" Pod="calico-kube-controllers-7c7c48779c-gk7jr" WorkloadEndpoint="172--239--193--191-k8s-calico--kube--controllers--7c7c48779c--gk7jr-eth0" Apr 13 20:10:21.491789 systemd-networkd[1382]: cali574c7d32be1: Link UP Apr 13 20:10:21.495818 containerd[1472]: time="2026-04-13T20:10:21.493735945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:21.495818 containerd[1472]: time="2026-04-13T20:10:21.493786635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:21.495818 containerd[1472]: time="2026-04-13T20:10:21.493801765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:21.495818 containerd[1472]: time="2026-04-13T20:10:21.493876895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:21.492278 systemd-networkd[1382]: cali574c7d32be1: Gained carrier Apr 13 20:10:21.549615 systemd[1]: Started cri-containerd-f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562.scope - libcontainer container f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562. 
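Each pod goes through the same two-phase endpoint write visible above: "Populated endpoint" first prints the object with empty ContainerID and MAC, then after the veth pair exists, "Added Mac, interface name, and active container ID" fills those in before "Wrote updated endpoint to datastore". A hypothetical sketch of that sequence; the functions are stand-ins, not Calico's API.

```go
// Two-phase endpoint construction mirrored from the k8s.go 418/446/532 entries.
package main

import "fmt"

type endpoint struct {
	ContainerID, InterfaceName, MAC string
	IPs                             []string
}

// populate: IPs and the interface name are known, but ContainerID and MAC
// stay empty until the dataplane exists ("Populated endpoint").
func populate(ifname string, ips []string) *endpoint {
	return &endpoint{InterfaceName: ifname, IPs: ips}
}

// complete: after the veth pair is created we learn the MAC and bind the
// sandbox ("Added Mac, interface name, and active container ID").
func complete(ep *endpoint, containerID, mac string) {
	ep.ContainerID, ep.MAC = containerID, mac
}

func main() {
	ep := populate("calib47cfe966c0", []string{"192.168.91.133/32"})
	complete(ep, "f91775743322b034...", "32:b8:a2:73:56:23")
	fmt.Printf("%+v\n", ep) // then "Wrote updated endpoint to datastore"
}
```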
Apr 13 20:10:21.552537 containerd[1472]: 2026-04-13 20:10:21.242 [ERROR][4247] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:10:21.552537 containerd[1472]: 2026-04-13 20:10:21.281 [INFO][4247] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--191-k8s-whisker--f7c875498--2d6k7-eth0 whisker-f7c875498- calico-system b6e5d9c6-9877-40bb-9431-db7ead187c0a 961 0 2026-04-13 20:10:20 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:f7c875498 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-239-193-191 whisker-f7c875498-2d6k7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali574c7d32be1 [] [] }} ContainerID="525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0" Namespace="calico-system" Pod="whisker-f7c875498-2d6k7" WorkloadEndpoint="172--239--193--191-k8s-whisker--f7c875498--2d6k7-" Apr 13 20:10:21.552537 containerd[1472]: 2026-04-13 20:10:21.281 [INFO][4247] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0" Namespace="calico-system" Pod="whisker-f7c875498-2d6k7" WorkloadEndpoint="172--239--193--191-k8s-whisker--f7c875498--2d6k7-eth0" Apr 13 20:10:21.552537 containerd[1472]: 2026-04-13 20:10:21.339 [INFO][4292] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0" HandleID="k8s-pod-network.525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0" Workload="172--239--193--191-k8s-whisker--f7c875498--2d6k7-eth0" Apr 13 20:10:21.552537 containerd[1472]: 2026-04-13 20:10:21.364 [INFO][4292] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0" HandleID="k8s-pod-network.525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0" Workload="172--239--193--191-k8s-whisker--f7c875498--2d6k7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd4c0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-193-191", "pod":"whisker-f7c875498-2d6k7", "timestamp":"2026-04-13 20:10:21.339359209 +0000 UTC"}, Hostname:"172-239-193-191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000281600)} Apr 13 20:10:21.552537 containerd[1472]: 2026-04-13 20:10:21.364 [INFO][4292] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:21.552537 containerd[1472]: 2026-04-13 20:10:21.364 [INFO][4292] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:10:21.552537 containerd[1472]: 2026-04-13 20:10:21.364 [INFO][4292] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-191' Apr 13 20:10:21.552537 containerd[1472]: 2026-04-13 20:10:21.370 [INFO][4292] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0" host="172-239-193-191" Apr 13 20:10:21.552537 containerd[1472]: 2026-04-13 20:10:21.381 [INFO][4292] ipam/ipam.go 409: Looking up existing affinities for host host="172-239-193-191" Apr 13 20:10:21.552537 containerd[1472]: 2026-04-13 20:10:21.422 [INFO][4292] ipam/ipam.go 526: Trying affinity for 192.168.91.128/26 host="172-239-193-191" Apr 13 20:10:21.552537 containerd[1472]: 2026-04-13 20:10:21.427 [INFO][4292] ipam/ipam.go 160: Attempting to load block cidr=192.168.91.128/26 host="172-239-193-191" Apr 13 20:10:21.552537 containerd[1472]: 2026-04-13 20:10:21.438 [INFO][4292] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="172-239-193-191" Apr 13 20:10:21.552537 containerd[1472]: 2026-04-13 20:10:21.438 [INFO][4292] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0" host="172-239-193-191" Apr 13 20:10:21.552537 containerd[1472]: 2026-04-13 20:10:21.442 [INFO][4292] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0 Apr 13 20:10:21.552537 containerd[1472]: 2026-04-13 20:10:21.451 [INFO][4292] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0" host="172-239-193-191" Apr 13 20:10:21.552537 containerd[1472]: 2026-04-13 20:10:21.477 [INFO][4292] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.91.134/26] block=192.168.91.128/26 handle="k8s-pod-network.525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0" host="172-239-193-191" Apr 13 20:10:21.552537 containerd[1472]: 2026-04-13 20:10:21.477 [INFO][4292] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.91.134/26] handle="k8s-pod-network.525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0" host="172-239-193-191" Apr 13 20:10:21.552537 containerd[1472]: 2026-04-13 20:10:21.478 [INFO][4292] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
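Notice how the concurrent CNI invocations serialize on the host-wide IPAM lock: [4008] logged "About to acquire" at 20:10:20.892 but "Acquired" only at 20:10:21.180, queueing behind the other allocators. A minimal sketch of that serialization with a plain mutex; process IDs are taken from the log, the timing is illustrative.

```go
// Host-wide IPAM lock serialization, as the "About to acquire" /
// "Acquired" / "Released" entries trace it.
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var ipamLock sync.Mutex
	var wg sync.WaitGroup
	for _, id := range []string{"4038", "4001", "4008", "4292"} {
		wg.Add(1)
		go func(id string) {
			defer wg.Done()
			ipamLock.Lock()         // "About to acquire host-wide IPAM lock."
			defer ipamLock.Unlock() // "Released host-wide IPAM lock."
			fmt.Println(id, "acquired") // "Acquired host-wide IPAM lock."
			time.Sleep(10 * time.Millisecond) // stand-in for assignment work
		}(id)
	}
	wg.Wait() // acquisition order is nondeterministic, as in the log
}
```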
Apr 13 20:10:21.552537 containerd[1472]: 2026-04-13 20:10:21.478 [INFO][4292] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.91.134/26] IPv6=[] ContainerID="525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0" HandleID="k8s-pod-network.525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0" Workload="172--239--193--191-k8s-whisker--f7c875498--2d6k7-eth0" Apr 13 20:10:21.553179 containerd[1472]: 2026-04-13 20:10:21.483 [INFO][4247] cni-plugin/k8s.go 418: Populated endpoint ContainerID="525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0" Namespace="calico-system" Pod="whisker-f7c875498-2d6k7" WorkloadEndpoint="172--239--193--191-k8s-whisker--f7c875498--2d6k7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-whisker--f7c875498--2d6k7-eth0", GenerateName:"whisker-f7c875498-", Namespace:"calico-system", SelfLink:"", UID:"b6e5d9c6-9877-40bb-9431-db7ead187c0a", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f7c875498", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"", Pod:"whisker-f7c875498-2d6k7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.91.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali574c7d32be1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:21.553179 containerd[1472]: 2026-04-13 20:10:21.483 [INFO][4247] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.134/32] ContainerID="525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0" Namespace="calico-system" Pod="whisker-f7c875498-2d6k7" WorkloadEndpoint="172--239--193--191-k8s-whisker--f7c875498--2d6k7-eth0" Apr 13 20:10:21.553179 containerd[1472]: 2026-04-13 20:10:21.483 [INFO][4247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali574c7d32be1 ContainerID="525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0" Namespace="calico-system" Pod="whisker-f7c875498-2d6k7" WorkloadEndpoint="172--239--193--191-k8s-whisker--f7c875498--2d6k7-eth0" Apr 13 20:10:21.553179 containerd[1472]: 2026-04-13 20:10:21.492 [INFO][4247] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0" Namespace="calico-system" Pod="whisker-f7c875498-2d6k7" WorkloadEndpoint="172--239--193--191-k8s-whisker--f7c875498--2d6k7-eth0" Apr 13 20:10:21.553179 containerd[1472]: 2026-04-13 20:10:21.494 [INFO][4247] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0" Namespace="calico-system" Pod="whisker-f7c875498-2d6k7" 
WorkloadEndpoint="172--239--193--191-k8s-whisker--f7c875498--2d6k7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-whisker--f7c875498--2d6k7-eth0", GenerateName:"whisker-f7c875498-", Namespace:"calico-system", SelfLink:"", UID:"b6e5d9c6-9877-40bb-9431-db7ead187c0a", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f7c875498", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0", Pod:"whisker-f7c875498-2d6k7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.91.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali574c7d32be1", MAC:"a2:a7:21:0b:3b:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:21.553179 containerd[1472]: 2026-04-13 20:10:21.550 [INFO][4247] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0" Namespace="calico-system" Pod="whisker-f7c875498-2d6k7" WorkloadEndpoint="172--239--193--191-k8s-whisker--f7c875498--2d6k7-eth0" Apr 13 20:10:21.580049 containerd[1472]: time="2026-04-13T20:10:21.579041648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:21.580049 containerd[1472]: time="2026-04-13T20:10:21.579103577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:21.580049 containerd[1472]: time="2026-04-13T20:10:21.579128587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:21.580049 containerd[1472]: time="2026-04-13T20:10:21.579218407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:21.654721 systemd-networkd[1382]: cali5ccc0a88cb0: Link UP Apr 13 20:10:21.662158 systemd-networkd[1382]: cali5ccc0a88cb0: Gained carrier Apr 13 20:10:21.673145 systemd[1]: Started cri-containerd-525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0.scope - libcontainer container 525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0. 
Apr 13 20:10:21.691972 containerd[1472]: time="2026-04-13T20:10:21.691934894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-dvmjq,Uid:3d4eb2d5-db0e-4d66-8113-637f0e2427c6,Namespace:calico-system,Attempt:1,} returns sandbox id \"5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1\"" Apr 13 20:10:21.704161 containerd[1472]: time="2026-04-13T20:10:21.702652042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 13 20:10:21.706712 kubelet[2578]: E0413 20:10:21.706686 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:10:21.713068 containerd[1472]: 2026-04-13 20:10:20.572 [ERROR][3955] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:10:21.713068 containerd[1472]: 2026-04-13 20:10:20.699 [INFO][3955] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--191-k8s-calico--apiserver--6df68c9d4f--lx9f8-eth0 calico-apiserver-6df68c9d4f- calico-system 46975966-bb29-4145-9c1a-fe60aed66e16 934 0 2026-04-13 20:10:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6df68c9d4f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-193-191 calico-apiserver-6df68c9d4f-lx9f8 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali5ccc0a88cb0 [] [] }} ContainerID="661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651" Namespace="calico-system" Pod="calico-apiserver-6df68c9d4f-lx9f8" WorkloadEndpoint="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--lx9f8-" Apr 13 20:10:21.713068 containerd[1472]: 2026-04-13 20:10:20.704 [INFO][3955] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651" Namespace="calico-system" Pod="calico-apiserver-6df68c9d4f-lx9f8" WorkloadEndpoint="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--lx9f8-eth0" Apr 13 20:10:21.713068 containerd[1472]: 2026-04-13 20:10:20.887 [INFO][4123] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651" HandleID="k8s-pod-network.661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651" Workload="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--lx9f8-eth0" Apr 13 20:10:21.713068 containerd[1472]: 2026-04-13 20:10:20.899 [INFO][4123] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651" HandleID="k8s-pod-network.661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651" Workload="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--lx9f8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00043ba70), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-193-191", "pod":"calico-apiserver-6df68c9d4f-lx9f8", "timestamp":"2026-04-13 20:10:20.887570174 +0000 UTC"}, Hostname:"172-239-193-191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000434420)} Apr 13 20:10:21.713068 containerd[1472]: 2026-04-13 20:10:20.899 [INFO][4123] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:21.713068 containerd[1472]: 2026-04-13 20:10:21.477 [INFO][4123] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:21.713068 containerd[1472]: 2026-04-13 20:10:21.477 [INFO][4123] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-191' Apr 13 20:10:21.713068 containerd[1472]: 2026-04-13 20:10:21.497 [INFO][4123] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651" host="172-239-193-191" Apr 13 20:10:21.713068 containerd[1472]: 2026-04-13 20:10:21.545 [INFO][4123] ipam/ipam.go 409: Looking up existing affinities for host host="172-239-193-191" Apr 13 20:10:21.713068 containerd[1472]: 2026-04-13 20:10:21.566 [INFO][4123] ipam/ipam.go 526: Trying affinity for 192.168.91.128/26 host="172-239-193-191" Apr 13 20:10:21.713068 containerd[1472]: 2026-04-13 20:10:21.568 [INFO][4123] ipam/ipam.go 160: Attempting to load block cidr=192.168.91.128/26 host="172-239-193-191" Apr 13 20:10:21.713068 containerd[1472]: 2026-04-13 20:10:21.574 [INFO][4123] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="172-239-193-191" Apr 13 20:10:21.713068 containerd[1472]: 2026-04-13 20:10:21.575 [INFO][4123] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651" host="172-239-193-191" Apr 13 20:10:21.713068 containerd[1472]: 2026-04-13 20:10:21.577 [INFO][4123] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651 Apr 13 20:10:21.713068 containerd[1472]: 2026-04-13 20:10:21.584 [INFO][4123] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651" host="172-239-193-191" Apr 13 20:10:21.713068 containerd[1472]: 2026-04-13 20:10:21.600 [INFO][4123] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.91.135/26] block=192.168.91.128/26 handle="k8s-pod-network.661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651" host="172-239-193-191" Apr 13 20:10:21.713068 containerd[1472]: 2026-04-13 20:10:21.600 [INFO][4123] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.91.135/26] handle="k8s-pod-network.661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651" host="172-239-193-191" Apr 13 20:10:21.713068 containerd[1472]: 2026-04-13 20:10:21.601 [INFO][4123] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
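The kubelet "Nameserver limits exceeded" warning above fires because the resolver honors at most three nameservers, so kubelet trims the list and reports exactly three applied servers. A sketch of that trimming; the limit of three matches the log, while the helper and the fourth (omitted) server address are hypothetical.

```go
// Nameserver-cap behaviour behind kubelet's dns.go warning.
package main

import "fmt"

const maxNameservers = 3 // the resolver's effective cap

func trimNameservers(ns []string) (applied []string, omitted bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	applied, omitted := trimNameservers([]string{
		"172.232.0.16", "172.232.0.21", "172.232.0.13",
		"172.232.0.9", // hypothetical extra entry that triggers the warning
	})
	fmt.Println(applied, "omitted some:", omitted)
}
```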
Apr 13 20:10:21.713068 containerd[1472]: 2026-04-13 20:10:21.601 [INFO][4123] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.91.135/26] IPv6=[] ContainerID="661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651" HandleID="k8s-pod-network.661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651" Workload="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--lx9f8-eth0" Apr 13 20:10:21.713584 containerd[1472]: 2026-04-13 20:10:21.623 [INFO][3955] cni-plugin/k8s.go 418: Populated endpoint ContainerID="661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651" Namespace="calico-system" Pod="calico-apiserver-6df68c9d4f-lx9f8" WorkloadEndpoint="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--lx9f8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-calico--apiserver--6df68c9d4f--lx9f8-eth0", GenerateName:"calico-apiserver-6df68c9d4f-", Namespace:"calico-system", SelfLink:"", UID:"46975966-bb29-4145-9c1a-fe60aed66e16", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df68c9d4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"", Pod:"calico-apiserver-6df68c9d4f-lx9f8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5ccc0a88cb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:21.713584 containerd[1472]: 2026-04-13 20:10:21.624 [INFO][3955] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.135/32] ContainerID="661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651" Namespace="calico-system" Pod="calico-apiserver-6df68c9d4f-lx9f8" WorkloadEndpoint="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--lx9f8-eth0" Apr 13 20:10:21.713584 containerd[1472]: 2026-04-13 20:10:21.624 [INFO][3955] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ccc0a88cb0 ContainerID="661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651" Namespace="calico-system" Pod="calico-apiserver-6df68c9d4f-lx9f8" WorkloadEndpoint="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--lx9f8-eth0" Apr 13 20:10:21.713584 containerd[1472]: 2026-04-13 20:10:21.670 [INFO][3955] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651" Namespace="calico-system" Pod="calico-apiserver-6df68c9d4f-lx9f8" WorkloadEndpoint="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--lx9f8-eth0" Apr 13 20:10:21.713584 containerd[1472]: 2026-04-13 20:10:21.674 [INFO][3955] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651" Namespace="calico-system" Pod="calico-apiserver-6df68c9d4f-lx9f8" WorkloadEndpoint="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--lx9f8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-calico--apiserver--6df68c9d4f--lx9f8-eth0", GenerateName:"calico-apiserver-6df68c9d4f-", Namespace:"calico-system", SelfLink:"", UID:"46975966-bb29-4145-9c1a-fe60aed66e16", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df68c9d4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651", Pod:"calico-apiserver-6df68c9d4f-lx9f8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5ccc0a88cb0", MAC:"6a:52:64:54:58:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:21.713584 containerd[1472]: 2026-04-13 20:10:21.694 [INFO][3955] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651" Namespace="calico-system" Pod="calico-apiserver-6df68c9d4f-lx9f8" WorkloadEndpoint="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--lx9f8-eth0" Apr 13 20:10:21.721696 kubelet[2578]: I0413 20:10:21.720538 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-glg4w" podStartSLOduration=24.720524234 podStartE2EDuration="24.720524234s" podCreationTimestamp="2026-04-13 20:09:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:10:21.719887236 +0000 UTC m=+31.344422028" watchObservedRunningTime="2026-04-13 20:10:21.720524234 +0000 UTC m=+31.345059036" Apr 13 20:10:21.745156 containerd[1472]: time="2026-04-13T20:10:21.744466502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df68c9d4f-jpz9w,Uid:41a53f62-b7f4-40f3-882b-8cc9702c76d5,Namespace:calico-system,Attempt:1,} returns sandbox id \"7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3\"" Apr 13 20:10:21.749629 systemd-networkd[1382]: cali68fedc07b6e: Link UP Apr 13 20:10:21.752020 systemd-networkd[1382]: cali68fedc07b6e: Gained carrier Apr 13 20:10:21.775583 containerd[1472]: time="2026-04-13T20:10:21.774848595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:21.775583 containerd[1472]: time="2026-04-13T20:10:21.774897935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:21.775583 containerd[1472]: time="2026-04-13T20:10:21.774911275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:21.775583 containerd[1472]: time="2026-04-13T20:10:21.774981414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:21.803454 containerd[1472]: 2026-04-13 20:10:20.423 [ERROR][3907] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:10:21.803454 containerd[1472]: 2026-04-13 20:10:20.536 [INFO][3907] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--191-k8s-coredns--66bc5c9577--vdbdt-eth0 coredns-66bc5c9577- kube-system 5778e305-4fb3-40cf-9eb5-2894d58c2771 933 0 2026-04-13 20:09:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-193-191 coredns-66bc5c9577-vdbdt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali68fedc07b6e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb" Namespace="kube-system" Pod="coredns-66bc5c9577-vdbdt" WorkloadEndpoint="172--239--193--191-k8s-coredns--66bc5c9577--vdbdt-" Apr 13 20:10:21.803454 containerd[1472]: 2026-04-13 20:10:20.536 [INFO][3907] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb" Namespace="kube-system" Pod="coredns-66bc5c9577-vdbdt" WorkloadEndpoint="172--239--193--191-k8s-coredns--66bc5c9577--vdbdt-eth0" Apr 13 20:10:21.803454 containerd[1472]: 2026-04-13 20:10:20.919 [INFO][4074] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb" HandleID="k8s-pod-network.e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb" Workload="172--239--193--191-k8s-coredns--66bc5c9577--vdbdt-eth0" Apr 13 20:10:21.803454 containerd[1472]: 2026-04-13 20:10:20.937 [INFO][4074] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb" HandleID="k8s-pod-network.e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb" Workload="172--239--193--191-k8s-coredns--66bc5c9577--vdbdt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e110), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-193-191", "pod":"coredns-66bc5c9577-vdbdt", "timestamp":"2026-04-13 20:10:20.919744442 +0000 UTC"}, Hostname:"172-239-193-191", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001882c0)} Apr 13 20:10:21.803454 
containerd[1472]: 2026-04-13 20:10:20.938 [INFO][4074] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:21.803454 containerd[1472]: 2026-04-13 20:10:21.600 [INFO][4074] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:21.803454 containerd[1472]: 2026-04-13 20:10:21.601 [INFO][4074] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-191' Apr 13 20:10:21.803454 containerd[1472]: 2026-04-13 20:10:21.610 [INFO][4074] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb" host="172-239-193-191" Apr 13 20:10:21.803454 containerd[1472]: 2026-04-13 20:10:21.641 [INFO][4074] ipam/ipam.go 409: Looking up existing affinities for host host="172-239-193-191" Apr 13 20:10:21.803454 containerd[1472]: 2026-04-13 20:10:21.674 [INFO][4074] ipam/ipam.go 526: Trying affinity for 192.168.91.128/26 host="172-239-193-191" Apr 13 20:10:21.803454 containerd[1472]: 2026-04-13 20:10:21.683 [INFO][4074] ipam/ipam.go 160: Attempting to load block cidr=192.168.91.128/26 host="172-239-193-191" Apr 13 20:10:21.803454 containerd[1472]: 2026-04-13 20:10:21.692 [INFO][4074] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.91.128/26 host="172-239-193-191" Apr 13 20:10:21.803454 containerd[1472]: 2026-04-13 20:10:21.692 [INFO][4074] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.91.128/26 handle="k8s-pod-network.e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb" host="172-239-193-191" Apr 13 20:10:21.803454 containerd[1472]: 2026-04-13 20:10:21.695 [INFO][4074] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb Apr 13 20:10:21.803454 containerd[1472]: 2026-04-13 20:10:21.707 [INFO][4074] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.91.128/26 handle="k8s-pod-network.e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb" host="172-239-193-191" Apr 13 20:10:21.803454 containerd[1472]: 2026-04-13 20:10:21.730 [INFO][4074] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.91.136/26] block=192.168.91.128/26 handle="k8s-pod-network.e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb" host="172-239-193-191" Apr 13 20:10:21.803454 containerd[1472]: 2026-04-13 20:10:21.730 [INFO][4074] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.91.136/26] handle="k8s-pod-network.e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb" host="172-239-193-191" Apr 13 20:10:21.803454 containerd[1472]: 2026-04-13 20:10:21.730 [INFO][4074] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 20:10:21.803454 containerd[1472]: 2026-04-13 20:10:21.730 [INFO][4074] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.91.136/26] IPv6=[] ContainerID="e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb" HandleID="k8s-pod-network.e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb" Workload="172--239--193--191-k8s-coredns--66bc5c9577--vdbdt-eth0" Apr 13 20:10:21.805760 containerd[1472]: 2026-04-13 20:10:21.737 [INFO][3907] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb" Namespace="kube-system" Pod="coredns-66bc5c9577-vdbdt" WorkloadEndpoint="172--239--193--191-k8s-coredns--66bc5c9577--vdbdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-coredns--66bc5c9577--vdbdt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5778e305-4fb3-40cf-9eb5-2894d58c2771", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"", Pod:"coredns-66bc5c9577-vdbdt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68fedc07b6e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:21.805760 containerd[1472]: 2026-04-13 20:10:21.737 [INFO][3907] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.136/32] ContainerID="e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb" Namespace="kube-system" Pod="coredns-66bc5c9577-vdbdt" WorkloadEndpoint="172--239--193--191-k8s-coredns--66bc5c9577--vdbdt-eth0" Apr 13 20:10:21.805760 containerd[1472]: 2026-04-13 20:10:21.737 [INFO][3907] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68fedc07b6e ContainerID="e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb" Namespace="kube-system" Pod="coredns-66bc5c9577-vdbdt" WorkloadEndpoint="172--239--193--191-k8s-coredns--66bc5c9577--vdbdt-eth0" Apr 13 
20:10:21.805760 containerd[1472]: 2026-04-13 20:10:21.755 [INFO][3907] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb" Namespace="kube-system" Pod="coredns-66bc5c9577-vdbdt" WorkloadEndpoint="172--239--193--191-k8s-coredns--66bc5c9577--vdbdt-eth0" Apr 13 20:10:21.805760 containerd[1472]: 2026-04-13 20:10:21.761 [INFO][3907] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb" Namespace="kube-system" Pod="coredns-66bc5c9577-vdbdt" WorkloadEndpoint="172--239--193--191-k8s-coredns--66bc5c9577--vdbdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-coredns--66bc5c9577--vdbdt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5778e305-4fb3-40cf-9eb5-2894d58c2771", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb", Pod:"coredns-66bc5c9577-vdbdt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68fedc07b6e", MAC:"fa:5f:f8:52:20:b4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:21.805760 containerd[1472]: 2026-04-13 20:10:21.783 [INFO][3907] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb" Namespace="kube-system" Pod="coredns-66bc5c9577-vdbdt" WorkloadEndpoint="172--239--193--191-k8s-coredns--66bc5c9577--vdbdt-eth0" Apr 13 20:10:21.807989 systemd[1]: Started cri-containerd-661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651.scope - libcontainer container 661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651. 
Apr 13 20:10:21.856509 systemd-networkd[1382]: cali5c1ba5c7073: Gained IPv6LL Apr 13 20:10:21.871444 containerd[1472]: time="2026-04-13T20:10:21.870839726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:21.871444 containerd[1472]: time="2026-04-13T20:10:21.871098925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:21.872358 containerd[1472]: time="2026-04-13T20:10:21.871887002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:21.874289 containerd[1472]: time="2026-04-13T20:10:21.873954774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:21.888971 containerd[1472]: time="2026-04-13T20:10:21.888596288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4mzgf,Uid:b083f9b4-7da6-4a64-b37b-aa5d508c2e7f,Namespace:calico-system,Attempt:1,} returns sandbox id \"573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530\"" Apr 13 20:10:21.914623 systemd[1]: Started cri-containerd-e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb.scope - libcontainer container e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb. Apr 13 20:10:21.976239 containerd[1472]: time="2026-04-13T20:10:21.976166871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f7c875498-2d6k7,Uid:b6e5d9c6-9877-40bb-9431-db7ead187c0a,Namespace:calico-system,Attempt:0,} returns sandbox id \"525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0\"" Apr 13 20:10:21.996181 containerd[1472]: time="2026-04-13T20:10:21.995934235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-vdbdt,Uid:5778e305-4fb3-40cf-9eb5-2894d58c2771,Namespace:kube-system,Attempt:1,} returns sandbox id \"e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb\"" Apr 13 20:10:21.997003 kubelet[2578]: E0413 20:10:21.996694 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:10:22.006451 containerd[1472]: time="2026-04-13T20:10:22.006249677Z" level=info msg="CreateContainer within sandbox \"e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 20:10:22.021717 containerd[1472]: time="2026-04-13T20:10:22.021685541Z" level=info msg="CreateContainer within sandbox \"e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"457452a005361521e84d7bb6c9f6ecdcecb321b5b4456870a85b235561df054f\"" Apr 13 20:10:22.023868 containerd[1472]: time="2026-04-13T20:10:22.023247596Z" level=info msg="StartContainer for \"457452a005361521e84d7bb6c9f6ecdcecb321b5b4456870a85b235561df054f\"" Apr 13 20:10:22.046708 systemd-networkd[1382]: cali4e6b0479f2b: Gained IPv6LL Apr 13 20:10:22.050838 containerd[1472]: time="2026-04-13T20:10:22.050786937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c7c48779c-gk7jr,Uid:ab7d1268-0475-4d90-b5c2-1c8713e6aafb,Namespace:calico-system,Attempt:1,} returns sandbox id 
\"f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562\"" Apr 13 20:10:22.064599 systemd[1]: Started cri-containerd-457452a005361521e84d7bb6c9f6ecdcecb321b5b4456870a85b235561df054f.scope - libcontainer container 457452a005361521e84d7bb6c9f6ecdcecb321b5b4456870a85b235561df054f. Apr 13 20:10:22.106414 containerd[1472]: time="2026-04-13T20:10:22.106371046Z" level=info msg="StartContainer for \"457452a005361521e84d7bb6c9f6ecdcecb321b5b4456870a85b235561df054f\" returns successfully" Apr 13 20:10:22.141613 containerd[1472]: time="2026-04-13T20:10:22.141574309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6df68c9d4f-lx9f8,Uid:46975966-bb29-4145-9c1a-fe60aed66e16,Namespace:calico-system,Attempt:1,} returns sandbox id \"661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651\"" Apr 13 20:10:22.239542 kernel: calico-node[3993]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 13 20:10:22.481784 kubelet[2578]: I0413 20:10:22.481743 2578 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92586f17-ea3a-4af3-aa4e-c720d02f8e41" path="/var/lib/kubelet/pods/92586f17-ea3a-4af3-aa4e-c720d02f8e41/volumes" Apr 13 20:10:22.717737 kubelet[2578]: E0413 20:10:22.717670 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:10:22.734914 kubelet[2578]: I0413 20:10:22.734293 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-vdbdt" podStartSLOduration=24.734280114 podStartE2EDuration="24.734280114s" podCreationTimestamp="2026-04-13 20:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:10:22.731661873 +0000 UTC m=+32.356196695" watchObservedRunningTime="2026-04-13 20:10:22.734280114 +0000 UTC m=+32.358814906" Apr 13 20:10:22.743838 kubelet[2578]: E0413 20:10:22.743390 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:10:22.750195 systemd-networkd[1382]: calie8206b42745: Gained IPv6LL Apr 13 20:10:22.941076 systemd-networkd[1382]: cali4c18fb6d362: Gained IPv6LL Apr 13 20:10:22.961606 systemd-networkd[1382]: vxlan.calico: Link UP Apr 13 20:10:22.961614 systemd-networkd[1382]: vxlan.calico: Gained carrier Apr 13 20:10:23.069810 systemd-networkd[1382]: cali68fedc07b6e: Gained IPv6LL Apr 13 20:10:23.134747 systemd-networkd[1382]: cali574c7d32be1: Gained IPv6LL Apr 13 20:10:23.196682 systemd-networkd[1382]: calib47cfe966c0: Gained IPv6LL Apr 13 20:10:23.644757 systemd-networkd[1382]: cali5ccc0a88cb0: Gained IPv6LL Apr 13 20:10:23.746724 kubelet[2578]: E0413 20:10:23.746264 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:10:23.746724 kubelet[2578]: E0413 20:10:23.746351 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:10:23.813787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3920553786.mount: Deactivated successfully. 
Apr 13 20:10:24.152065 containerd[1472]: time="2026-04-13T20:10:24.152027457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:24.153169 containerd[1472]: time="2026-04-13T20:10:24.153124723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 13 20:10:24.153661 containerd[1472]: time="2026-04-13T20:10:24.153635472Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:24.156449 containerd[1472]: time="2026-04-13T20:10:24.155771425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:24.156804 containerd[1472]: time="2026-04-13T20:10:24.156779512Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.453217143s" Apr 13 20:10:24.156879 containerd[1472]: time="2026-04-13T20:10:24.156863191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 13 20:10:24.157767 systemd-networkd[1382]: vxlan.calico: Gained IPv6LL Apr 13 20:10:24.159320 containerd[1472]: time="2026-04-13T20:10:24.159281974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 20:10:24.164515 containerd[1472]: time="2026-04-13T20:10:24.163294111Z" level=info msg="CreateContainer within sandbox \"5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 13 20:10:24.181472 containerd[1472]: time="2026-04-13T20:10:24.180302117Z" level=info msg="CreateContainer within sandbox \"5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"3bdcf9c46d4ce3a8ca2e7acb97836955ff4565a0ca72c01e98068a4eba4a839c\"" Apr 13 20:10:24.181691 containerd[1472]: time="2026-04-13T20:10:24.181672193Z" level=info msg="StartContainer for \"3bdcf9c46d4ce3a8ca2e7acb97836955ff4565a0ca72c01e98068a4eba4a839c\"" Apr 13 20:10:24.185636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1681376331.mount: Deactivated successfully. Apr 13 20:10:24.240564 systemd[1]: Started cri-containerd-3bdcf9c46d4ce3a8ca2e7acb97836955ff4565a0ca72c01e98068a4eba4a839c.scope - libcontainer container 3bdcf9c46d4ce3a8ca2e7acb97836955ff4565a0ca72c01e98068a4eba4a839c. 
Apr 13 20:10:24.285227 containerd[1472]: time="2026-04-13T20:10:24.285192245Z" level=info msg="StartContainer for \"3bdcf9c46d4ce3a8ca2e7acb97836955ff4565a0ca72c01e98068a4eba4a839c\" returns successfully" Apr 13 20:10:24.752736 kubelet[2578]: E0413 20:10:24.752295 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:10:24.813879 systemd[1]: run-containerd-runc-k8s.io-3bdcf9c46d4ce3a8ca2e7acb97836955ff4565a0ca72c01e98068a4eba4a839c-runc.Qap90y.mount: Deactivated successfully. Apr 13 20:10:24.990502 kubelet[2578]: I0413 20:10:24.988767 2578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:10:25.116086 kubelet[2578]: I0413 20:10:25.115663 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-dvmjq" podStartSLOduration=15.656978115 podStartE2EDuration="18.115649138s" podCreationTimestamp="2026-04-13 20:10:07 +0000 UTC" firstStartedPulling="2026-04-13 20:10:21.699546254 +0000 UTC m=+31.324081046" lastFinishedPulling="2026-04-13 20:10:24.158217277 +0000 UTC m=+33.782752069" observedRunningTime="2026-04-13 20:10:24.77001367 +0000 UTC m=+34.394548472" watchObservedRunningTime="2026-04-13 20:10:25.115649138 +0000 UTC m=+34.740183940" Apr 13 20:10:25.145071 systemd[1]: run-containerd-runc-k8s.io-f32a3687dfe2845ae4d3ef400d0cc5bc47532248408a9102a23197c5359b17d2-runc.of1fOT.mount: Deactivated successfully. Apr 13 20:10:25.755200 kubelet[2578]: I0413 20:10:25.755174 2578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:10:25.791496 containerd[1472]: time="2026-04-13T20:10:25.790360765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:25.791820 containerd[1472]: time="2026-04-13T20:10:25.791640301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 13 20:10:25.792669 containerd[1472]: time="2026-04-13T20:10:25.792641318Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:25.795322 containerd[1472]: time="2026-04-13T20:10:25.795297530Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:25.796803 containerd[1472]: time="2026-04-13T20:10:25.796357787Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.637052693s" Apr 13 20:10:25.796803 containerd[1472]: time="2026-04-13T20:10:25.796383497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 13 20:10:25.801437 containerd[1472]: time="2026-04-13T20:10:25.798734670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 13 20:10:25.803274 containerd[1472]: 
time="2026-04-13T20:10:25.803234177Z" level=info msg="CreateContainer within sandbox \"7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 20:10:25.815524 containerd[1472]: time="2026-04-13T20:10:25.815433360Z" level=info msg="CreateContainer within sandbox \"7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0935765472b6fda68d79978882f35b63a3ed4853d9a230cb51318b53f828dc20\"" Apr 13 20:10:25.819744 containerd[1472]: time="2026-04-13T20:10:25.819640358Z" level=info msg="StartContainer for \"0935765472b6fda68d79978882f35b63a3ed4853d9a230cb51318b53f828dc20\"" Apr 13 20:10:25.864566 systemd[1]: Started cri-containerd-0935765472b6fda68d79978882f35b63a3ed4853d9a230cb51318b53f828dc20.scope - libcontainer container 0935765472b6fda68d79978882f35b63a3ed4853d9a230cb51318b53f828dc20. Apr 13 20:10:25.917385 containerd[1472]: time="2026-04-13T20:10:25.917344688Z" level=info msg="StartContainer for \"0935765472b6fda68d79978882f35b63a3ed4853d9a230cb51318b53f828dc20\" returns successfully" Apr 13 20:10:26.887967 containerd[1472]: time="2026-04-13T20:10:26.887913941Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:26.890441 containerd[1472]: time="2026-04-13T20:10:26.889279787Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 13 20:10:26.890441 containerd[1472]: time="2026-04-13T20:10:26.889896145Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:26.892581 containerd[1472]: time="2026-04-13T20:10:26.892552578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:26.894054 containerd[1472]: time="2026-04-13T20:10:26.894010704Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.095251994s" Apr 13 20:10:26.894054 containerd[1472]: time="2026-04-13T20:10:26.894044334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 13 20:10:26.896505 containerd[1472]: time="2026-04-13T20:10:26.896473087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 13 20:10:26.899306 containerd[1472]: time="2026-04-13T20:10:26.899278109Z" level=info msg="CreateContainer within sandbox \"573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 13 20:10:26.927447 containerd[1472]: time="2026-04-13T20:10:26.924674619Z" level=info msg="CreateContainer within sandbox \"573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b0019011dca11db4d093b265071af08e581d55250d474d00596820034d748359\"" Apr 13 
20:10:26.927447 containerd[1472]: time="2026-04-13T20:10:26.926302604Z" level=info msg="StartContainer for \"b0019011dca11db4d093b265071af08e581d55250d474d00596820034d748359\"" Apr 13 20:10:26.927798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2240006400.mount: Deactivated successfully. Apr 13 20:10:26.967785 systemd[1]: run-containerd-runc-k8s.io-b0019011dca11db4d093b265071af08e581d55250d474d00596820034d748359-runc.3P9SFZ.mount: Deactivated successfully. Apr 13 20:10:26.978543 systemd[1]: Started cri-containerd-b0019011dca11db4d093b265071af08e581d55250d474d00596820034d748359.scope - libcontainer container b0019011dca11db4d093b265071af08e581d55250d474d00596820034d748359. Apr 13 20:10:27.012133 containerd[1472]: time="2026-04-13T20:10:27.012099497Z" level=info msg="StartContainer for \"b0019011dca11db4d093b265071af08e581d55250d474d00596820034d748359\" returns successfully" Apr 13 20:10:27.625472 containerd[1472]: time="2026-04-13T20:10:27.625402247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:27.626272 containerd[1472]: time="2026-04-13T20:10:27.626241855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 13 20:10:27.626839 containerd[1472]: time="2026-04-13T20:10:27.626799144Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:27.628558 containerd[1472]: time="2026-04-13T20:10:27.628520329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:27.629510 containerd[1472]: time="2026-04-13T20:10:27.629393927Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 732.8881ms" Apr 13 20:10:27.629510 containerd[1472]: time="2026-04-13T20:10:27.629436087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 13 20:10:27.630975 containerd[1472]: time="2026-04-13T20:10:27.630955563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 13 20:10:27.633003 containerd[1472]: time="2026-04-13T20:10:27.632947197Z" level=info msg="CreateContainer within sandbox \"525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 13 20:10:27.646688 containerd[1472]: time="2026-04-13T20:10:27.646662322Z" level=info msg="CreateContainer within sandbox \"525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"088e8b43bad254585080e7d04ed1078757b21ec3dc362fc84c303fe670422a04\"" Apr 13 20:10:27.647403 containerd[1472]: time="2026-04-13T20:10:27.647320140Z" level=info msg="StartContainer for \"088e8b43bad254585080e7d04ed1078757b21ec3dc362fc84c303fe670422a04\"" Apr 13 20:10:27.677580 systemd[1]: Started 
cri-containerd-088e8b43bad254585080e7d04ed1078757b21ec3dc362fc84c303fe670422a04.scope - libcontainer container 088e8b43bad254585080e7d04ed1078757b21ec3dc362fc84c303fe670422a04. Apr 13 20:10:27.736920 containerd[1472]: time="2026-04-13T20:10:27.736875056Z" level=info msg="StartContainer for \"088e8b43bad254585080e7d04ed1078757b21ec3dc362fc84c303fe670422a04\" returns successfully" Apr 13 20:10:27.768204 kubelet[2578]: I0413 20:10:27.767786 2578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:10:30.740183 containerd[1472]: time="2026-04-13T20:10:30.740132509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:30.742517 containerd[1472]: time="2026-04-13T20:10:30.742489313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 13 20:10:30.742840 containerd[1472]: time="2026-04-13T20:10:30.742818123Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:30.747494 containerd[1472]: time="2026-04-13T20:10:30.747471293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:30.748902 containerd[1472]: time="2026-04-13T20:10:30.748874090Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.117827728s" Apr 13 20:10:30.749078 containerd[1472]: time="2026-04-13T20:10:30.749060579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 13 20:10:30.751906 containerd[1472]: time="2026-04-13T20:10:30.751882553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 20:10:30.776943 containerd[1472]: time="2026-04-13T20:10:30.776904539Z" level=info msg="CreateContainer within sandbox \"f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 13 20:10:30.801547 containerd[1472]: time="2026-04-13T20:10:30.801374487Z" level=info msg="CreateContainer within sandbox \"f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d057a55295c363bfb72035b28c861c193da4fd66947d28e438cf5d0443c29d05\"" Apr 13 20:10:30.805933 containerd[1472]: time="2026-04-13T20:10:30.804526690Z" level=info msg="StartContainer for \"d057a55295c363bfb72035b28c861c193da4fd66947d28e438cf5d0443c29d05\"" Apr 13 20:10:30.854578 systemd[1]: Started cri-containerd-d057a55295c363bfb72035b28c861c193da4fd66947d28e438cf5d0443c29d05.scope - libcontainer container d057a55295c363bfb72035b28c861c193da4fd66947d28e438cf5d0443c29d05. 
Apr 13 20:10:30.932812 containerd[1472]: time="2026-04-13T20:10:30.932726074Z" level=info msg="StartContainer for \"d057a55295c363bfb72035b28c861c193da4fd66947d28e438cf5d0443c29d05\" returns successfully" Apr 13 20:10:30.964212 containerd[1472]: time="2026-04-13T20:10:30.964156727Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:30.964857 containerd[1472]: time="2026-04-13T20:10:30.964828815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 13 20:10:30.967572 containerd[1472]: time="2026-04-13T20:10:30.967550230Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 215.484027ms" Apr 13 20:10:30.967668 containerd[1472]: time="2026-04-13T20:10:30.967652239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 13 20:10:30.969474 containerd[1472]: time="2026-04-13T20:10:30.969376946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 13 20:10:30.975375 containerd[1472]: time="2026-04-13T20:10:30.975354913Z" level=info msg="CreateContainer within sandbox \"661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 20:10:30.986875 containerd[1472]: time="2026-04-13T20:10:30.986851478Z" level=info msg="CreateContainer within sandbox \"661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"22a0173efa04eb3bfe2b8c15c4b3cd6c7922371b080c4d03c041642aba870408\"" Apr 13 20:10:30.994808 containerd[1472]: time="2026-04-13T20:10:30.994504192Z" level=info msg="StartContainer for \"22a0173efa04eb3bfe2b8c15c4b3cd6c7922371b080c4d03c041642aba870408\"" Apr 13 20:10:31.041077 systemd[1]: Started cri-containerd-22a0173efa04eb3bfe2b8c15c4b3cd6c7922371b080c4d03c041642aba870408.scope - libcontainer container 22a0173efa04eb3bfe2b8c15c4b3cd6c7922371b080c4d03c041642aba870408. 
Apr 13 20:10:31.100378 containerd[1472]: time="2026-04-13T20:10:31.099652789Z" level=info msg="StartContainer for \"22a0173efa04eb3bfe2b8c15c4b3cd6c7922371b080c4d03c041642aba870408\" returns successfully" Apr 13 20:10:31.806321 kubelet[2578]: I0413 20:10:31.806252 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6df68c9d4f-jpz9w" podStartSLOduration=20.759712207 podStartE2EDuration="24.806238355s" podCreationTimestamp="2026-04-13 20:10:07 +0000 UTC" firstStartedPulling="2026-04-13 20:10:21.751349365 +0000 UTC m=+31.375884157" lastFinishedPulling="2026-04-13 20:10:25.797875503 +0000 UTC m=+35.422410305" observedRunningTime="2026-04-13 20:10:26.772506712 +0000 UTC m=+36.397041504" watchObservedRunningTime="2026-04-13 20:10:31.806238355 +0000 UTC m=+41.430773147" Apr 13 20:10:31.823498 kubelet[2578]: I0413 20:10:31.822104 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6df68c9d4f-lx9f8" podStartSLOduration=15.997492376 podStartE2EDuration="24.822087653s" podCreationTimestamp="2026-04-13 20:10:07 +0000 UTC" firstStartedPulling="2026-04-13 20:10:22.1440602 +0000 UTC m=+31.768594992" lastFinishedPulling="2026-04-13 20:10:30.968655477 +0000 UTC m=+40.593190269" observedRunningTime="2026-04-13 20:10:31.819202549 +0000 UTC m=+41.443737341" watchObservedRunningTime="2026-04-13 20:10:31.822087653 +0000 UTC m=+41.446622445" Apr 13 20:10:31.823498 kubelet[2578]: I0413 20:10:31.822370 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7c7c48779c-gk7jr" podStartSLOduration=15.128494381 podStartE2EDuration="23.822366512s" podCreationTimestamp="2026-04-13 20:10:08 +0000 UTC" firstStartedPulling="2026-04-13 20:10:22.056670625 +0000 UTC m=+31.681205417" lastFinishedPulling="2026-04-13 20:10:30.750542756 +0000 UTC m=+40.375077548" observedRunningTime="2026-04-13 20:10:31.807439092 +0000 UTC m=+41.431973884" watchObservedRunningTime="2026-04-13 20:10:31.822366512 +0000 UTC m=+41.446901304" Apr 13 20:10:31.975009 containerd[1472]: time="2026-04-13T20:10:31.974953215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:31.977096 containerd[1472]: time="2026-04-13T20:10:31.975982643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 13 20:10:31.977096 containerd[1472]: time="2026-04-13T20:10:31.976538001Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:31.980244 containerd[1472]: time="2026-04-13T20:10:31.980096134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:31.980879 containerd[1472]: time="2026-04-13T20:10:31.980845113Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size 
\"16260314\" in 1.011442687s" Apr 13 20:10:31.980926 containerd[1472]: time="2026-04-13T20:10:31.980880413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 13 20:10:31.983618 containerd[1472]: time="2026-04-13T20:10:31.983579117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 13 20:10:31.987224 containerd[1472]: time="2026-04-13T20:10:31.987164330Z" level=info msg="CreateContainer within sandbox \"573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 13 20:10:32.018636 containerd[1472]: time="2026-04-13T20:10:32.018349829Z" level=info msg="CreateContainer within sandbox \"573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e10c76418312a788390dcfa2ad12c8446545b8fd69f319c8fcca5678f64a2ce2\"" Apr 13 20:10:32.020384 containerd[1472]: time="2026-04-13T20:10:32.020352136Z" level=info msg="StartContainer for \"e10c76418312a788390dcfa2ad12c8446545b8fd69f319c8fcca5678f64a2ce2\"" Apr 13 20:10:32.084547 systemd[1]: Started cri-containerd-e10c76418312a788390dcfa2ad12c8446545b8fd69f319c8fcca5678f64a2ce2.scope - libcontainer container e10c76418312a788390dcfa2ad12c8446545b8fd69f319c8fcca5678f64a2ce2. Apr 13 20:10:32.131552 containerd[1472]: time="2026-04-13T20:10:32.131478846Z" level=info msg="StartContainer for \"e10c76418312a788390dcfa2ad12c8446545b8fd69f319c8fcca5678f64a2ce2\" returns successfully" Apr 13 20:10:32.568740 kubelet[2578]: I0413 20:10:32.568700 2578 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 13 20:10:32.570392 kubelet[2578]: I0413 20:10:32.570242 2578 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 13 20:10:32.807586 kubelet[2578]: I0413 20:10:32.807564 2578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:10:32.809466 kubelet[2578]: I0413 20:10:32.808218 2578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:10:33.031314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2575703242.mount: Deactivated successfully. 
Apr 13 20:10:33.042368 containerd[1472]: time="2026-04-13T20:10:33.042321569Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:33.043251 containerd[1472]: time="2026-04-13T20:10:33.043217228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 13 20:10:33.043764 containerd[1472]: time="2026-04-13T20:10:33.043714477Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:33.045595 containerd[1472]: time="2026-04-13T20:10:33.045565574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:33.046636 containerd[1472]: time="2026-04-13T20:10:33.046528712Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.062915555s" Apr 13 20:10:33.046636 containerd[1472]: time="2026-04-13T20:10:33.046557242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 13 20:10:33.051071 containerd[1472]: time="2026-04-13T20:10:33.050966324Z" level=info msg="CreateContainer within sandbox \"525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 13 20:10:33.065907 containerd[1472]: time="2026-04-13T20:10:33.065882117Z" level=info msg="CreateContainer within sandbox \"525cafffb45824d81ab92fd43547f2d951a12779c41a7abfc9a707275ddbf5e0\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9f560e82feeb3df97fa50cf5b67ab13ec2a9395511fa880db1dbe51ed21015c5\"" Apr 13 20:10:33.067307 containerd[1472]: time="2026-04-13T20:10:33.066408516Z" level=info msg="StartContainer for \"9f560e82feeb3df97fa50cf5b67ab13ec2a9395511fa880db1dbe51ed21015c5\"" Apr 13 20:10:33.105574 systemd[1]: Started cri-containerd-9f560e82feeb3df97fa50cf5b67ab13ec2a9395511fa880db1dbe51ed21015c5.scope - libcontainer container 9f560e82feeb3df97fa50cf5b67ab13ec2a9395511fa880db1dbe51ed21015c5. 
Apr 13 20:10:33.149341 containerd[1472]: time="2026-04-13T20:10:33.148642671Z" level=info msg="StartContainer for \"9f560e82feeb3df97fa50cf5b67ab13ec2a9395511fa880db1dbe51ed21015c5\" returns successfully" Apr 13 20:10:33.825396 kubelet[2578]: I0413 20:10:33.824509 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4mzgf" podStartSLOduration=15.733487061 podStartE2EDuration="25.824490694s" podCreationTimestamp="2026-04-13 20:10:08 +0000 UTC" firstStartedPulling="2026-04-13 20:10:21.891269517 +0000 UTC m=+31.515804309" lastFinishedPulling="2026-04-13 20:10:31.98227315 +0000 UTC m=+41.606807942" observedRunningTime="2026-04-13 20:10:32.821417342 +0000 UTC m=+42.445952134" watchObservedRunningTime="2026-04-13 20:10:33.824490694 +0000 UTC m=+43.449025486" Apr 13 20:10:39.730998 kubelet[2578]: I0413 20:10:39.730637 2578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:10:39.815285 kubelet[2578]: I0413 20:10:39.812844 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-f7c875498-2d6k7" podStartSLOduration=8.745033292 podStartE2EDuration="19.812828894s" podCreationTimestamp="2026-04-13 20:10:20 +0000 UTC" firstStartedPulling="2026-04-13 20:10:21.979689868 +0000 UTC m=+31.604224660" lastFinishedPulling="2026-04-13 20:10:33.04748546 +0000 UTC m=+42.672020262" observedRunningTime="2026-04-13 20:10:33.82632625 +0000 UTC m=+43.450861042" watchObservedRunningTime="2026-04-13 20:10:39.812828894 +0000 UTC m=+49.437363686" Apr 13 20:10:39.831595 systemd[1]: run-containerd-runc-k8s.io-3bdcf9c46d4ce3a8ca2e7acb97836955ff4565a0ca72c01e98068a4eba4a839c-runc.cydhDb.mount: Deactivated successfully. Apr 13 20:10:42.582595 kubelet[2578]: I0413 20:10:42.582358 2578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:10:48.420493 kubelet[2578]: I0413 20:10:48.420046 2578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:10:50.468599 containerd[1472]: time="2026-04-13T20:10:50.468561310Z" level=info msg="StopPodSandbox for \"1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2\"" Apr 13 20:10:50.547924 containerd[1472]: 2026-04-13 20:10:50.507 [WARNING][5266] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-calico--apiserver--6df68c9d4f--lx9f8-eth0", GenerateName:"calico-apiserver-6df68c9d4f-", Namespace:"calico-system", SelfLink:"", UID:"46975966-bb29-4145-9c1a-fe60aed66e16", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df68c9d4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651", Pod:"calico-apiserver-6df68c9d4f-lx9f8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5ccc0a88cb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:50.547924 containerd[1472]: 2026-04-13 20:10:50.507 [INFO][5266] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" Apr 13 20:10:50.547924 containerd[1472]: 2026-04-13 20:10:50.507 [INFO][5266] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" iface="eth0" netns="" Apr 13 20:10:50.547924 containerd[1472]: 2026-04-13 20:10:50.507 [INFO][5266] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" Apr 13 20:10:50.547924 containerd[1472]: 2026-04-13 20:10:50.507 [INFO][5266] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" Apr 13 20:10:50.547924 containerd[1472]: 2026-04-13 20:10:50.533 [INFO][5275] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" HandleID="k8s-pod-network.1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" Workload="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--lx9f8-eth0" Apr 13 20:10:50.547924 containerd[1472]: 2026-04-13 20:10:50.533 [INFO][5275] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:50.547924 containerd[1472]: 2026-04-13 20:10:50.533 [INFO][5275] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:50.547924 containerd[1472]: 2026-04-13 20:10:50.540 [WARNING][5275] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" HandleID="k8s-pod-network.1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" Workload="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--lx9f8-eth0" Apr 13 20:10:50.547924 containerd[1472]: 2026-04-13 20:10:50.540 [INFO][5275] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" HandleID="k8s-pod-network.1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" Workload="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--lx9f8-eth0" Apr 13 20:10:50.547924 containerd[1472]: 2026-04-13 20:10:50.542 [INFO][5275] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:50.547924 containerd[1472]: 2026-04-13 20:10:50.545 [INFO][5266] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" Apr 13 20:10:50.548326 containerd[1472]: time="2026-04-13T20:10:50.547943463Z" level=info msg="TearDown network for sandbox \"1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2\" successfully" Apr 13 20:10:50.548326 containerd[1472]: time="2026-04-13T20:10:50.547971083Z" level=info msg="StopPodSandbox for \"1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2\" returns successfully" Apr 13 20:10:50.548839 containerd[1472]: time="2026-04-13T20:10:50.548574373Z" level=info msg="RemovePodSandbox for \"1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2\"" Apr 13 20:10:50.548839 containerd[1472]: time="2026-04-13T20:10:50.548600143Z" level=info msg="Forcibly stopping sandbox \"1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2\"" Apr 13 20:10:50.631915 containerd[1472]: 2026-04-13 20:10:50.591 [WARNING][5289] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-calico--apiserver--6df68c9d4f--lx9f8-eth0", GenerateName:"calico-apiserver-6df68c9d4f-", Namespace:"calico-system", SelfLink:"", UID:"46975966-bb29-4145-9c1a-fe60aed66e16", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df68c9d4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"661c969c5e46146b547170f1473bd941e5d1ddd78310c9813b1fc25037aa5651", Pod:"calico-apiserver-6df68c9d4f-lx9f8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5ccc0a88cb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:50.631915 containerd[1472]: 2026-04-13 20:10:50.592 [INFO][5289] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" Apr 13 20:10:50.631915 containerd[1472]: 2026-04-13 20:10:50.592 [INFO][5289] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" iface="eth0" netns="" Apr 13 20:10:50.631915 containerd[1472]: 2026-04-13 20:10:50.592 [INFO][5289] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" Apr 13 20:10:50.631915 containerd[1472]: 2026-04-13 20:10:50.592 [INFO][5289] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" Apr 13 20:10:50.631915 containerd[1472]: 2026-04-13 20:10:50.618 [INFO][5296] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" HandleID="k8s-pod-network.1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" Workload="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--lx9f8-eth0" Apr 13 20:10:50.631915 containerd[1472]: 2026-04-13 20:10:50.618 [INFO][5296] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:50.631915 containerd[1472]: 2026-04-13 20:10:50.618 [INFO][5296] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:50.631915 containerd[1472]: 2026-04-13 20:10:50.624 [WARNING][5296] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" HandleID="k8s-pod-network.1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" Workload="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--lx9f8-eth0" Apr 13 20:10:50.631915 containerd[1472]: 2026-04-13 20:10:50.624 [INFO][5296] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" HandleID="k8s-pod-network.1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" Workload="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--lx9f8-eth0" Apr 13 20:10:50.631915 containerd[1472]: 2026-04-13 20:10:50.627 [INFO][5296] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:50.631915 containerd[1472]: 2026-04-13 20:10:50.629 [INFO][5289] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2" Apr 13 20:10:50.632384 containerd[1472]: time="2026-04-13T20:10:50.632352523Z" level=info msg="TearDown network for sandbox \"1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2\" successfully" Apr 13 20:10:50.637403 containerd[1472]: time="2026-04-13T20:10:50.637187500Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 20:10:50.637509 containerd[1472]: time="2026-04-13T20:10:50.637412690Z" level=info msg="RemovePodSandbox \"1784bec61c1e89a71f81396acdb8edcf7068d88f7226755a59567a0364155ef2\" returns successfully" Apr 13 20:10:50.638118 containerd[1472]: time="2026-04-13T20:10:50.637857830Z" level=info msg="StopPodSandbox for \"3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578\"" Apr 13 20:10:50.714826 containerd[1472]: 2026-04-13 20:10:50.674 [WARNING][5310] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-calico--kube--controllers--7c7c48779c--gk7jr-eth0", GenerateName:"calico-kube-controllers-7c7c48779c-", Namespace:"calico-system", SelfLink:"", UID:"ab7d1268-0475-4d90-b5c2-1c8713e6aafb", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c7c48779c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562", Pod:"calico-kube-controllers-7c7c48779c-gk7jr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib47cfe966c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:50.714826 containerd[1472]: 2026-04-13 20:10:50.674 [INFO][5310] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" Apr 13 20:10:50.714826 containerd[1472]: 2026-04-13 20:10:50.674 [INFO][5310] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" iface="eth0" netns="" Apr 13 20:10:50.714826 containerd[1472]: 2026-04-13 20:10:50.674 [INFO][5310] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" Apr 13 20:10:50.714826 containerd[1472]: 2026-04-13 20:10:50.674 [INFO][5310] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" Apr 13 20:10:50.714826 containerd[1472]: 2026-04-13 20:10:50.698 [INFO][5319] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" HandleID="k8s-pod-network.3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" Workload="172--239--193--191-k8s-calico--kube--controllers--7c7c48779c--gk7jr-eth0" Apr 13 20:10:50.714826 containerd[1472]: 2026-04-13 20:10:50.698 [INFO][5319] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:50.714826 containerd[1472]: 2026-04-13 20:10:50.698 [INFO][5319] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:50.714826 containerd[1472]: 2026-04-13 20:10:50.706 [WARNING][5319] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" HandleID="k8s-pod-network.3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" Workload="172--239--193--191-k8s-calico--kube--controllers--7c7c48779c--gk7jr-eth0" Apr 13 20:10:50.714826 containerd[1472]: 2026-04-13 20:10:50.706 [INFO][5319] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" HandleID="k8s-pod-network.3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" Workload="172--239--193--191-k8s-calico--kube--controllers--7c7c48779c--gk7jr-eth0" Apr 13 20:10:50.714826 containerd[1472]: 2026-04-13 20:10:50.708 [INFO][5319] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:50.714826 containerd[1472]: 2026-04-13 20:10:50.712 [INFO][5310] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" Apr 13 20:10:50.715952 containerd[1472]: time="2026-04-13T20:10:50.714851904Z" level=info msg="TearDown network for sandbox \"3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578\" successfully" Apr 13 20:10:50.715952 containerd[1472]: time="2026-04-13T20:10:50.714874604Z" level=info msg="StopPodSandbox for \"3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578\" returns successfully" Apr 13 20:10:50.715952 containerd[1472]: time="2026-04-13T20:10:50.715533514Z" level=info msg="RemovePodSandbox for \"3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578\"" Apr 13 20:10:50.715952 containerd[1472]: time="2026-04-13T20:10:50.715555954Z" level=info msg="Forcibly stopping sandbox \"3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578\"" Apr 13 20:10:50.791527 containerd[1472]: 2026-04-13 20:10:50.752 [WARNING][5334] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-calico--kube--controllers--7c7c48779c--gk7jr-eth0", GenerateName:"calico-kube-controllers-7c7c48779c-", Namespace:"calico-system", SelfLink:"", UID:"ab7d1268-0475-4d90-b5c2-1c8713e6aafb", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c7c48779c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"f91775743322b03418caeed0e15f35c0d015d958bf78c7e4175fccc1d3280562", Pod:"calico-kube-controllers-7c7c48779c-gk7jr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib47cfe966c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:50.791527 containerd[1472]: 2026-04-13 20:10:50.753 [INFO][5334] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" Apr 13 20:10:50.791527 containerd[1472]: 2026-04-13 20:10:50.753 [INFO][5334] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" iface="eth0" netns="" Apr 13 20:10:50.791527 containerd[1472]: 2026-04-13 20:10:50.753 [INFO][5334] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" Apr 13 20:10:50.791527 containerd[1472]: 2026-04-13 20:10:50.753 [INFO][5334] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" Apr 13 20:10:50.791527 containerd[1472]: 2026-04-13 20:10:50.774 [INFO][5341] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" HandleID="k8s-pod-network.3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" Workload="172--239--193--191-k8s-calico--kube--controllers--7c7c48779c--gk7jr-eth0" Apr 13 20:10:50.791527 containerd[1472]: 2026-04-13 20:10:50.775 [INFO][5341] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:50.791527 containerd[1472]: 2026-04-13 20:10:50.775 [INFO][5341] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:50.791527 containerd[1472]: 2026-04-13 20:10:50.782 [WARNING][5341] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" HandleID="k8s-pod-network.3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" Workload="172--239--193--191-k8s-calico--kube--controllers--7c7c48779c--gk7jr-eth0" Apr 13 20:10:50.791527 containerd[1472]: 2026-04-13 20:10:50.782 [INFO][5341] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" HandleID="k8s-pod-network.3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" Workload="172--239--193--191-k8s-calico--kube--controllers--7c7c48779c--gk7jr-eth0" Apr 13 20:10:50.791527 containerd[1472]: 2026-04-13 20:10:50.784 [INFO][5341] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:50.791527 containerd[1472]: 2026-04-13 20:10:50.787 [INFO][5334] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578" Apr 13 20:10:50.791527 containerd[1472]: time="2026-04-13T20:10:50.789981350Z" level=info msg="TearDown network for sandbox \"3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578\" successfully" Apr 13 20:10:50.793716 containerd[1472]: time="2026-04-13T20:10:50.793667068Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 20:10:50.793809 containerd[1472]: time="2026-04-13T20:10:50.793733658Z" level=info msg="RemovePodSandbox \"3e081a503008815482eae7a90ca6686d5093b3f7157bd7539e0d9735b2f3a578\" returns successfully" Apr 13 20:10:50.794178 containerd[1472]: time="2026-04-13T20:10:50.794148108Z" level=info msg="StopPodSandbox for \"35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1\"" Apr 13 20:10:50.879148 containerd[1472]: 2026-04-13 20:10:50.830 [WARNING][5356] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-csi--node--driver--4mzgf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b083f9b4-7da6-4a64-b37b-aa5d508c2e7f", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530", Pod:"csi-node-driver-4mzgf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie8206b42745", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:50.879148 containerd[1472]: 2026-04-13 20:10:50.830 [INFO][5356] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" Apr 13 20:10:50.879148 containerd[1472]: 2026-04-13 20:10:50.830 [INFO][5356] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" iface="eth0" netns="" Apr 13 20:10:50.879148 containerd[1472]: 2026-04-13 20:10:50.831 [INFO][5356] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" Apr 13 20:10:50.879148 containerd[1472]: 2026-04-13 20:10:50.831 [INFO][5356] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" Apr 13 20:10:50.879148 containerd[1472]: 2026-04-13 20:10:50.854 [INFO][5363] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" HandleID="k8s-pod-network.35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" Workload="172--239--193--191-k8s-csi--node--driver--4mzgf-eth0" Apr 13 20:10:50.879148 containerd[1472]: 2026-04-13 20:10:50.854 [INFO][5363] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:50.879148 containerd[1472]: 2026-04-13 20:10:50.854 [INFO][5363] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:50.879148 containerd[1472]: 2026-04-13 20:10:50.868 [WARNING][5363] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" HandleID="k8s-pod-network.35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" Workload="172--239--193--191-k8s-csi--node--driver--4mzgf-eth0" Apr 13 20:10:50.879148 containerd[1472]: 2026-04-13 20:10:50.868 [INFO][5363] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" HandleID="k8s-pod-network.35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" Workload="172--239--193--191-k8s-csi--node--driver--4mzgf-eth0" Apr 13 20:10:50.879148 containerd[1472]: 2026-04-13 20:10:50.871 [INFO][5363] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:50.879148 containerd[1472]: 2026-04-13 20:10:50.875 [INFO][5356] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" Apr 13 20:10:50.879148 containerd[1472]: time="2026-04-13T20:10:50.879046827Z" level=info msg="TearDown network for sandbox \"35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1\" successfully" Apr 13 20:10:50.879148 containerd[1472]: time="2026-04-13T20:10:50.879067477Z" level=info msg="StopPodSandbox for \"35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1\" returns successfully" Apr 13 20:10:50.882496 containerd[1472]: time="2026-04-13T20:10:50.881404516Z" level=info msg="RemovePodSandbox for \"35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1\"" Apr 13 20:10:50.882496 containerd[1472]: time="2026-04-13T20:10:50.881455626Z" level=info msg="Forcibly stopping sandbox \"35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1\"" Apr 13 20:10:50.960401 containerd[1472]: 2026-04-13 20:10:50.925 [WARNING][5379] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-csi--node--driver--4mzgf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b083f9b4-7da6-4a64-b37b-aa5d508c2e7f", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"573a9df5eda2ff7324c5839957c9e31c5a4c6750142f1d923d6236574254f530", Pod:"csi-node-driver-4mzgf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie8206b42745", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:50.960401 containerd[1472]: 2026-04-13 20:10:50.925 [INFO][5379] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" Apr 13 20:10:50.960401 containerd[1472]: 2026-04-13 20:10:50.925 [INFO][5379] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" iface="eth0" netns="" Apr 13 20:10:50.960401 containerd[1472]: 2026-04-13 20:10:50.925 [INFO][5379] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" Apr 13 20:10:50.960401 containerd[1472]: 2026-04-13 20:10:50.925 [INFO][5379] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" Apr 13 20:10:50.960401 containerd[1472]: 2026-04-13 20:10:50.948 [INFO][5386] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" HandleID="k8s-pod-network.35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" Workload="172--239--193--191-k8s-csi--node--driver--4mzgf-eth0" Apr 13 20:10:50.960401 containerd[1472]: 2026-04-13 20:10:50.948 [INFO][5386] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:50.960401 containerd[1472]: 2026-04-13 20:10:50.948 [INFO][5386] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:50.960401 containerd[1472]: 2026-04-13 20:10:50.953 [WARNING][5386] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" HandleID="k8s-pod-network.35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" Workload="172--239--193--191-k8s-csi--node--driver--4mzgf-eth0" Apr 13 20:10:50.960401 containerd[1472]: 2026-04-13 20:10:50.953 [INFO][5386] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" HandleID="k8s-pod-network.35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" Workload="172--239--193--191-k8s-csi--node--driver--4mzgf-eth0" Apr 13 20:10:50.960401 containerd[1472]: 2026-04-13 20:10:50.955 [INFO][5386] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:50.960401 containerd[1472]: 2026-04-13 20:10:50.957 [INFO][5379] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1" Apr 13 20:10:50.960995 containerd[1472]: time="2026-04-13T20:10:50.960435279Z" level=info msg="TearDown network for sandbox \"35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1\" successfully" Apr 13 20:10:50.964043 containerd[1472]: time="2026-04-13T20:10:50.964001437Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 20:10:50.964091 containerd[1472]: time="2026-04-13T20:10:50.964064247Z" level=info msg="RemovePodSandbox \"35dc507b0b3746ddfaf06cdf119c279b4f6d4512ba312ea4582b2cd202c955e1\" returns successfully" Apr 13 20:10:50.964815 containerd[1472]: time="2026-04-13T20:10:50.964791997Z" level=info msg="StopPodSandbox for \"215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30\"" Apr 13 20:10:51.029287 containerd[1472]: 2026-04-13 20:10:50.997 [WARNING][5401] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-goldmane--cccfbd5cf--dvmjq-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"3d4eb2d5-db0e-4d66-8113-637f0e2427c6", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1", Pod:"goldmane-cccfbd5cf-dvmjq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.91.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4c18fb6d362", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:51.029287 containerd[1472]: 2026-04-13 20:10:50.997 [INFO][5401] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" Apr 13 20:10:51.029287 containerd[1472]: 2026-04-13 20:10:50.997 [INFO][5401] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" iface="eth0" netns="" Apr 13 20:10:51.029287 containerd[1472]: 2026-04-13 20:10:50.997 [INFO][5401] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" Apr 13 20:10:51.029287 containerd[1472]: 2026-04-13 20:10:50.997 [INFO][5401] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" Apr 13 20:10:51.029287 containerd[1472]: 2026-04-13 20:10:51.017 [INFO][5408] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" HandleID="k8s-pod-network.215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" Workload="172--239--193--191-k8s-goldmane--cccfbd5cf--dvmjq-eth0" Apr 13 20:10:51.029287 containerd[1472]: 2026-04-13 20:10:51.017 [INFO][5408] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:51.029287 containerd[1472]: 2026-04-13 20:10:51.017 [INFO][5408] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:51.029287 containerd[1472]: 2026-04-13 20:10:51.023 [WARNING][5408] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" HandleID="k8s-pod-network.215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" Workload="172--239--193--191-k8s-goldmane--cccfbd5cf--dvmjq-eth0" Apr 13 20:10:51.029287 containerd[1472]: 2026-04-13 20:10:51.023 [INFO][5408] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" HandleID="k8s-pod-network.215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" Workload="172--239--193--191-k8s-goldmane--cccfbd5cf--dvmjq-eth0" Apr 13 20:10:51.029287 containerd[1472]: 2026-04-13 20:10:51.025 [INFO][5408] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:51.029287 containerd[1472]: 2026-04-13 20:10:51.027 [INFO][5401] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" Apr 13 20:10:51.029926 containerd[1472]: time="2026-04-13T20:10:51.029322259Z" level=info msg="TearDown network for sandbox \"215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30\" successfully" Apr 13 20:10:51.029926 containerd[1472]: time="2026-04-13T20:10:51.029346339Z" level=info msg="StopPodSandbox for \"215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30\" returns successfully" Apr 13 20:10:51.030110 containerd[1472]: time="2026-04-13T20:10:51.030089249Z" level=info msg="RemovePodSandbox for \"215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30\"" Apr 13 20:10:51.030153 containerd[1472]: time="2026-04-13T20:10:51.030115469Z" level=info msg="Forcibly stopping sandbox \"215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30\"" Apr 13 20:10:51.097456 containerd[1472]: 2026-04-13 20:10:51.063 [WARNING][5423] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-goldmane--cccfbd5cf--dvmjq-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"3d4eb2d5-db0e-4d66-8113-637f0e2427c6", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"5b7e6f320a411b1d764968880274678c816920cd33ba65ba86f253f644936ed1", Pod:"goldmane-cccfbd5cf-dvmjq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.91.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4c18fb6d362", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:51.097456 containerd[1472]: 2026-04-13 20:10:51.063 [INFO][5423] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" Apr 13 20:10:51.097456 containerd[1472]: 2026-04-13 20:10:51.064 [INFO][5423] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" iface="eth0" netns="" Apr 13 20:10:51.097456 containerd[1472]: 2026-04-13 20:10:51.064 [INFO][5423] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" Apr 13 20:10:51.097456 containerd[1472]: 2026-04-13 20:10:51.064 [INFO][5423] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" Apr 13 20:10:51.097456 containerd[1472]: 2026-04-13 20:10:51.081 [INFO][5431] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" HandleID="k8s-pod-network.215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" Workload="172--239--193--191-k8s-goldmane--cccfbd5cf--dvmjq-eth0" Apr 13 20:10:51.097456 containerd[1472]: 2026-04-13 20:10:51.082 [INFO][5431] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:51.097456 containerd[1472]: 2026-04-13 20:10:51.082 [INFO][5431] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:51.097456 containerd[1472]: 2026-04-13 20:10:51.089 [WARNING][5431] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" HandleID="k8s-pod-network.215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" Workload="172--239--193--191-k8s-goldmane--cccfbd5cf--dvmjq-eth0" Apr 13 20:10:51.097456 containerd[1472]: 2026-04-13 20:10:51.089 [INFO][5431] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" HandleID="k8s-pod-network.215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" Workload="172--239--193--191-k8s-goldmane--cccfbd5cf--dvmjq-eth0" Apr 13 20:10:51.097456 containerd[1472]: 2026-04-13 20:10:51.090 [INFO][5431] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:51.097456 containerd[1472]: 2026-04-13 20:10:51.093 [INFO][5423] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30" Apr 13 20:10:51.097456 containerd[1472]: time="2026-04-13T20:10:51.095268763Z" level=info msg="TearDown network for sandbox \"215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30\" successfully" Apr 13 20:10:51.099235 containerd[1472]: time="2026-04-13T20:10:51.099213771Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 20:10:51.099375 containerd[1472]: time="2026-04-13T20:10:51.099358501Z" level=info msg="RemovePodSandbox \"215b79dd96e5e4523ce0900e81d39f1315a74e837299ad0c67704e2854af4c30\" returns successfully" Apr 13 20:10:51.099923 containerd[1472]: time="2026-04-13T20:10:51.099903500Z" level=info msg="StopPodSandbox for \"ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c\"" Apr 13 20:10:51.175346 containerd[1472]: 2026-04-13 20:10:51.135 [WARNING][5446] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-calico--apiserver--6df68c9d4f--jpz9w-eth0", GenerateName:"calico-apiserver-6df68c9d4f-", Namespace:"calico-system", SelfLink:"", UID:"41a53f62-b7f4-40f3-882b-8cc9702c76d5", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df68c9d4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3", Pod:"calico-apiserver-6df68c9d4f-jpz9w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4e6b0479f2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:51.175346 containerd[1472]: 2026-04-13 20:10:51.135 [INFO][5446] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" Apr 13 20:10:51.175346 containerd[1472]: 2026-04-13 20:10:51.135 [INFO][5446] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" iface="eth0" netns="" Apr 13 20:10:51.175346 containerd[1472]: 2026-04-13 20:10:51.135 [INFO][5446] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" Apr 13 20:10:51.175346 containerd[1472]: 2026-04-13 20:10:51.135 [INFO][5446] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" Apr 13 20:10:51.175346 containerd[1472]: 2026-04-13 20:10:51.162 [INFO][5453] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" HandleID="k8s-pod-network.ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" Workload="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--jpz9w-eth0" Apr 13 20:10:51.175346 containerd[1472]: 2026-04-13 20:10:51.162 [INFO][5453] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:51.175346 containerd[1472]: 2026-04-13 20:10:51.163 [INFO][5453] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:51.175346 containerd[1472]: 2026-04-13 20:10:51.169 [WARNING][5453] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" HandleID="k8s-pod-network.ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" Workload="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--jpz9w-eth0" Apr 13 20:10:51.175346 containerd[1472]: 2026-04-13 20:10:51.169 [INFO][5453] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" HandleID="k8s-pod-network.ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" Workload="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--jpz9w-eth0" Apr 13 20:10:51.175346 containerd[1472]: 2026-04-13 20:10:51.170 [INFO][5453] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:51.175346 containerd[1472]: 2026-04-13 20:10:51.172 [INFO][5446] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" Apr 13 20:10:51.175746 containerd[1472]: time="2026-04-13T20:10:51.175522238Z" level=info msg="TearDown network for sandbox \"ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c\" successfully" Apr 13 20:10:51.175746 containerd[1472]: time="2026-04-13T20:10:51.175547308Z" level=info msg="StopPodSandbox for \"ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c\" returns successfully" Apr 13 20:10:51.176247 containerd[1472]: time="2026-04-13T20:10:51.176223568Z" level=info msg="RemovePodSandbox for \"ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c\"" Apr 13 20:10:51.176333 containerd[1472]: time="2026-04-13T20:10:51.176318028Z" level=info msg="Forcibly stopping sandbox \"ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c\"" Apr 13 20:10:51.254906 containerd[1472]: 2026-04-13 20:10:51.210 [WARNING][5468] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-calico--apiserver--6df68c9d4f--jpz9w-eth0", GenerateName:"calico-apiserver-6df68c9d4f-", Namespace:"calico-system", SelfLink:"", UID:"41a53f62-b7f4-40f3-882b-8cc9702c76d5", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6df68c9d4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"7cab60e989a96bf240bb7334e5b01a516203906c9fbbd0b114437acd6c25c8d3", Pod:"calico-apiserver-6df68c9d4f-jpz9w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4e6b0479f2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:51.254906 containerd[1472]: 2026-04-13 20:10:51.210 [INFO][5468] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" Apr 13 20:10:51.254906 containerd[1472]: 2026-04-13 20:10:51.210 [INFO][5468] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" iface="eth0" netns="" Apr 13 20:10:51.254906 containerd[1472]: 2026-04-13 20:10:51.210 [INFO][5468] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" Apr 13 20:10:51.254906 containerd[1472]: 2026-04-13 20:10:51.210 [INFO][5468] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" Apr 13 20:10:51.254906 containerd[1472]: 2026-04-13 20:10:51.240 [INFO][5476] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" HandleID="k8s-pod-network.ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" Workload="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--jpz9w-eth0" Apr 13 20:10:51.254906 containerd[1472]: 2026-04-13 20:10:51.240 [INFO][5476] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:51.254906 containerd[1472]: 2026-04-13 20:10:51.240 [INFO][5476] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:51.254906 containerd[1472]: 2026-04-13 20:10:51.247 [WARNING][5476] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" HandleID="k8s-pod-network.ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" Workload="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--jpz9w-eth0" Apr 13 20:10:51.254906 containerd[1472]: 2026-04-13 20:10:51.247 [INFO][5476] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" HandleID="k8s-pod-network.ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" Workload="172--239--193--191-k8s-calico--apiserver--6df68c9d4f--jpz9w-eth0" Apr 13 20:10:51.254906 containerd[1472]: 2026-04-13 20:10:51.249 [INFO][5476] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:51.254906 containerd[1472]: 2026-04-13 20:10:51.251 [INFO][5468] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c" Apr 13 20:10:51.255558 containerd[1472]: time="2026-04-13T20:10:51.254939884Z" level=info msg="TearDown network for sandbox \"ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c\" successfully" Apr 13 20:10:51.258103 containerd[1472]: time="2026-04-13T20:10:51.258071103Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 20:10:51.258488 containerd[1472]: time="2026-04-13T20:10:51.258313363Z" level=info msg="RemovePodSandbox \"ea97d140cdd58f784012516a4ba0a07dbff62543a3c8296e817f86e9c1890a4c\" returns successfully" Apr 13 20:10:51.259009 containerd[1472]: time="2026-04-13T20:10:51.258991112Z" level=info msg="StopPodSandbox for \"2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6\"" Apr 13 20:10:51.329592 containerd[1472]: 2026-04-13 20:10:51.295 [WARNING][5490] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-coredns--66bc5c9577--glg4w-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"49c0b7cf-67f2-43e0-b1b8-972c29e78e65", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32", Pod:"coredns-66bc5c9577-glg4w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c1ba5c7073", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:51.329592 containerd[1472]: 2026-04-13 20:10:51.296 [INFO][5490] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" Apr 13 20:10:51.329592 containerd[1472]: 2026-04-13 20:10:51.296 [INFO][5490] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" iface="eth0" netns="" Apr 13 20:10:51.329592 containerd[1472]: 2026-04-13 20:10:51.296 [INFO][5490] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" Apr 13 20:10:51.329592 containerd[1472]: 2026-04-13 20:10:51.296 [INFO][5490] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" Apr 13 20:10:51.329592 containerd[1472]: 2026-04-13 20:10:51.317 [INFO][5498] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" HandleID="k8s-pod-network.2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" Workload="172--239--193--191-k8s-coredns--66bc5c9577--glg4w-eth0" Apr 13 20:10:51.329592 containerd[1472]: 2026-04-13 20:10:51.317 [INFO][5498] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:51.329592 containerd[1472]: 2026-04-13 20:10:51.317 [INFO][5498] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:51.329592 containerd[1472]: 2026-04-13 20:10:51.323 [WARNING][5498] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" HandleID="k8s-pod-network.2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" Workload="172--239--193--191-k8s-coredns--66bc5c9577--glg4w-eth0" Apr 13 20:10:51.329592 containerd[1472]: 2026-04-13 20:10:51.323 [INFO][5498] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" HandleID="k8s-pod-network.2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" Workload="172--239--193--191-k8s-coredns--66bc5c9577--glg4w-eth0" Apr 13 20:10:51.329592 containerd[1472]: 2026-04-13 20:10:51.324 [INFO][5498] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:51.329592 containerd[1472]: 2026-04-13 20:10:51.327 [INFO][5490] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" Apr 13 20:10:51.330058 containerd[1472]: time="2026-04-13T20:10:51.329641473Z" level=info msg="TearDown network for sandbox \"2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6\" successfully" Apr 13 20:10:51.330058 containerd[1472]: time="2026-04-13T20:10:51.329668873Z" level=info msg="StopPodSandbox for \"2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6\" returns successfully" Apr 13 20:10:51.330952 containerd[1472]: time="2026-04-13T20:10:51.330481543Z" level=info msg="RemovePodSandbox for \"2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6\"" Apr 13 20:10:51.330952 containerd[1472]: time="2026-04-13T20:10:51.330508343Z" level=info msg="Forcibly stopping sandbox \"2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6\"" Apr 13 20:10:51.402484 containerd[1472]: 2026-04-13 20:10:51.365 [WARNING][5512] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-coredns--66bc5c9577--glg4w-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"49c0b7cf-67f2-43e0-b1b8-972c29e78e65", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"f6a1128b27a75a1b2c1477f1aed14c4211e6091eadc7d393f855ffaf2a46cf32", Pod:"coredns-66bc5c9577-glg4w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c1ba5c7073", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:51.402484 containerd[1472]: 2026-04-13 20:10:51.365 [INFO][5512] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" Apr 13 20:10:51.402484 containerd[1472]: 2026-04-13 20:10:51.366 [INFO][5512] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" iface="eth0" netns="" Apr 13 20:10:51.402484 containerd[1472]: 2026-04-13 20:10:51.366 [INFO][5512] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" Apr 13 20:10:51.402484 containerd[1472]: 2026-04-13 20:10:51.366 [INFO][5512] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" Apr 13 20:10:51.402484 containerd[1472]: 2026-04-13 20:10:51.390 [INFO][5520] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" HandleID="k8s-pod-network.2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" Workload="172--239--193--191-k8s-coredns--66bc5c9577--glg4w-eth0" Apr 13 20:10:51.402484 containerd[1472]: 2026-04-13 20:10:51.390 [INFO][5520] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:51.402484 containerd[1472]: 2026-04-13 20:10:51.390 [INFO][5520] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:51.402484 containerd[1472]: 2026-04-13 20:10:51.396 [WARNING][5520] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" HandleID="k8s-pod-network.2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" Workload="172--239--193--191-k8s-coredns--66bc5c9577--glg4w-eth0" Apr 13 20:10:51.402484 containerd[1472]: 2026-04-13 20:10:51.396 [INFO][5520] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" HandleID="k8s-pod-network.2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" Workload="172--239--193--191-k8s-coredns--66bc5c9577--glg4w-eth0" Apr 13 20:10:51.402484 containerd[1472]: 2026-04-13 20:10:51.397 [INFO][5520] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:51.402484 containerd[1472]: 2026-04-13 20:10:51.399 [INFO][5512] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6" Apr 13 20:10:51.402484 containerd[1472]: time="2026-04-13T20:10:51.402053053Z" level=info msg="TearDown network for sandbox \"2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6\" successfully" Apr 13 20:10:51.405881 containerd[1472]: time="2026-04-13T20:10:51.405852851Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 20:10:51.406041 containerd[1472]: time="2026-04-13T20:10:51.405905851Z" level=info msg="RemovePodSandbox \"2eea5a3c175a4d4d66738686b2141614e327489fc334ae9754c4f8242a3271b6\" returns successfully" Apr 13 20:10:51.406479 containerd[1472]: time="2026-04-13T20:10:51.406455571Z" level=info msg="StopPodSandbox for \"45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c\"" Apr 13 20:10:51.475662 containerd[1472]: 2026-04-13 20:10:51.440 [WARNING][5534] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-coredns--66bc5c9577--vdbdt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5778e305-4fb3-40cf-9eb5-2894d58c2771", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb", Pod:"coredns-66bc5c9577-vdbdt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68fedc07b6e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:51.475662 containerd[1472]: 2026-04-13 20:10:51.441 [INFO][5534] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" Apr 13 20:10:51.475662 containerd[1472]: 2026-04-13 20:10:51.441 [INFO][5534] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" iface="eth0" netns="" Apr 13 20:10:51.475662 containerd[1472]: 2026-04-13 20:10:51.441 [INFO][5534] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" Apr 13 20:10:51.475662 containerd[1472]: 2026-04-13 20:10:51.441 [INFO][5534] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" Apr 13 20:10:51.475662 containerd[1472]: 2026-04-13 20:10:51.463 [INFO][5541] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" HandleID="k8s-pod-network.45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" Workload="172--239--193--191-k8s-coredns--66bc5c9577--vdbdt-eth0" Apr 13 20:10:51.475662 containerd[1472]: 2026-04-13 20:10:51.464 [INFO][5541] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:51.475662 containerd[1472]: 2026-04-13 20:10:51.464 [INFO][5541] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:51.475662 containerd[1472]: 2026-04-13 20:10:51.469 [WARNING][5541] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" HandleID="k8s-pod-network.45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" Workload="172--239--193--191-k8s-coredns--66bc5c9577--vdbdt-eth0" Apr 13 20:10:51.475662 containerd[1472]: 2026-04-13 20:10:51.469 [INFO][5541] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" HandleID="k8s-pod-network.45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" Workload="172--239--193--191-k8s-coredns--66bc5c9577--vdbdt-eth0" Apr 13 20:10:51.475662 containerd[1472]: 2026-04-13 20:10:51.470 [INFO][5541] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:51.475662 containerd[1472]: 2026-04-13 20:10:51.473 [INFO][5534] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" Apr 13 20:10:51.476571 containerd[1472]: time="2026-04-13T20:10:51.475697052Z" level=info msg="TearDown network for sandbox \"45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c\" successfully" Apr 13 20:10:51.476571 containerd[1472]: time="2026-04-13T20:10:51.475721272Z" level=info msg="StopPodSandbox for \"45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c\" returns successfully" Apr 13 20:10:51.477084 containerd[1472]: time="2026-04-13T20:10:51.477064161Z" level=info msg="RemovePodSandbox for \"45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c\"" Apr 13 20:10:51.477141 containerd[1472]: time="2026-04-13T20:10:51.477090121Z" level=info msg="Forcibly stopping sandbox \"45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c\"" Apr 13 20:10:51.558634 containerd[1472]: 2026-04-13 20:10:51.516 [WARNING][5555] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--191-k8s-coredns--66bc5c9577--vdbdt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5778e305-4fb3-40cf-9eb5-2894d58c2771", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-191", ContainerID:"e254a7c46ffc2e6d77a4e21eb00daddb29179fb3279089a5456387c398272adb", Pod:"coredns-66bc5c9577-vdbdt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68fedc07b6e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:51.558634 containerd[1472]: 2026-04-13 20:10:51.517 [INFO][5555] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" Apr 13 20:10:51.558634 containerd[1472]: 2026-04-13 20:10:51.517 [INFO][5555] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" iface="eth0" netns="" Apr 13 20:10:51.558634 containerd[1472]: 2026-04-13 20:10:51.517 [INFO][5555] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" Apr 13 20:10:51.558634 containerd[1472]: 2026-04-13 20:10:51.517 [INFO][5555] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" Apr 13 20:10:51.558634 containerd[1472]: 2026-04-13 20:10:51.544 [INFO][5562] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" HandleID="k8s-pod-network.45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" Workload="172--239--193--191-k8s-coredns--66bc5c9577--vdbdt-eth0" Apr 13 20:10:51.558634 containerd[1472]: 2026-04-13 20:10:51.544 [INFO][5562] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:51.558634 containerd[1472]: 2026-04-13 20:10:51.544 [INFO][5562] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:51.558634 containerd[1472]: 2026-04-13 20:10:51.551 [WARNING][5562] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" HandleID="k8s-pod-network.45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" Workload="172--239--193--191-k8s-coredns--66bc5c9577--vdbdt-eth0" Apr 13 20:10:51.558634 containerd[1472]: 2026-04-13 20:10:51.551 [INFO][5562] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" HandleID="k8s-pod-network.45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" Workload="172--239--193--191-k8s-coredns--66bc5c9577--vdbdt-eth0" Apr 13 20:10:51.558634 containerd[1472]: 2026-04-13 20:10:51.553 [INFO][5562] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:51.558634 containerd[1472]: 2026-04-13 20:10:51.556 [INFO][5555] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c" Apr 13 20:10:51.559198 containerd[1472]: time="2026-04-13T20:10:51.558635316Z" level=info msg="TearDown network for sandbox \"45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c\" successfully" Apr 13 20:10:51.564473 containerd[1472]: time="2026-04-13T20:10:51.562945864Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:10:51.564473 containerd[1472]: time="2026-04-13T20:10:51.563035944Z" level=info msg="RemovePodSandbox \"45a379fa6939709265c43d11667d4915f50e21dbbb5a0ed11870b35b6acf5a6c\" returns successfully"
Apr 13 20:10:51.564831 containerd[1472]: time="2026-04-13T20:10:51.564805283Z" level=info msg="StopPodSandbox for \"8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d\""
Apr 13 20:10:51.646249 containerd[1472]: 2026-04-13 20:10:51.605 [WARNING][5576] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" WorkloadEndpoint="172--239--193--191-k8s-whisker--75b5998949--fr9s5-eth0"
Apr 13 20:10:51.646249 containerd[1472]: 2026-04-13 20:10:51.605 [INFO][5576] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d"
Apr 13 20:10:51.646249 containerd[1472]: 2026-04-13 20:10:51.605 [INFO][5576] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" iface="eth0" netns=""
Apr 13 20:10:51.646249 containerd[1472]: 2026-04-13 20:10:51.605 [INFO][5576] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d"
Apr 13 20:10:51.646249 containerd[1472]: 2026-04-13 20:10:51.605 [INFO][5576] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d"
Apr 13 20:10:51.646249 containerd[1472]: 2026-04-13 20:10:51.630 [INFO][5584] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" HandleID="k8s-pod-network.8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" Workload="172--239--193--191-k8s-whisker--75b5998949--fr9s5-eth0"
Apr 13 20:10:51.646249 containerd[1472]: 2026-04-13 20:10:51.630 [INFO][5584] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:10:51.646249 containerd[1472]: 2026-04-13 20:10:51.631 [INFO][5584] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:10:51.646249 containerd[1472]: 2026-04-13 20:10:51.639 [WARNING][5584] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" HandleID="k8s-pod-network.8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" Workload="172--239--193--191-k8s-whisker--75b5998949--fr9s5-eth0"
Apr 13 20:10:51.646249 containerd[1472]: 2026-04-13 20:10:51.639 [INFO][5584] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" HandleID="k8s-pod-network.8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" Workload="172--239--193--191-k8s-whisker--75b5998949--fr9s5-eth0"
Apr 13 20:10:51.646249 containerd[1472]: 2026-04-13 20:10:51.641 [INFO][5584] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:10:51.646249 containerd[1472]: 2026-04-13 20:10:51.643 [INFO][5576] cni-plugin/k8s.go 665: Teardown processing complete.
ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" Apr 13 20:10:51.647351 containerd[1472]: time="2026-04-13T20:10:51.646278707Z" level=info msg="TearDown network for sandbox \"8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d\" successfully" Apr 13 20:10:51.647351 containerd[1472]: time="2026-04-13T20:10:51.646311627Z" level=info msg="StopPodSandbox for \"8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d\" returns successfully" Apr 13 20:10:51.647637 containerd[1472]: time="2026-04-13T20:10:51.647559397Z" level=info msg="RemovePodSandbox for \"8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d\"" Apr 13 20:10:51.647709 containerd[1472]: time="2026-04-13T20:10:51.647640597Z" level=info msg="Forcibly stopping sandbox \"8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d\"" Apr 13 20:10:51.725528 containerd[1472]: 2026-04-13 20:10:51.682 [WARNING][5598] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" WorkloadEndpoint="172--239--193--191-k8s-whisker--75b5998949--fr9s5-eth0" Apr 13 20:10:51.725528 containerd[1472]: 2026-04-13 20:10:51.683 [INFO][5598] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" Apr 13 20:10:51.725528 containerd[1472]: 2026-04-13 20:10:51.683 [INFO][5598] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" iface="eth0" netns="" Apr 13 20:10:51.725528 containerd[1472]: 2026-04-13 20:10:51.683 [INFO][5598] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" Apr 13 20:10:51.725528 containerd[1472]: 2026-04-13 20:10:51.683 [INFO][5598] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" Apr 13 20:10:51.725528 containerd[1472]: 2026-04-13 20:10:51.711 [INFO][5606] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" HandleID="k8s-pod-network.8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" Workload="172--239--193--191-k8s-whisker--75b5998949--fr9s5-eth0" Apr 13 20:10:51.725528 containerd[1472]: 2026-04-13 20:10:51.711 [INFO][5606] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:51.725528 containerd[1472]: 2026-04-13 20:10:51.712 [INFO][5606] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:51.725528 containerd[1472]: 2026-04-13 20:10:51.718 [WARNING][5606] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" HandleID="k8s-pod-network.8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" Workload="172--239--193--191-k8s-whisker--75b5998949--fr9s5-eth0" Apr 13 20:10:51.725528 containerd[1472]: 2026-04-13 20:10:51.718 [INFO][5606] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" HandleID="k8s-pod-network.8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" Workload="172--239--193--191-k8s-whisker--75b5998949--fr9s5-eth0" Apr 13 20:10:51.725528 containerd[1472]: 2026-04-13 20:10:51.719 [INFO][5606] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:51.725528 containerd[1472]: 2026-04-13 20:10:51.722 [INFO][5598] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d" Apr 13 20:10:51.725528 containerd[1472]: time="2026-04-13T20:10:51.725395894Z" level=info msg="TearDown network for sandbox \"8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d\" successfully" Apr 13 20:10:51.729806 containerd[1472]: time="2026-04-13T20:10:51.729768061Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 13 20:10:51.729867 containerd[1472]: time="2026-04-13T20:10:51.729820791Z" level=info msg="RemovePodSandbox \"8ceaf25b977c18296241037fab98d0eceb03f5befcb4195856d2ef4991c45d7d\" returns successfully" Apr 13 20:11:08.480182 kubelet[2578]: E0413 20:11:08.480130 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:11:09.827556 systemd[1]: run-containerd-runc-k8s.io-3bdcf9c46d4ce3a8ca2e7acb97836955ff4565a0ca72c01e98068a4eba4a839c-runc.ZWHAHr.mount: Deactivated successfully. Apr 13 20:11:11.830744 kubelet[2578]: I0413 20:11:11.830303 2578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:11:14.478907 kubelet[2578]: E0413 20:11:14.478230 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:11:18.479339 kubelet[2578]: E0413 20:11:18.479083 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Apr 13 20:11:18.512036 systemd[1]: run-containerd-runc-k8s.io-d057a55295c363bfb72035b28c861c193da4fd66947d28e438cf5d0443c29d05-runc.BolTD1.mount: Deactivated successfully. 
Apr 13 20:11:20.479151 kubelet[2578]: E0413 20:11:20.478631 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Apr 13 20:11:38.478489 kubelet[2578]: E0413 20:11:38.478164 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Apr 13 20:11:39.821219 systemd[1]: run-containerd-runc-k8s.io-3bdcf9c46d4ce3a8ca2e7acb97836955ff4565a0ca72c01e98068a4eba4a839c-runc.8vQzN1.mount: Deactivated successfully.
Apr 13 20:11:46.479578 kubelet[2578]: E0413 20:11:46.479464 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Apr 13 20:11:48.514202 systemd[1]: run-containerd-runc-k8s.io-d057a55295c363bfb72035b28c861c193da4fd66947d28e438cf5d0443c29d05-runc.8XakB7.mount: Deactivated successfully.
Apr 13 20:11:49.478515 kubelet[2578]: E0413 20:11:49.478349 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Apr 13 20:12:02.749310 systemd[1]: run-containerd-runc-k8s.io-3bdcf9c46d4ce3a8ca2e7acb97836955ff4565a0ca72c01e98068a4eba4a839c-runc.GMf9KU.mount: Deactivated successfully.
Apr 13 20:12:02.924834 systemd[1]: run-containerd-runc-k8s.io-d057a55295c363bfb72035b28c861c193da4fd66947d28e438cf5d0443c29d05-runc.edCzoL.mount: Deactivated successfully.
Apr 13 20:12:06.118694 systemd[1]: Started sshd@7-172.239.193.191:22-50.85.169.122:56534.service - OpenSSH per-connection server daemon (50.85.169.122:56534).
Apr 13 20:12:06.836300 sshd[5926]: Accepted publickey for core from 50.85.169.122 port 56534 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:12:06.838656 sshd[5926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:12:06.844776 systemd-logind[1464]: New session 8 of user core.
Apr 13 20:12:06.848555 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 13 20:12:07.417479 sshd[5926]: pam_unix(sshd:session): session closed for user core
Apr 13 20:12:07.422744 systemd[1]: sshd@7-172.239.193.191:22-50.85.169.122:56534.service: Deactivated successfully.
Apr 13 20:12:07.426154 systemd[1]: session-8.scope: Deactivated successfully.
Apr 13 20:12:07.427145 systemd-logind[1464]: Session 8 logged out. Waiting for processes to exit.
Apr 13 20:12:07.428516 systemd-logind[1464]: Removed session 8.
Apr 13 20:12:12.547706 systemd[1]: Started sshd@8-172.239.193.191:22-50.85.169.122:54044.service - OpenSSH per-connection server daemon (50.85.169.122:54044).
Apr 13 20:12:13.260856 sshd[5960]: Accepted publickey for core from 50.85.169.122 port 54044 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:12:13.262620 sshd[5960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:12:13.269488 systemd-logind[1464]: New session 9 of user core.
Apr 13 20:12:13.275666 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 13 20:12:13.854861 sshd[5960]: pam_unix(sshd:session): session closed for user core
Apr 13 20:12:13.862093 systemd[1]: sshd@8-172.239.193.191:22-50.85.169.122:54044.service: Deactivated successfully.
Apr 13 20:12:13.865138 systemd[1]: session-9.scope: Deactivated successfully.
Apr 13 20:12:13.866908 systemd-logind[1464]: Session 9 logged out. Waiting for processes to exit.
Apr 13 20:12:13.872789 systemd-logind[1464]: Removed session 9.
Apr 13 20:12:18.986518 systemd[1]: Started sshd@9-172.239.193.191:22-50.85.169.122:54060.service - OpenSSH per-connection server daemon (50.85.169.122:54060).
Apr 13 20:12:19.693458 sshd[5993]: Accepted publickey for core from 50.85.169.122 port 54060 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:12:19.694603 sshd[5993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:12:19.699326 systemd-logind[1464]: New session 10 of user core.
Apr 13 20:12:19.705550 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 13 20:12:20.256920 sshd[5993]: pam_unix(sshd:session): session closed for user core
Apr 13 20:12:20.261341 systemd[1]: sshd@9-172.239.193.191:22-50.85.169.122:54060.service: Deactivated successfully.
Apr 13 20:12:20.263628 systemd[1]: session-10.scope: Deactivated successfully.
Apr 13 20:12:20.264453 systemd-logind[1464]: Session 10 logged out. Waiting for processes to exit.
Apr 13 20:12:20.265328 systemd-logind[1464]: Removed session 10.
Apr 13 20:12:20.388663 systemd[1]: Started sshd@10-172.239.193.191:22-50.85.169.122:39394.service - OpenSSH per-connection server daemon (50.85.169.122:39394).
Apr 13 20:12:21.102323 sshd[6023]: Accepted publickey for core from 50.85.169.122 port 39394 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:12:21.104077 sshd[6023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:12:21.109486 systemd-logind[1464]: New session 11 of user core.
Apr 13 20:12:21.114569 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 13 20:12:21.702304 sshd[6023]: pam_unix(sshd:session): session closed for user core
Apr 13 20:12:21.706160 systemd[1]: sshd@10-172.239.193.191:22-50.85.169.122:39394.service: Deactivated successfully.
Apr 13 20:12:21.709323 systemd[1]: session-11.scope: Deactivated successfully.
Apr 13 20:12:21.713754 systemd-logind[1464]: Session 11 logged out. Waiting for processes to exit.
Apr 13 20:12:21.714888 systemd-logind[1464]: Removed session 11.
Apr 13 20:12:21.833083 systemd[1]: Started sshd@11-172.239.193.191:22-50.85.169.122:39396.service - OpenSSH per-connection server daemon (50.85.169.122:39396).
Apr 13 20:12:22.477542 kubelet[2578]: E0413 20:12:22.477374 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Apr 13 20:12:22.537945 sshd[6034]: Accepted publickey for core from 50.85.169.122 port 39396 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:12:22.539623 sshd[6034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:12:22.544807 systemd-logind[1464]: New session 12 of user core.
Apr 13 20:12:22.552592 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 13 20:12:23.123414 sshd[6034]: pam_unix(sshd:session): session closed for user core
Apr 13 20:12:23.127800 systemd[1]: sshd@11-172.239.193.191:22-50.85.169.122:39396.service: Deactivated successfully.
Apr 13 20:12:23.130747 systemd[1]: session-12.scope: Deactivated successfully.
Apr 13 20:12:23.131529 systemd-logind[1464]: Session 12 logged out. Waiting for processes to exit.
Apr 13 20:12:23.133125 systemd-logind[1464]: Removed session 12.
Apr 13 20:12:25.122035 systemd[1]: run-containerd-runc-k8s.io-f32a3687dfe2845ae4d3ef400d0cc5bc47532248408a9102a23197c5359b17d2-runc.1Bb05z.mount: Deactivated successfully.
Apr 13 20:12:28.249070 systemd[1]: Started sshd@12-172.239.193.191:22-50.85.169.122:39404.service - OpenSSH per-connection server daemon (50.85.169.122:39404).
Apr 13 20:12:28.973821 sshd[6067]: Accepted publickey for core from 50.85.169.122 port 39404 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:12:28.975591 sshd[6067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:12:28.983485 systemd-logind[1464]: New session 13 of user core.
Apr 13 20:12:28.989603 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 13 20:12:29.544062 sshd[6067]: pam_unix(sshd:session): session closed for user core
Apr 13 20:12:29.548666 systemd-logind[1464]: Session 13 logged out. Waiting for processes to exit.
Apr 13 20:12:29.549662 systemd[1]: sshd@12-172.239.193.191:22-50.85.169.122:39404.service: Deactivated successfully.
Apr 13 20:12:29.554995 systemd[1]: session-13.scope: Deactivated successfully.
Apr 13 20:12:29.556160 systemd-logind[1464]: Removed session 13.
Apr 13 20:12:29.674837 systemd[1]: Started sshd@13-172.239.193.191:22-50.85.169.122:43366.service - OpenSSH per-connection server daemon (50.85.169.122:43366).
Apr 13 20:12:30.385269 sshd[6083]: Accepted publickey for core from 50.85.169.122 port 43366 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:12:30.388050 sshd[6083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:12:30.394169 systemd-logind[1464]: New session 14 of user core.
Apr 13 20:12:30.398723 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 13 20:12:31.112106 sshd[6083]: pam_unix(sshd:session): session closed for user core
Apr 13 20:12:31.116666 systemd[1]: sshd@13-172.239.193.191:22-50.85.169.122:43366.service: Deactivated successfully.
Apr 13 20:12:31.122867 systemd[1]: session-14.scope: Deactivated successfully.
Apr 13 20:12:31.123654 systemd-logind[1464]: Session 14 logged out. Waiting for processes to exit.
Apr 13 20:12:31.124731 systemd-logind[1464]: Removed session 14.
Apr 13 20:12:31.247933 systemd[1]: Started sshd@14-172.239.193.191:22-50.85.169.122:43382.service - OpenSSH per-connection server daemon (50.85.169.122:43382).
Apr 13 20:12:31.972466 sshd[6094]: Accepted publickey for core from 50.85.169.122 port 43382 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:12:31.974351 sshd[6094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:12:31.980210 systemd-logind[1464]: New session 15 of user core.
Apr 13 20:12:31.985578 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 13 20:12:33.061895 sshd[6094]: pam_unix(sshd:session): session closed for user core
Apr 13 20:12:33.066399 systemd-logind[1464]: Session 15 logged out. Waiting for processes to exit.
Apr 13 20:12:33.066808 systemd[1]: sshd@14-172.239.193.191:22-50.85.169.122:43382.service: Deactivated successfully.
Apr 13 20:12:33.069571 systemd[1]: session-15.scope: Deactivated successfully.
Apr 13 20:12:33.070354 systemd-logind[1464]: Removed session 15.
Apr 13 20:12:33.192691 systemd[1]: Started sshd@15-172.239.193.191:22-50.85.169.122:43388.service - OpenSSH per-connection server daemon (50.85.169.122:43388).
Apr 13 20:12:33.905112 sshd[6119]: Accepted publickey for core from 50.85.169.122 port 43388 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:12:33.907546 sshd[6119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:12:33.916679 systemd-logind[1464]: New session 16 of user core.
Apr 13 20:12:33.925858 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 13 20:12:34.587210 sshd[6119]: pam_unix(sshd:session): session closed for user core
Apr 13 20:12:34.592868 systemd[1]: sshd@15-172.239.193.191:22-50.85.169.122:43388.service: Deactivated successfully.
Apr 13 20:12:34.595967 systemd[1]: session-16.scope: Deactivated successfully.
Apr 13 20:12:34.596973 systemd-logind[1464]: Session 16 logged out. Waiting for processes to exit.
Apr 13 20:12:34.598260 systemd-logind[1464]: Removed session 16.
Apr 13 20:12:34.720786 systemd[1]: Started sshd@16-172.239.193.191:22-50.85.169.122:43402.service - OpenSSH per-connection server daemon (50.85.169.122:43402).
Apr 13 20:12:35.442771 sshd[6132]: Accepted publickey for core from 50.85.169.122 port 43402 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:12:35.443457 sshd[6132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:12:35.447530 systemd-logind[1464]: New session 17 of user core.
Apr 13 20:12:35.457793 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 13 20:12:35.477919 kubelet[2578]: E0413 20:12:35.477685 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Apr 13 20:12:36.019200 sshd[6132]: pam_unix(sshd:session): session closed for user core
Apr 13 20:12:36.026514 systemd[1]: sshd@16-172.239.193.191:22-50.85.169.122:43402.service: Deactivated successfully.
Apr 13 20:12:36.029730 systemd[1]: session-17.scope: Deactivated successfully.
Apr 13 20:12:36.031328 systemd-logind[1464]: Session 17 logged out. Waiting for processes to exit.
Apr 13 20:12:36.032564 systemd-logind[1464]: Removed session 17.
Apr 13 20:12:39.478154 kubelet[2578]: E0413 20:12:39.478064 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Apr 13 20:12:41.150805 systemd[1]: Started sshd@17-172.239.193.191:22-50.85.169.122:59782.service - OpenSSH per-connection server daemon (50.85.169.122:59782).
Apr 13 20:12:41.858486 sshd[6167]: Accepted publickey for core from 50.85.169.122 port 59782 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:12:41.861327 sshd[6167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:12:41.868664 systemd-logind[1464]: New session 18 of user core.
Apr 13 20:12:41.879590 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 13 20:12:42.417608 sshd[6167]: pam_unix(sshd:session): session closed for user core
Apr 13 20:12:42.423096 systemd[1]: sshd@17-172.239.193.191:22-50.85.169.122:59782.service: Deactivated successfully.
Apr 13 20:12:42.426511 systemd[1]: session-18.scope: Deactivated successfully.
Apr 13 20:12:42.427203 systemd-logind[1464]: Session 18 logged out. Waiting for processes to exit.
Apr 13 20:12:42.428277 systemd-logind[1464]: Removed session 18.
Apr 13 20:12:47.558144 systemd[1]: Started sshd@18-172.239.193.191:22-50.85.169.122:59798.service - OpenSSH per-connection server daemon (50.85.169.122:59798).
Apr 13 20:12:48.267463 sshd[6180]: Accepted publickey for core from 50.85.169.122 port 59798 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:12:48.269511 sshd[6180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:12:48.274939 systemd-logind[1464]: New session 19 of user core.
Apr 13 20:12:48.282695 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 13 20:12:48.842366 sshd[6180]: pam_unix(sshd:session): session closed for user core
Apr 13 20:12:48.848158 systemd[1]: sshd@18-172.239.193.191:22-50.85.169.122:59798.service: Deactivated successfully.
Apr 13 20:12:48.850972 systemd[1]: session-19.scope: Deactivated successfully.
Apr 13 20:12:48.852163 systemd-logind[1464]: Session 19 logged out. Waiting for processes to exit.
Apr 13 20:12:48.853143 systemd-logind[1464]: Removed session 19.
Apr 13 20:12:50.478457 kubelet[2578]: E0413 20:12:50.477902 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Apr 13 20:12:52.479540 kubelet[2578]: E0413 20:12:52.478394 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Apr 13 20:12:53.972659 systemd[1]: Started sshd@19-172.239.193.191:22-50.85.169.122:35620.service - OpenSSH per-connection server daemon (50.85.169.122:35620).
Apr 13 20:12:54.683878 sshd[6214]: Accepted publickey for core from 50.85.169.122 port 35620 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:12:54.686737 sshd[6214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:12:54.691640 systemd-logind[1464]: New session 20 of user core.
Apr 13 20:12:54.699577 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 13 20:12:55.132272 systemd[1]: run-containerd-runc-k8s.io-f32a3687dfe2845ae4d3ef400d0cc5bc47532248408a9102a23197c5359b17d2-runc.TeJAyc.mount: Deactivated successfully.
Apr 13 20:12:55.322745 sshd[6214]: pam_unix(sshd:session): session closed for user core
Apr 13 20:12:55.326603 systemd[1]: sshd@19-172.239.193.191:22-50.85.169.122:35620.service: Deactivated successfully.
Apr 13 20:12:55.329283 systemd[1]: session-20.scope: Deactivated successfully.
Apr 13 20:12:55.331316 systemd-logind[1464]: Session 20 logged out. Waiting for processes to exit.
Apr 13 20:12:55.332993 systemd-logind[1464]: Removed session 20.
Apr 13 20:12:56.478511 kubelet[2578]: E0413 20:12:56.478392 2578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Apr 13 20:13:00.457660 systemd[1]: Started sshd@20-172.239.193.191:22-50.85.169.122:48454.service - OpenSSH per-connection server daemon (50.85.169.122:48454).
Apr 13 20:13:01.162965 sshd[6249]: Accepted publickey for core from 50.85.169.122 port 48454 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:13:01.164747 sshd[6249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:13:01.172631 systemd-logind[1464]: New session 21 of user core.
Apr 13 20:13:01.177580 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 13 20:13:01.728279 sshd[6249]: pam_unix(sshd:session): session closed for user core
Apr 13 20:13:01.735103 systemd[1]: sshd@20-172.239.193.191:22-50.85.169.122:48454.service: Deactivated successfully.
Apr 13 20:13:01.737294 systemd[1]: session-21.scope: Deactivated successfully.
Apr 13 20:13:01.738436 systemd-logind[1464]: Session 21 logged out. Waiting for processes to exit.
Apr 13 20:13:01.739407 systemd-logind[1464]: Removed session 21.
Apr 13 20:13:02.741012 systemd[1]: run-containerd-runc-k8s.io-3bdcf9c46d4ce3a8ca2e7acb97836955ff4565a0ca72c01e98068a4eba4a839c-runc.aUDie4.mount: Deactivated successfully.