Apr 13 20:09:06.972750 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 13 20:09:06.972771 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:09:06.972780 kernel: BIOS-provided physical RAM map:
Apr 13 20:09:06.972786 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Apr 13 20:09:06.972792 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Apr 13 20:09:06.972800 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 13 20:09:06.972807 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Apr 13 20:09:06.972813 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Apr 13 20:09:06.972818 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 13 20:09:06.972824 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 13 20:09:06.972843 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 13 20:09:06.972865 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 13 20:09:06.972871 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Apr 13 20:09:06.972880 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 13 20:09:06.972887 kernel: NX (Execute Disable) protection: active
Apr 13 20:09:06.972893 kernel: APIC: Static calls initialized
Apr 13 20:09:06.972899 kernel: SMBIOS 2.8 present.
Apr 13 20:09:06.972906 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Apr 13 20:09:06.972912 kernel: Hypervisor detected: KVM
Apr 13 20:09:06.972920 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 13 20:09:06.972926 kernel: kvm-clock: using sched offset of 5602788488 cycles
Apr 13 20:09:06.972932 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 13 20:09:06.972938 kernel: tsc: Detected 1999.998 MHz processor
Apr 13 20:09:06.972945 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 13 20:09:06.972951 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 13 20:09:06.972957 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Apr 13 20:09:06.972964 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 13 20:09:06.972970 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 13 20:09:06.972978 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Apr 13 20:09:06.972985 kernel: Using GB pages for direct mapping
Apr 13 20:09:06.972991 kernel: ACPI: Early table checksum verification disabled
Apr 13 20:09:06.972997 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Apr 13 20:09:06.973003 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:09:06.973010 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:09:06.973016 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:09:06.973022 kernel: ACPI: FACS 0x000000007FFE0000 000040
Apr 13 20:09:06.973028 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:09:06.973036 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:09:06.973042 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:09:06.973049 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:09:06.973058 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Apr 13 20:09:06.973065 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Apr 13 20:09:06.973071 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Apr 13 20:09:06.973080 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Apr 13 20:09:06.973087 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Apr 13 20:09:06.973093 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Apr 13 20:09:06.973100 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Apr 13 20:09:06.973106 kernel: No NUMA configuration found
Apr 13 20:09:06.973113 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Apr 13 20:09:06.973119 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff]
Apr 13 20:09:06.973125 kernel: Zone ranges:
Apr 13 20:09:06.973134 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 13 20:09:06.973141 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 13 20:09:06.973147 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Apr 13 20:09:06.973153 kernel: Movable zone start for each node
Apr 13 20:09:06.973160 kernel: Early memory node ranges
Apr 13 20:09:06.973166 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 13 20:09:06.973172 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Apr 13 20:09:06.973179 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Apr 13 20:09:06.973185 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Apr 13 20:09:06.973191 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 13 20:09:06.973200 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 13 20:09:06.973207 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Apr 13 20:09:06.973213 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 13 20:09:06.973219 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 13 20:09:06.973226 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 13 20:09:06.973232 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 13 20:09:06.973239 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 13 20:09:06.973245 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 13 20:09:06.973252 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 13 20:09:06.973261 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 13 20:09:06.973267 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 13 20:09:06.973273 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 13 20:09:06.973280 kernel: TSC deadline timer available
Apr 13 20:09:06.973286 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 13 20:09:06.973292 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 13 20:09:06.973299 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 13 20:09:06.973305 kernel: kvm-guest: setup PV sched yield
Apr 13 20:09:06.973312 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 13 20:09:06.973320 kernel: Booting paravirtualized kernel on KVM
Apr 13 20:09:06.973327 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 13 20:09:06.973334 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 13 20:09:06.973340 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 13 20:09:06.973346 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 13 20:09:06.973353 kernel: pcpu-alloc: [0] 0 1
Apr 13 20:09:06.973359 kernel: kvm-guest: PV spinlocks enabled
Apr 13 20:09:06.973366 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 13 20:09:06.973373 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:09:06.973382 kernel: random: crng init done
Apr 13 20:09:06.973388 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 20:09:06.973395 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 20:09:06.973401 kernel: Fallback order for Node 0: 0
Apr 13 20:09:06.973408 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Apr 13 20:09:06.973414 kernel: Policy zone: Normal
Apr 13 20:09:06.973421 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 20:09:06.973427 kernel: software IO TLB: area num 2.
Apr 13 20:09:06.973436 kernel: Memory: 3966220K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 227292K reserved, 0K cma-reserved)
Apr 13 20:09:06.973443 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 20:09:06.973449 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 13 20:09:06.973456 kernel: ftrace: allocated 149 pages with 4 groups
Apr 13 20:09:06.973462 kernel: Dynamic Preempt: voluntary
Apr 13 20:09:06.973468 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 20:09:06.973475 kernel: rcu: RCU event tracing is enabled.
Apr 13 20:09:06.973482 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 20:09:06.973489 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 20:09:06.973498 kernel: Rude variant of Tasks RCU enabled.
Apr 13 20:09:06.973504 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 20:09:06.973511 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 20:09:06.973517 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 20:09:06.973524 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 13 20:09:06.973530 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 20:09:06.973536 kernel: Console: colour VGA+ 80x25
Apr 13 20:09:06.973543 kernel: printk: console [tty0] enabled
Apr 13 20:09:06.973549 kernel: printk: console [ttyS0] enabled
Apr 13 20:09:06.973558 kernel: ACPI: Core revision 20230628
Apr 13 20:09:06.973565 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 13 20:09:06.973571 kernel: APIC: Switch to symmetric I/O mode setup
Apr 13 20:09:06.973578 kernel: x2apic enabled
Apr 13 20:09:06.973592 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 13 20:09:06.973601 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 13 20:09:06.973608 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 13 20:09:06.973615 kernel: kvm-guest: setup PV IPIs
Apr 13 20:09:06.973621 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 13 20:09:06.973628 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 13 20:09:06.973634 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999998)
Apr 13 20:09:06.973641 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 13 20:09:06.973651 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 13 20:09:06.973657 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 13 20:09:06.973664 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 13 20:09:06.973671 kernel: Spectre V2 : Mitigation: Retpolines
Apr 13 20:09:06.973678 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 13 20:09:06.973687 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 13 20:09:06.973694 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 13 20:09:06.973701 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 13 20:09:06.973707 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Apr 13 20:09:06.973715 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Apr 13 20:09:06.973721 kernel: active return thunk: srso_alias_return_thunk
Apr 13 20:09:06.973728 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Apr 13 20:09:06.973735 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 13 20:09:06.973745 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 13 20:09:06.973751 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 13 20:09:06.973758 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 13 20:09:06.973765 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 13 20:09:06.973772 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 13 20:09:06.973779 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 13 20:09:06.973785 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Apr 13 20:09:06.973792 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Apr 13 20:09:06.973799 kernel: Freeing SMP alternatives memory: 32K
Apr 13 20:09:06.973808 kernel: pid_max: default: 32768 minimum: 301
Apr 13 20:09:06.973815 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 20:09:06.973822 kernel: landlock: Up and running.
Apr 13 20:09:06.973829 kernel: SELinux: Initializing.
Apr 13 20:09:06.973870 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 20:09:06.973877 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 20:09:06.973884 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Apr 13 20:09:06.973891 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:09:06.973898 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:09:06.973908 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:09:06.973915 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 13 20:09:06.973922 kernel: ... version:                0
Apr 13 20:09:06.973928 kernel: ... bit width:              48
Apr 13 20:09:06.973935 kernel: ... generic registers:      6
Apr 13 20:09:06.973942 kernel: ... value mask:             0000ffffffffffff
Apr 13 20:09:06.973949 kernel: ... max period:             00007fffffffffff
Apr 13 20:09:06.973955 kernel: ... fixed-purpose events:   0
Apr 13 20:09:06.973962 kernel: ... event mask:             000000000000003f
Apr 13 20:09:06.973971 kernel: signal: max sigframe size: 3376
Apr 13 20:09:06.973978 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 20:09:06.973985 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 20:09:06.973992 kernel: smp: Bringing up secondary CPUs ...
Apr 13 20:09:06.973998 kernel: smpboot: x86: Booting SMP configuration:
Apr 13 20:09:06.974005 kernel: .... node #0, CPUs: #1
Apr 13 20:09:06.974012 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 20:09:06.974018 kernel: smpboot: Max logical packages: 1
Apr 13 20:09:06.974025 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Apr 13 20:09:06.974034 kernel: devtmpfs: initialized
Apr 13 20:09:06.974041 kernel: x86/mm: Memory block size: 128MB
Apr 13 20:09:06.974048 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 20:09:06.974055 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 20:09:06.974062 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 20:09:06.974068 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 20:09:06.974075 kernel: audit: initializing netlink subsys (disabled)
Apr 13 20:09:06.974082 kernel: audit: type=2000 audit(1776110946.233:1): state=initialized audit_enabled=0 res=1
Apr 13 20:09:06.974089 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 20:09:06.974098 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 13 20:09:06.974105 kernel: cpuidle: using governor menu
Apr 13 20:09:06.974112 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 20:09:06.974118 kernel: dca service started, version 1.12.1
Apr 13 20:09:06.974125 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 13 20:09:06.974132 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 13 20:09:06.974138 kernel: PCI: Using configuration type 1 for base access
Apr 13 20:09:06.974145 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 13 20:09:06.974152 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 20:09:06.974162 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 20:09:06.974168 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 20:09:06.974175 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 20:09:06.974182 kernel: ACPI: Added _OSI(Module Device)
Apr 13 20:09:06.974189 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 20:09:06.974196 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 20:09:06.974202 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 13 20:09:06.974209 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 13 20:09:06.974216 kernel: ACPI: Interpreter enabled
Apr 13 20:09:06.974225 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 13 20:09:06.974232 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 13 20:09:06.974238 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 13 20:09:06.974245 kernel: PCI: Using E820 reservations for host bridge windows
Apr 13 20:09:06.974252 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 13 20:09:06.974259 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 13 20:09:06.974443 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 20:09:06.974586 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 13 20:09:06.974720 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 13 20:09:06.974730 kernel: PCI host bridge to bus 0000:00
Apr 13 20:09:06.974899 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 13 20:09:06.975024 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 13 20:09:06.975141 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 13 20:09:06.975255 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Apr 13 20:09:06.975370 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 13 20:09:06.975492 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Apr 13 20:09:06.975608 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 13 20:09:06.975751 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 13 20:09:06.975920 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 13 20:09:06.976052 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 13 20:09:06.976178 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 13 20:09:06.976333 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 13 20:09:06.976462 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 13 20:09:06.976599 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Apr 13 20:09:06.976727 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Apr 13 20:09:06.976881 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 13 20:09:06.977011 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 13 20:09:06.977146 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Apr 13 20:09:06.977279 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Apr 13 20:09:06.977446 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 13 20:09:06.977574 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 13 20:09:06.977700 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 13 20:09:06.977872 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 13 20:09:06.978010 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 13 20:09:06.978144 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 13 20:09:06.978275 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Apr 13 20:09:06.978399 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Apr 13 20:09:06.978532 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 13 20:09:06.978657 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 13 20:09:06.978667 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 13 20:09:06.978674 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 13 20:09:06.978681 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 13 20:09:06.978691 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 13 20:09:06.978698 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 13 20:09:06.978705 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 13 20:09:06.978711 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 13 20:09:06.978718 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 13 20:09:06.978725 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 13 20:09:06.978731 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 13 20:09:06.978739 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 13 20:09:06.978745 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 13 20:09:06.978755 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 13 20:09:06.978761 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 13 20:09:06.978768 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 13 20:09:06.978775 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 13 20:09:06.978782 kernel: iommu: Default domain type: Translated
Apr 13 20:09:06.978788 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 13 20:09:06.978795 kernel: PCI: Using ACPI for IRQ routing
Apr 13 20:09:06.978802 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 13 20:09:06.978809 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Apr 13 20:09:06.978818 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Apr 13 20:09:06.979025 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 13 20:09:06.979152 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 13 20:09:06.979276 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 13 20:09:06.979286 kernel: vgaarb: loaded
Apr 13 20:09:06.979293 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 13 20:09:06.979300 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 13 20:09:06.979306 kernel: clocksource: Switched to clocksource kvm-clock
Apr 13 20:09:06.979318 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 20:09:06.979325 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 20:09:06.979332 kernel: pnp: PnP ACPI init
Apr 13 20:09:06.979466 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 13 20:09:06.979477 kernel: pnp: PnP ACPI: found 5 devices
Apr 13 20:09:06.979484 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 13 20:09:06.979490 kernel: NET: Registered PF_INET protocol family
Apr 13 20:09:06.979497 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 13 20:09:06.979508 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 13 20:09:06.979515 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 20:09:06.979522 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 13 20:09:06.979529 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 13 20:09:06.979535 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 13 20:09:06.979542 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 20:09:06.979549 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 20:09:06.979556 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 13 20:09:06.979563 kernel: NET: Registered PF_XDP protocol family
Apr 13 20:09:06.979681 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 13 20:09:06.979796 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 13 20:09:06.979967 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 13 20:09:06.980084 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Apr 13 20:09:06.980199 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 13 20:09:06.980313 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Apr 13 20:09:06.980323 kernel: PCI: CLS 0 bytes, default 64
Apr 13 20:09:06.980330 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 13 20:09:06.980341 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Apr 13 20:09:06.980348 kernel: Initialise system trusted keyrings
Apr 13 20:09:06.980355 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 13 20:09:06.980362 kernel: Key type asymmetric registered
Apr 13 20:09:06.980368 kernel: Asymmetric key parser 'x509' registered
Apr 13 20:09:06.980375 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 13 20:09:06.980382 kernel: io scheduler mq-deadline registered
Apr 13 20:09:06.980389 kernel: io scheduler kyber registered
Apr 13 20:09:06.980396 kernel: io scheduler bfq registered
Apr 13 20:09:06.980402 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 13 20:09:06.980412 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 13 20:09:06.980419 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 13 20:09:06.980426 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 13 20:09:06.980433 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 13 20:09:06.980440 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 13 20:09:06.980447 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 13 20:09:06.980454 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 13 20:09:06.980582 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 13 20:09:06.980596 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 13 20:09:06.980713 kernel: rtc_cmos 00:03: registered as rtc0
Apr 13 20:09:06.980873 kernel: rtc_cmos 00:03: setting system clock to 2026-04-13T20:09:06 UTC (1776110946)
Apr 13 20:09:06.981001 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 13 20:09:06.981012 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 13 20:09:06.981019 kernel: NET: Registered PF_INET6 protocol family
Apr 13 20:09:06.981025 kernel: Segment Routing with IPv6
Apr 13 20:09:06.981032 kernel: In-situ OAM (IOAM) with IPv6
Apr 13 20:09:06.981043 kernel: NET: Registered PF_PACKET protocol family
Apr 13 20:09:06.981050 kernel: Key type dns_resolver registered
Apr 13 20:09:06.981057 kernel: IPI shorthand broadcast: enabled
Apr 13 20:09:06.981064 kernel: sched_clock: Marking stable (873005788, 315006365)->(1316800407, -128788254)
Apr 13 20:09:06.981070 kernel: registered taskstats version 1
Apr 13 20:09:06.981077 kernel: Loading compiled-in X.509 certificates
Apr 13 20:09:06.981084 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00'
Apr 13 20:09:06.981091 kernel: Key type .fscrypt registered
Apr 13 20:09:06.981098 kernel: Key type fscrypt-provisioning registered
Apr 13 20:09:06.981108 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 13 20:09:06.981115 kernel: ima: Allocated hash algorithm: sha1
Apr 13 20:09:06.981121 kernel: ima: No architecture policies found
Apr 13 20:09:06.981128 kernel: clk: Disabling unused clocks
Apr 13 20:09:06.981135 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 13 20:09:06.981142 kernel: Write protecting the kernel read-only data: 36864k
Apr 13 20:09:06.981149 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 13 20:09:06.981155 kernel: Run /init as init process
Apr 13 20:09:06.981162 kernel: with arguments:
Apr 13 20:09:06.981171 kernel: /init
Apr 13 20:09:06.981178 kernel: with environment:
Apr 13 20:09:06.981185 kernel: HOME=/
Apr 13 20:09:06.981191 kernel: TERM=linux
Apr 13 20:09:06.981200 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 20:09:06.981209 systemd[1]: Detected virtualization kvm.
Apr 13 20:09:06.981216 systemd[1]: Detected architecture x86-64.
Apr 13 20:09:06.981223 systemd[1]: Running in initrd.
Apr 13 20:09:06.981233 systemd[1]: No hostname configured, using default hostname.
Apr 13 20:09:06.981240 systemd[1]: Hostname set to .
Apr 13 20:09:06.981247 systemd[1]: Initializing machine ID from random generator.
Apr 13 20:09:06.981254 systemd[1]: Queued start job for default target initrd.target.
Apr 13 20:09:06.981262 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:09:06.981283 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:09:06.981296 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 20:09:06.981304 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 20:09:06.981312 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 20:09:06.981319 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 20:09:06.981328 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 20:09:06.981336 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 20:09:06.981346 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:09:06.981353 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:09:06.981360 systemd[1]: Reached target paths.target - Path Units.
Apr 13 20:09:06.981368 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 20:09:06.981375 systemd[1]: Reached target swap.target - Swaps.
Apr 13 20:09:06.981383 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 20:09:06.981390 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 20:09:06.981397 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 20:09:06.981405 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 20:09:06.981415 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 13 20:09:06.981422 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 20:09:06.981430 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 20:09:06.981437 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 20:09:06.981444 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 20:09:06.981452 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 13 20:09:06.981459 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 13 20:09:06.981467 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 13 20:09:06.981474 systemd[1]: Starting systemd-fsck-usr.service... Apr 13 20:09:06.981484 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 13 20:09:06.981491 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 13 20:09:06.981518 systemd-journald[178]: Collecting audit messages is disabled. Apr 13 20:09:06.981535 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:09:06.981547 systemd-journald[178]: Journal started Apr 13 20:09:06.981565 systemd-journald[178]: Runtime Journal (/run/log/journal/2f143277b8f548e58a5df3443cc36d46) is 8.0M, max 78.3M, 70.3M free. Apr 13 20:09:06.987944 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 20:09:06.987234 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 13 20:09:06.989279 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 20:09:06.991381 systemd-modules-load[179]: Inserted module 'overlay' Apr 13 20:09:06.995032 systemd[1]: Finished systemd-fsck-usr.service. Apr 13 20:09:06.999974 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Apr 13 20:09:07.008975 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 20:09:07.021206 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 20:09:07.113363 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 13 20:09:07.113390 kernel: Bridge firewalling registered Apr 13 20:09:07.030746 systemd-modules-load[179]: Inserted module 'br_netfilter' Apr 13 20:09:07.115345 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 20:09:07.116354 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:09:07.118080 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 20:09:07.125950 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:09:07.127966 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 20:09:07.154977 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 20:09:07.158885 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:09:07.162988 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 13 20:09:07.164823 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 20:09:07.165761 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 20:09:07.174398 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 13 20:09:07.184902 dracut-cmdline[209]: dracut-dracut-053 Apr 13 20:09:07.188364 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 20:09:07.205128 systemd-resolved[212]: Positive Trust Anchors: Apr 13 20:09:07.205144 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 20:09:07.205171 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 20:09:07.208531 systemd-resolved[212]: Defaulting to hostname 'linux'. Apr 13 20:09:07.209734 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 20:09:07.210599 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 13 20:09:07.261883 kernel: SCSI subsystem initialized Apr 13 20:09:07.271856 kernel: Loading iSCSI transport class v2.0-870. Apr 13 20:09:07.285865 kernel: iscsi: registered transport (tcp) Apr 13 20:09:07.305988 kernel: iscsi: registered transport (qla4xxx) Apr 13 20:09:07.306044 kernel: QLogic iSCSI HBA Driver Apr 13 20:09:07.351445 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Apr 13 20:09:07.356994 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 13 20:09:07.383102 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 13 20:09:07.383147 kernel: device-mapper: uevent: version 1.0.3 Apr 13 20:09:07.385209 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 13 20:09:07.427868 kernel: raid6: avx2x4 gen() 32639 MB/s Apr 13 20:09:07.445859 kernel: raid6: avx2x2 gen() 30074 MB/s Apr 13 20:09:07.464013 kernel: raid6: avx2x1 gen() 22873 MB/s Apr 13 20:09:07.464051 kernel: raid6: using algorithm avx2x4 gen() 32639 MB/s Apr 13 20:09:07.484179 kernel: raid6: .... xor() 5178 MB/s, rmw enabled Apr 13 20:09:07.484209 kernel: raid6: using avx2x2 recovery algorithm Apr 13 20:09:07.507861 kernel: xor: automatically using best checksumming function avx Apr 13 20:09:07.631867 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 13 20:09:07.643098 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 13 20:09:07.653002 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 20:09:07.664925 systemd-udevd[395]: Using default interface naming scheme 'v255'. Apr 13 20:09:07.669509 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 20:09:07.677135 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 13 20:09:07.692274 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Apr 13 20:09:07.723692 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 13 20:09:07.727976 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 20:09:07.798044 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 20:09:07.810136 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 13 20:09:07.826628 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 13 20:09:07.829783 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 20:09:07.832679 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 20:09:07.834377 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 20:09:07.845008 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 13 20:09:07.860803 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 13 20:09:07.883860 kernel: cryptd: max_cpu_qlen set to 1000 Apr 13 20:09:08.071957 kernel: scsi host0: Virtio SCSI HBA Apr 13 20:09:08.078677 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 13 20:09:08.125512 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 20:09:08.125654 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:09:08.128476 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:09:08.129355 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 20:09:08.129505 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:09:08.135585 kernel: libata version 3.00 loaded. Apr 13 20:09:08.130487 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:09:08.140214 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:09:08.159865 kernel: ahci 0000:00:1f.2: version 3.0 Apr 13 20:09:08.160127 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 13 20:09:08.170878 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 13 20:09:08.171103 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 13 20:09:08.176888 kernel: AVX2 version of gcm_enc/dec engaged. 
Apr 13 20:09:08.176917 kernel: AES CTR mode by8 optimization enabled Apr 13 20:09:08.183869 kernel: scsi host1: ahci Apr 13 20:09:08.186872 kernel: scsi host2: ahci Apr 13 20:09:08.190876 kernel: scsi host3: ahci Apr 13 20:09:08.191192 kernel: scsi host4: ahci Apr 13 20:09:08.194893 kernel: scsi host5: ahci Apr 13 20:09:08.197254 kernel: scsi host6: ahci Apr 13 20:09:08.198297 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 Apr 13 20:09:08.198327 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 Apr 13 20:09:08.198345 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 Apr 13 20:09:08.198363 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 Apr 13 20:09:08.198379 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 Apr 13 20:09:08.198395 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 Apr 13 20:09:08.304326 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:09:08.311022 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:09:08.329090 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 13 20:09:08.515864 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 13 20:09:08.515929 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 13 20:09:08.515942 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 13 20:09:08.515952 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 13 20:09:08.516860 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 13 20:09:08.518868 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 13 20:09:08.536565 kernel: sd 0:0:0:0: Power-on or device reset occurred Apr 13 20:09:08.561773 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Apr 13 20:09:08.562080 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 13 20:09:08.564058 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Apr 13 20:09:08.564310 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 13 20:09:08.573643 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 13 20:09:08.573680 kernel: GPT:9289727 != 167739391 Apr 13 20:09:08.573692 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 13 20:09:08.577849 kernel: GPT:9289727 != 167739391 Apr 13 20:09:08.577879 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 13 20:09:08.582248 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:09:08.584056 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 13 20:09:08.619881 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (463) Apr 13 20:09:08.623887 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (445) Apr 13 20:09:08.627850 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 13 20:09:08.637000 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
Apr 13 20:09:08.643636 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 13 20:09:08.645378 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 13 20:09:08.651606 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 13 20:09:08.666027 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 13 20:09:08.671710 disk-uuid[568]: Primary Header is updated. Apr 13 20:09:08.671710 disk-uuid[568]: Secondary Entries is updated. Apr 13 20:09:08.671710 disk-uuid[568]: Secondary Header is updated. Apr 13 20:09:08.677886 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:09:08.684873 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:09:09.688930 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:09:09.689905 disk-uuid[569]: The operation has completed successfully. Apr 13 20:09:09.745453 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 13 20:09:09.745627 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 13 20:09:09.771012 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 13 20:09:09.775680 sh[583]: Success Apr 13 20:09:09.791859 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 13 20:09:09.838567 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 13 20:09:09.846941 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 13 20:09:09.848063 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 13 20:09:09.866439 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d Apr 13 20:09:09.866474 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:09:09.869405 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 13 20:09:09.874623 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 13 20:09:09.874651 kernel: BTRFS info (device dm-0): using free space tree Apr 13 20:09:09.884858 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 13 20:09:09.887120 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 13 20:09:09.888448 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 13 20:09:09.893963 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 13 20:09:09.897114 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 13 20:09:09.909865 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:09:09.915807 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:09:09.915856 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:09:09.926162 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:09:09.926193 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:09:09.937380 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 13 20:09:09.941273 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:09:09.948320 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 13 20:09:09.956464 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 13 20:09:10.034918 ignition[685]: Ignition 2.19.0 Apr 13 20:09:10.034931 ignition[685]: Stage: fetch-offline Apr 13 20:09:10.034985 ignition[685]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:09:10.039369 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 20:09:10.034997 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 13 20:09:10.035082 ignition[685]: parsed url from cmdline: "" Apr 13 20:09:10.035086 ignition[685]: no config URL provided Apr 13 20:09:10.035092 ignition[685]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 20:09:10.035101 ignition[685]: no config at "/usr/lib/ignition/user.ign" Apr 13 20:09:10.035107 ignition[685]: failed to fetch config: resource requires networking Apr 13 20:09:10.035305 ignition[685]: Ignition finished successfully Apr 13 20:09:10.046633 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 20:09:10.055045 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 20:09:10.078699 systemd-networkd[770]: lo: Link UP Apr 13 20:09:10.078717 systemd-networkd[770]: lo: Gained carrier Apr 13 20:09:10.081246 systemd-networkd[770]: Enumeration completed Apr 13 20:09:10.081811 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:09:10.081817 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 20:09:10.083488 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 20:09:10.084951 systemd-networkd[770]: eth0: Link UP Apr 13 20:09:10.084959 systemd-networkd[770]: eth0: Gained carrier Apr 13 20:09:10.084970 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 13 20:09:10.087451 systemd[1]: Reached target network.target - Network. Apr 13 20:09:10.097050 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 13 20:09:10.111711 ignition[772]: Ignition 2.19.0 Apr 13 20:09:10.111725 ignition[772]: Stage: fetch Apr 13 20:09:10.111926 ignition[772]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:09:10.111942 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 13 20:09:10.112048 ignition[772]: parsed url from cmdline: "" Apr 13 20:09:10.112054 ignition[772]: no config URL provided Apr 13 20:09:10.112062 ignition[772]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 20:09:10.112076 ignition[772]: no config at "/usr/lib/ignition/user.ign" Apr 13 20:09:10.112100 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #1 Apr 13 20:09:10.112289 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 13 20:09:10.312489 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #2 Apr 13 20:09:10.312656 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 13 20:09:10.713316 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #3 Apr 13 20:09:10.713480 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 13 20:09:10.815917 systemd-networkd[770]: eth0: DHCPv4 address 172.239.193.192/24, gateway 172.239.193.1 acquired from 23.205.167.133 Apr 13 20:09:11.514245 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #4 Apr 13 20:09:11.610602 ignition[772]: PUT result: OK Apr 13 20:09:11.610672 ignition[772]: GET http://169.254.169.254/v1/user-data: attempt #1 Apr 13 20:09:11.724943 ignition[772]: GET result: OK Apr 13 20:09:11.725940 ignition[772]: parsing config with SHA512: 
8f3b1e6f26b8406dfc01fd819f0bcd4a369f54b0817ead7d6611553cf41df993ffccc22b18f612f49c842c4cda0fc588852859f16f1a4d05df0f4fea72bef299 Apr 13 20:09:11.728960 unknown[772]: fetched base config from "system" Apr 13 20:09:11.728970 unknown[772]: fetched base config from "system" Apr 13 20:09:11.729218 ignition[772]: fetch: fetch complete Apr 13 20:09:11.728976 unknown[772]: fetched user config from "akamai" Apr 13 20:09:11.729224 ignition[772]: fetch: fetch passed Apr 13 20:09:11.729268 ignition[772]: Ignition finished successfully Apr 13 20:09:11.733944 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 13 20:09:11.739980 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 13 20:09:11.753300 ignition[779]: Ignition 2.19.0 Apr 13 20:09:11.753316 ignition[779]: Stage: kargs Apr 13 20:09:11.753474 ignition[779]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:09:11.753485 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 13 20:09:11.755434 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 13 20:09:11.754171 ignition[779]: kargs: kargs passed Apr 13 20:09:11.754214 ignition[779]: Ignition finished successfully Apr 13 20:09:11.762075 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 13 20:09:11.774608 ignition[785]: Ignition 2.19.0 Apr 13 20:09:11.774623 ignition[785]: Stage: disks Apr 13 20:09:11.774827 ignition[785]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:09:11.774868 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 13 20:09:11.777106 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 13 20:09:11.775701 ignition[785]: disks: disks passed Apr 13 20:09:11.801124 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 13 20:09:11.775753 ignition[785]: Ignition finished successfully Apr 13 20:09:11.802269 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Apr 13 20:09:11.804053 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 20:09:11.805937 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 20:09:11.807492 systemd[1]: Reached target basic.target - Basic System. Apr 13 20:09:11.813317 systemd-networkd[770]: eth0: Gained IPv6LL Apr 13 20:09:11.815051 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 13 20:09:11.833316 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 13 20:09:11.837470 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 13 20:09:11.845956 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 13 20:09:11.952890 kernel: EXT4-fs (sda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none. Apr 13 20:09:11.953959 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 13 20:09:11.955544 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 13 20:09:11.961954 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 20:09:11.965961 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 13 20:09:11.969174 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 13 20:09:11.970707 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 13 20:09:11.970761 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 20:09:11.983357 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (801) Apr 13 20:09:11.983410 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:09:11.983506 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Apr 13 20:09:11.993664 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:09:11.993698 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:09:11.999046 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 13 20:09:12.005743 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:09:12.005776 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:09:12.009067 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 13 20:09:12.063120 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory Apr 13 20:09:12.067965 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory Apr 13 20:09:12.073962 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory Apr 13 20:09:12.080886 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory Apr 13 20:09:12.173258 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 13 20:09:12.178933 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 13 20:09:12.183955 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 13 20:09:12.191360 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 13 20:09:12.196453 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:09:12.215806 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 13 20:09:12.218973 ignition[919]: INFO : Ignition 2.19.0 Apr 13 20:09:12.219948 ignition[919]: INFO : Stage: mount Apr 13 20:09:12.220635 ignition[919]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:09:12.220635 ignition[919]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 13 20:09:12.223318 ignition[919]: INFO : mount: mount passed Apr 13 20:09:12.223318 ignition[919]: INFO : Ignition finished successfully Apr 13 20:09:12.222372 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 13 20:09:12.228918 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 13 20:09:12.959032 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 20:09:12.975002 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (930) Apr 13 20:09:12.980285 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:09:12.980327 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:09:12.985395 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:09:12.992223 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:09:12.992265 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:09:12.994675 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 13 20:09:13.018258 ignition[946]: INFO : Ignition 2.19.0 Apr 13 20:09:13.019538 ignition[946]: INFO : Stage: files Apr 13 20:09:13.019538 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:09:13.019538 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 13 20:09:13.023151 ignition[946]: DEBUG : files: compiled without relabeling support, skipping Apr 13 20:09:13.023151 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 13 20:09:13.023151 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 13 20:09:13.027005 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 13 20:09:13.027005 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 13 20:09:13.027005 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 13 20:09:13.027005 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 13 20:09:13.027005 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 13 20:09:13.024860 unknown[946]: wrote ssh authorized keys file for user: core Apr 13 20:09:13.218649 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 13 20:09:13.393575 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 13 20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 13 20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 13 
20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 13 20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 13 20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 13 20:09:13.413790 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 13 20:09:14.079936 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 13 20:09:14.393008 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 13 20:09:14.393008 ignition[946]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 13 20:09:14.397152 ignition[946]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:09:14.397152 ignition[946]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:09:14.397152 ignition[946]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 13 20:09:14.397152 ignition[946]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 13 20:09:14.397152 ignition[946]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 13 20:09:14.397152 ignition[946]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 13 20:09:14.397152 ignition[946]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 13 20:09:14.397152 ignition[946]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 20:09:14.397152 ignition[946]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 20:09:14.397152 ignition[946]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:09:14.397152 ignition[946]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:09:14.397152 ignition[946]: INFO : files: files passed
Apr 13 20:09:14.397152 ignition[946]: INFO : Ignition finished successfully
Apr 13 20:09:14.396874 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 20:09:14.421167 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 20:09:14.426992 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 20:09:14.429718 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 20:09:14.429829 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 20:09:14.443140 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:09:14.443140 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:09:14.446129 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:09:14.448692 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 20:09:14.450975 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 13 20:09:14.462978 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 20:09:14.494399 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 20:09:14.494525 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 20:09:14.495565 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 20:09:14.496820 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 13 20:09:14.498583 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 20:09:14.500965 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 20:09:14.517207 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 20:09:14.527982 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 20:09:14.536343 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:09:14.537270 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:09:14.539009 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 20:09:14.540639 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 20:09:14.540762 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 20:09:14.542976 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 20:09:14.545423 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 20:09:14.546199 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 20:09:14.547037 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 20:09:14.548508 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 20:09:14.550120 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 20:09:14.551724 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 20:09:14.553386 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 20:09:14.555113 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 20:09:14.556726 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 20:09:14.558330 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 20:09:14.558435 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 20:09:14.560211 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:09:14.561274 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:09:14.562733 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 20:09:14.562878 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:09:14.564360 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 20:09:14.564458 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 20:09:14.566605 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 20:09:14.566715 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 20:09:14.567758 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 20:09:14.567873 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 20:09:14.577351 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 20:09:14.578098 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 20:09:14.578258 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:09:14.580028 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 20:09:14.584539 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 20:09:14.584665 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:09:14.585532 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 20:09:14.585630 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 20:09:14.590616 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 20:09:14.590731 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 13 20:09:14.602846 ignition[999]: INFO : Ignition 2.19.0 Apr 13 20:09:14.602846 ignition[999]: INFO : Stage: umount Apr 13 20:09:14.602846 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:09:14.602846 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 13 20:09:14.602846 ignition[999]: INFO : umount: umount passed Apr 13 20:09:14.602846 ignition[999]: INFO : Ignition finished successfully Apr 13 20:09:14.604067 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 13 20:09:14.604201 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 13 20:09:14.605629 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 13 20:09:14.605709 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 13 20:09:14.607810 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 13 20:09:14.607920 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 13 20:09:14.611560 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 13 20:09:14.611613 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 13 20:09:14.612536 systemd[1]: Stopped target network.target - Network. Apr 13 20:09:14.613288 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 13 20:09:14.613344 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 20:09:14.614822 systemd[1]: Stopped target paths.target - Path Units. Apr 13 20:09:14.615526 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 13 20:09:14.619918 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 20:09:14.643071 systemd[1]: Stopped target slices.target - Slice Units. Apr 13 20:09:14.644463 systemd[1]: Stopped target sockets.target - Socket Units. 
Apr 13 20:09:14.645937 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 20:09:14.645997 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 20:09:14.647518 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 20:09:14.647565 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 20:09:14.649169 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 20:09:14.649225 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 20:09:14.650601 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 20:09:14.650651 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 20:09:14.652467 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 20:09:14.654379 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 20:09:14.657277 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 20:09:14.657889 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 20:09:14.658000 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 20:09:14.659322 systemd-networkd[770]: eth0: DHCPv6 lease lost
Apr 13 20:09:14.662146 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 20:09:14.662263 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 20:09:14.664759 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 20:09:14.664912 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 20:09:14.668194 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 20:09:14.668245 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:09:14.669214 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 20:09:14.669271 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 20:09:14.675948 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 20:09:14.677089 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 20:09:14.677146 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 20:09:14.680025 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 20:09:14.680077 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:09:14.681647 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 20:09:14.681698 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:09:14.683168 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 20:09:14.683216 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:09:14.684876 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:09:14.697188 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 20:09:14.697364 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 20:09:14.702533 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 20:09:14.702718 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:09:14.704592 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 20:09:14.704642 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:09:14.705829 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 20:09:14.705931 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:09:14.707491 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 20:09:14.707545 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 20:09:14.709747 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 20:09:14.709800 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 20:09:14.711386 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 20:09:14.711435 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:09:14.720236 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 20:09:14.721959 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 20:09:14.722021 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:09:14.722811 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:09:14.723799 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:09:14.726601 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 20:09:14.726705 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 20:09:14.731915 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 20:09:14.739279 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 20:09:14.745167 systemd[1]: Switching root.
Apr 13 20:09:14.784989 systemd-journald[178]: Journal stopped
Apr 13 20:09:06.972750 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 13 20:09:06.972771 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:09:06.972780 kernel: BIOS-provided physical RAM map:
Apr 13 20:09:06.972786 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Apr 13 20:09:06.972792 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Apr 13 20:09:06.972800 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 13 20:09:06.972807 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Apr 13 20:09:06.972813 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Apr 13 20:09:06.972818 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 13 20:09:06.972824 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 13 20:09:06.972843 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 13 20:09:06.972865 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 13 20:09:06.972871 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Apr 13 20:09:06.972880 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 13 20:09:06.972887 kernel: NX (Execute Disable) protection: active
Apr 13 20:09:06.972893 kernel: APIC: Static calls initialized
Apr 13 20:09:06.972899 kernel: SMBIOS 2.8 present.
Apr 13 20:09:06.972906 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Apr 13 20:09:06.972912 kernel: Hypervisor detected: KVM
Apr 13 20:09:06.972920 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 13 20:09:06.972926 kernel: kvm-clock: using sched offset of 5602788488 cycles
Apr 13 20:09:06.972932 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 13 20:09:06.972938 kernel: tsc: Detected 1999.998 MHz processor
Apr 13 20:09:06.972945 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 13 20:09:06.972951 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 13 20:09:06.972957 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Apr 13 20:09:06.972964 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 13 20:09:06.972970 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 13 20:09:06.972978 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Apr 13 20:09:06.972985 kernel: Using GB pages for direct mapping
Apr 13 20:09:06.972991 kernel: ACPI: Early table checksum verification disabled
Apr 13 20:09:06.972997 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Apr 13 20:09:06.973003 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:09:06.973010 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:09:06.973016 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:09:06.973022 kernel: ACPI: FACS 0x000000007FFE0000 000040
Apr 13 20:09:06.973028 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:09:06.973036 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:09:06.973042 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:09:06.973049 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:09:06.973058 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Apr 13 20:09:06.973065 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Apr 13 20:09:06.973071 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Apr 13 20:09:06.973080 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Apr 13 20:09:06.973087 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Apr 13 20:09:06.973093 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Apr 13 20:09:06.973100 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Apr 13 20:09:06.973106 kernel: No NUMA configuration found
Apr 13 20:09:06.973113 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Apr 13 20:09:06.973119 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff]
Apr 13 20:09:06.973125 kernel: Zone ranges:
Apr 13 20:09:06.973134 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 13 20:09:06.973141 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 13 20:09:06.973147 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Apr 13 20:09:06.973153 kernel: Movable zone start for each node
Apr 13 20:09:06.973160 kernel: Early memory node ranges
Apr 13 20:09:06.973166 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 13 20:09:06.973172 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Apr 13 20:09:06.973179 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Apr 13 20:09:06.973185 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Apr 13 20:09:06.973191 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 13 20:09:06.973200 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 13 20:09:06.973207 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Apr 13 20:09:06.973213 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 13 20:09:06.973219 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 13 20:09:06.973226 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 13 20:09:06.973232 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 13 20:09:06.973239 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 13 20:09:06.973245 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 13 20:09:06.973252 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 13 20:09:06.973261 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 13 20:09:06.973267 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 13 20:09:06.973273 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 13 20:09:06.973280 kernel: TSC deadline timer available
Apr 13 20:09:06.973286 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 13 20:09:06.973292 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 13 20:09:06.973299 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 13 20:09:06.973305 kernel: kvm-guest: setup PV sched yield
Apr 13 20:09:06.973312 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 13 20:09:06.973320 kernel: Booting paravirtualized kernel on KVM
Apr 13 20:09:06.973327 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 13 20:09:06.973334 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 13 20:09:06.973340 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 13 20:09:06.973346 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 13 20:09:06.973353 kernel: pcpu-alloc: [0] 0 1
Apr 13 20:09:06.973359 kernel: kvm-guest: PV spinlocks enabled
Apr 13 20:09:06.973366 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 13 20:09:06.973373 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:09:06.973382 kernel: random: crng init done
Apr 13 20:09:06.973388 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 20:09:06.973395 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 20:09:06.973401 kernel: Fallback order for Node 0: 0
Apr 13 20:09:06.973408 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Apr 13 20:09:06.973414 kernel: Policy zone: Normal
Apr 13 20:09:06.973421 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 20:09:06.973427 kernel: software IO TLB: area num 2.
Apr 13 20:09:06.973436 kernel: Memory: 3966220K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 227292K reserved, 0K cma-reserved)
Apr 13 20:09:06.973443 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 20:09:06.973449 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 13 20:09:06.973456 kernel: ftrace: allocated 149 pages with 4 groups
Apr 13 20:09:06.973462 kernel: Dynamic Preempt: voluntary
Apr 13 20:09:06.973468 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 20:09:06.973475 kernel: rcu: RCU event tracing is enabled.
Apr 13 20:09:06.973482 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 20:09:06.973489 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 20:09:06.973498 kernel: Rude variant of Tasks RCU enabled.
Apr 13 20:09:06.973504 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 20:09:06.973511 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 20:09:06.973517 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 20:09:06.973524 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 13 20:09:06.973530 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 20:09:06.973536 kernel: Console: colour VGA+ 80x25
Apr 13 20:09:06.973543 kernel: printk: console [tty0] enabled
Apr 13 20:09:06.973549 kernel: printk: console [ttyS0] enabled
Apr 13 20:09:06.973558 kernel: ACPI: Core revision 20230628
Apr 13 20:09:06.973565 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 13 20:09:06.973571 kernel: APIC: Switch to symmetric I/O mode setup
Apr 13 20:09:06.973578 kernel: x2apic enabled
Apr 13 20:09:06.973592 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 13 20:09:06.973601 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 13 20:09:06.973608 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 13 20:09:06.973615 kernel: kvm-guest: setup PV IPIs
Apr 13 20:09:06.973621 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 13 20:09:06.973628 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 13 20:09:06.973634 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999998)
Apr 13 20:09:06.973641 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 13 20:09:06.973651 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 13 20:09:06.973657 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 13 20:09:06.973664 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 13 20:09:06.973671 kernel: Spectre V2 : Mitigation: Retpolines
Apr 13 20:09:06.973678 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 13 20:09:06.973687 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 13 20:09:06.973694 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 13 20:09:06.973701 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 13 20:09:06.973707 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Apr 13 20:09:06.973715 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Apr 13 20:09:06.973721 kernel: active return thunk: srso_alias_return_thunk
Apr 13 20:09:06.973728 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Apr 13 20:09:06.973735 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 13 20:09:06.973745 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 13 20:09:06.973751 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 13 20:09:06.973758 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 13 20:09:06.973765 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 13 20:09:06.973772 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 13 20:09:06.973779 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 13 20:09:06.973785 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Apr 13 20:09:06.973792 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Apr 13 20:09:06.973799 kernel: Freeing SMP alternatives memory: 32K
Apr 13 20:09:06.973808 kernel: pid_max: default: 32768 minimum: 301
Apr 13 20:09:06.973815 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 20:09:06.973822 kernel: landlock: Up and running.
Apr 13 20:09:06.973829 kernel: SELinux: Initializing.
Apr 13 20:09:06.973870 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 20:09:06.973877 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 20:09:06.973884 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Apr 13 20:09:06.973891 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:09:06.973898 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:09:06.973908 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:09:06.973915 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 13 20:09:06.973922 kernel: ... version: 0
Apr 13 20:09:06.973928 kernel: ... bit width: 48
Apr 13 20:09:06.973935 kernel: ... generic registers: 6
Apr 13 20:09:06.973942 kernel: ... value mask: 0000ffffffffffff
Apr 13 20:09:06.973949 kernel: ... max period: 00007fffffffffff
Apr 13 20:09:06.973955 kernel: ... fixed-purpose events: 0
Apr 13 20:09:06.973962 kernel: ... event mask: 000000000000003f
Apr 13 20:09:06.973971 kernel: signal: max sigframe size: 3376
Apr 13 20:09:06.973978 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 20:09:06.973985 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 20:09:06.973992 kernel: smp: Bringing up secondary CPUs ...
Apr 13 20:09:06.973998 kernel: smpboot: x86: Booting SMP configuration:
Apr 13 20:09:06.974005 kernel: .... node #0, CPUs: #1
Apr 13 20:09:06.974012 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 20:09:06.974018 kernel: smpboot: Max logical packages: 1
Apr 13 20:09:06.974025 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Apr 13 20:09:06.974034 kernel: devtmpfs: initialized
Apr 13 20:09:06.974041 kernel: x86/mm: Memory block size: 128MB
Apr 13 20:09:06.974048 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 20:09:06.974055 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 20:09:06.974062 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 20:09:06.974068 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 20:09:06.974075 kernel: audit: initializing netlink subsys (disabled)
Apr 13 20:09:06.974082 kernel: audit: type=2000 audit(1776110946.233:1): state=initialized audit_enabled=0 res=1
Apr 13 20:09:06.974089 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 20:09:06.974098 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 13 20:09:06.974105 kernel: cpuidle: using governor menu
Apr 13 20:09:06.974112 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 20:09:06.974118 kernel: dca service started, version 1.12.1
Apr 13 20:09:06.974125 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 13 20:09:06.974132 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 13 20:09:06.974138 kernel: PCI: Using configuration type 1 for base access
Apr 13 20:09:06.974145 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 13 20:09:06.974152 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 20:09:06.974162 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 20:09:06.974168 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 20:09:06.974175 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 20:09:06.974182 kernel: ACPI: Added _OSI(Module Device)
Apr 13 20:09:06.974189 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 20:09:06.974196 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 20:09:06.974202 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 13 20:09:06.974209 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 13 20:09:06.974216 kernel: ACPI: Interpreter enabled
Apr 13 20:09:06.974225 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 13 20:09:06.974232 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 13 20:09:06.974238 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 13 20:09:06.974245 kernel: PCI: Using E820 reservations for host bridge windows
Apr 13 20:09:06.974252 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 13 20:09:06.974259 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 13 20:09:06.974443 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 20:09:06.974586 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 13 20:09:06.974720 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 13 20:09:06.974730 kernel: PCI host bridge to bus 0000:00
Apr 13 20:09:06.974899 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 13 20:09:06.975024 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 13 20:09:06.975141 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 13 20:09:06.975255 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Apr 13 20:09:06.975370 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 13 20:09:06.975492 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Apr 13 20:09:06.975608 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 13 20:09:06.975751 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 13 20:09:06.975920 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 13 20:09:06.976052 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 13 20:09:06.976178 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 13 20:09:06.976333 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 13 20:09:06.976462 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 13 20:09:06.976599 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Apr 13 20:09:06.976727 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Apr 13 20:09:06.976881 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 13 20:09:06.977011 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 13 20:09:06.977146 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Apr 13 20:09:06.977279 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Apr 13 20:09:06.977446 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 13 20:09:06.977574 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 13 20:09:06.977700 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 13 20:09:06.977872 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 13 20:09:06.978010 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 13 20:09:06.978144 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 13 20:09:06.978275 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Apr 13 20:09:06.978399 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Apr 13 20:09:06.978532 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 13 20:09:06.978657 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 13 20:09:06.978667 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 13 20:09:06.978674 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 13 20:09:06.978681 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 13 20:09:06.978691 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 13 20:09:06.978698 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 13 20:09:06.978705 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 13 20:09:06.978711 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 13 20:09:06.978718 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 13 20:09:06.978725 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 13 20:09:06.978731 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 13 20:09:06.978739 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 13 20:09:06.978745 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 13 20:09:06.978755 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 13 20:09:06.978761 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 13 20:09:06.978768 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 13 20:09:06.978775 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 13 20:09:06.978782 kernel: iommu: Default domain type: Translated Apr 13 20:09:06.978788 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 13 20:09:06.978795 kernel: PCI: Using ACPI for IRQ routing Apr 13 20:09:06.978802 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 13 20:09:06.978809 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] Apr 13 20:09:06.978818 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Apr 13 20:09:06.979025 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Apr 13 20:09:06.979152 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 13 20:09:06.979276 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 13 20:09:06.979286 kernel: vgaarb: loaded Apr 13 20:09:06.979293 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Apr 13 20:09:06.979300 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Apr 13 20:09:06.979306 kernel: clocksource: Switched to clocksource kvm-clock Apr 13 20:09:06.979318 kernel: VFS: Disk quotas dquot_6.6.0 Apr 13 20:09:06.979325 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 13 20:09:06.979332 kernel: pnp: PnP ACPI init Apr 13 20:09:06.979466 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 13 20:09:06.979477 kernel: pnp: PnP ACPI: found 5 devices Apr 13 20:09:06.979484 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 13 20:09:06.979490 kernel: NET: Registered PF_INET protocol family Apr 13 20:09:06.979497 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, 
linear) Apr 13 20:09:06.979508 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 13 20:09:06.979515 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 13 20:09:06.979522 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 13 20:09:06.979529 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 13 20:09:06.979535 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 13 20:09:06.979542 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 20:09:06.979549 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 13 20:09:06.979556 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 13 20:09:06.979563 kernel: NET: Registered PF_XDP protocol family Apr 13 20:09:06.979681 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 13 20:09:06.979796 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 13 20:09:06.979967 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 13 20:09:06.980084 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Apr 13 20:09:06.980199 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 13 20:09:06.980313 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] Apr 13 20:09:06.980323 kernel: PCI: CLS 0 bytes, default 64 Apr 13 20:09:06.980330 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 13 20:09:06.980341 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) Apr 13 20:09:06.980348 kernel: Initialise system trusted keyrings Apr 13 20:09:06.980355 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 13 20:09:06.980362 kernel: Key type asymmetric registered Apr 13 20:09:06.980368 kernel: Asymmetric key parser 'x509' registered Apr 13 20:09:06.980375 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 251) Apr 13 20:09:06.980382 kernel: io scheduler mq-deadline registered Apr 13 20:09:06.980389 kernel: io scheduler kyber registered Apr 13 20:09:06.980396 kernel: io scheduler bfq registered Apr 13 20:09:06.980402 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 13 20:09:06.980412 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 13 20:09:06.980419 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 13 20:09:06.980426 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 13 20:09:06.980433 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 13 20:09:06.980440 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 13 20:09:06.980447 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 13 20:09:06.980454 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 13 20:09:06.980582 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 13 20:09:06.980596 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 13 20:09:06.980713 kernel: rtc_cmos 00:03: registered as rtc0 Apr 13 20:09:06.980873 kernel: rtc_cmos 00:03: setting system clock to 2026-04-13T20:09:06 UTC (1776110946) Apr 13 20:09:06.981001 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 13 20:09:06.981012 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Apr 13 20:09:06.981019 kernel: NET: Registered PF_INET6 protocol family Apr 13 20:09:06.981025 kernel: Segment Routing with IPv6 Apr 13 20:09:06.981032 kernel: In-situ OAM (IOAM) with IPv6 Apr 13 20:09:06.981043 kernel: NET: Registered PF_PACKET protocol family Apr 13 20:09:06.981050 kernel: Key type dns_resolver registered Apr 13 20:09:06.981057 kernel: IPI shorthand broadcast: enabled Apr 13 20:09:06.981064 kernel: sched_clock: Marking stable (873005788, 315006365)->(1316800407, -128788254) Apr 13 20:09:06.981070 kernel: registered taskstats 
version 1 Apr 13 20:09:06.981077 kernel: Loading compiled-in X.509 certificates Apr 13 20:09:06.981084 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00' Apr 13 20:09:06.981091 kernel: Key type .fscrypt registered Apr 13 20:09:06.981098 kernel: Key type fscrypt-provisioning registered Apr 13 20:09:06.981108 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 13 20:09:06.981115 kernel: ima: Allocated hash algorithm: sha1 Apr 13 20:09:06.981121 kernel: ima: No architecture policies found Apr 13 20:09:06.981128 kernel: clk: Disabling unused clocks Apr 13 20:09:06.981135 kernel: Freeing unused kernel image (initmem) memory: 42896K Apr 13 20:09:06.981142 kernel: Write protecting the kernel read-only data: 36864k Apr 13 20:09:06.981149 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 13 20:09:06.981155 kernel: Run /init as init process Apr 13 20:09:06.981162 kernel: with arguments: Apr 13 20:09:06.981171 kernel: /init Apr 13 20:09:06.981178 kernel: with environment: Apr 13 20:09:06.981185 kernel: HOME=/ Apr 13 20:09:06.981191 kernel: TERM=linux Apr 13 20:09:06.981200 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 13 20:09:06.981209 systemd[1]: Detected virtualization kvm. Apr 13 20:09:06.981216 systemd[1]: Detected architecture x86-64. Apr 13 20:09:06.981223 systemd[1]: Running in initrd. Apr 13 20:09:06.981233 systemd[1]: No hostname configured, using default hostname. Apr 13 20:09:06.981240 systemd[1]: Hostname set to . Apr 13 20:09:06.981247 systemd[1]: Initializing machine ID from random generator. 
Apr 13 20:09:06.981254 systemd[1]: Queued start job for default target initrd.target. Apr 13 20:09:06.981262 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 20:09:06.981283 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 20:09:06.981296 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 13 20:09:06.981304 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 13 20:09:06.981312 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 13 20:09:06.981319 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 13 20:09:06.981328 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 13 20:09:06.981336 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 13 20:09:06.981346 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 20:09:06.981353 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 20:09:06.981360 systemd[1]: Reached target paths.target - Path Units. Apr 13 20:09:06.981368 systemd[1]: Reached target slices.target - Slice Units. Apr 13 20:09:06.981375 systemd[1]: Reached target swap.target - Swaps. Apr 13 20:09:06.981383 systemd[1]: Reached target timers.target - Timer Units. Apr 13 20:09:06.981390 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 20:09:06.981397 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 20:09:06.981405 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Apr 13 20:09:06.981415 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 13 20:09:06.981422 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 20:09:06.981430 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 20:09:06.981437 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 20:09:06.981444 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 20:09:06.981452 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 13 20:09:06.981459 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 13 20:09:06.981467 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 13 20:09:06.981474 systemd[1]: Starting systemd-fsck-usr.service... Apr 13 20:09:06.981484 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 13 20:09:06.981491 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 13 20:09:06.981518 systemd-journald[178]: Collecting audit messages is disabled. Apr 13 20:09:06.981535 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:09:06.981547 systemd-journald[178]: Journal started Apr 13 20:09:06.981565 systemd-journald[178]: Runtime Journal (/run/log/journal/2f143277b8f548e58a5df3443cc36d46) is 8.0M, max 78.3M, 70.3M free. Apr 13 20:09:06.987944 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 20:09:06.987234 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 13 20:09:06.989279 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 20:09:06.991381 systemd-modules-load[179]: Inserted module 'overlay' Apr 13 20:09:06.995032 systemd[1]: Finished systemd-fsck-usr.service. Apr 13 20:09:06.999974 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Apr 13 20:09:07.008975 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 20:09:07.021206 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 20:09:07.113363 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 13 20:09:07.113390 kernel: Bridge firewalling registered Apr 13 20:09:07.030746 systemd-modules-load[179]: Inserted module 'br_netfilter' Apr 13 20:09:07.115345 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 20:09:07.116354 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:09:07.118080 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 20:09:07.125950 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:09:07.127966 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 20:09:07.154977 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 20:09:07.158885 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:09:07.162988 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 13 20:09:07.164823 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 20:09:07.165761 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 20:09:07.174398 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 13 20:09:07.184902 dracut-cmdline[209]: dracut-dracut-053 Apr 13 20:09:07.188364 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 20:09:07.205128 systemd-resolved[212]: Positive Trust Anchors: Apr 13 20:09:07.205144 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 20:09:07.205171 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 20:09:07.208531 systemd-resolved[212]: Defaulting to hostname 'linux'. Apr 13 20:09:07.209734 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 20:09:07.210599 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 13 20:09:07.261883 kernel: SCSI subsystem initialized Apr 13 20:09:07.271856 kernel: Loading iSCSI transport class v2.0-870. Apr 13 20:09:07.285865 kernel: iscsi: registered transport (tcp) Apr 13 20:09:07.305988 kernel: iscsi: registered transport (qla4xxx) Apr 13 20:09:07.306044 kernel: QLogic iSCSI HBA Driver Apr 13 20:09:07.351445 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Apr 13 20:09:07.356994 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 13 20:09:07.383102 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 13 20:09:07.383147 kernel: device-mapper: uevent: version 1.0.3 Apr 13 20:09:07.385209 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 13 20:09:07.427868 kernel: raid6: avx2x4 gen() 32639 MB/s Apr 13 20:09:07.445859 kernel: raid6: avx2x2 gen() 30074 MB/s Apr 13 20:09:07.464013 kernel: raid6: avx2x1 gen() 22873 MB/s Apr 13 20:09:07.464051 kernel: raid6: using algorithm avx2x4 gen() 32639 MB/s Apr 13 20:09:07.484179 kernel: raid6: .... xor() 5178 MB/s, rmw enabled Apr 13 20:09:07.484209 kernel: raid6: using avx2x2 recovery algorithm Apr 13 20:09:07.507861 kernel: xor: automatically using best checksumming function avx Apr 13 20:09:07.631867 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 13 20:09:07.643098 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 13 20:09:07.653002 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 20:09:07.664925 systemd-udevd[395]: Using default interface naming scheme 'v255'. Apr 13 20:09:07.669509 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 20:09:07.677135 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 13 20:09:07.692274 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Apr 13 20:09:07.723692 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 13 20:09:07.727976 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 20:09:07.798044 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 20:09:07.810136 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 13 20:09:07.826628 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 13 20:09:07.829783 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 20:09:07.832679 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 20:09:07.834377 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 20:09:07.845008 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 13 20:09:07.860803 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 13 20:09:07.883860 kernel: cryptd: max_cpu_qlen set to 1000 Apr 13 20:09:08.071957 kernel: scsi host0: Virtio SCSI HBA Apr 13 20:09:08.078677 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 13 20:09:08.125512 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 20:09:08.125654 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:09:08.128476 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:09:08.129355 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 20:09:08.129505 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:09:08.135585 kernel: libata version 3.00 loaded. Apr 13 20:09:08.130487 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:09:08.140214 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:09:08.159865 kernel: ahci 0000:00:1f.2: version 3.0 Apr 13 20:09:08.160127 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 13 20:09:08.170878 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 13 20:09:08.171103 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 13 20:09:08.176888 kernel: AVX2 version of gcm_enc/dec engaged. 
Apr 13 20:09:08.176917 kernel: AES CTR mode by8 optimization enabled Apr 13 20:09:08.183869 kernel: scsi host1: ahci Apr 13 20:09:08.186872 kernel: scsi host2: ahci Apr 13 20:09:08.190876 kernel: scsi host3: ahci Apr 13 20:09:08.191192 kernel: scsi host4: ahci Apr 13 20:09:08.194893 kernel: scsi host5: ahci Apr 13 20:09:08.197254 kernel: scsi host6: ahci Apr 13 20:09:08.198297 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 Apr 13 20:09:08.198327 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 Apr 13 20:09:08.198345 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 Apr 13 20:09:08.198363 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 Apr 13 20:09:08.198379 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 Apr 13 20:09:08.198395 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 Apr 13 20:09:08.304326 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:09:08.311022 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:09:08.329090 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 13 20:09:08.515864 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 13 20:09:08.515929 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 13 20:09:08.515942 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 13 20:09:08.515952 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 13 20:09:08.516860 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 13 20:09:08.518868 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 13 20:09:08.536565 kernel: sd 0:0:0:0: Power-on or device reset occurred Apr 13 20:09:08.561773 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Apr 13 20:09:08.562080 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 13 20:09:08.564058 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Apr 13 20:09:08.564310 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 13 20:09:08.573643 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 13 20:09:08.573680 kernel: GPT:9289727 != 167739391 Apr 13 20:09:08.573692 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 13 20:09:08.577849 kernel: GPT:9289727 != 167739391 Apr 13 20:09:08.577879 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 13 20:09:08.582248 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:09:08.584056 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 13 20:09:08.619881 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (463) Apr 13 20:09:08.623887 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (445) Apr 13 20:09:08.627850 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 13 20:09:08.637000 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
Apr 13 20:09:08.643636 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 13 20:09:08.645378 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 13 20:09:08.651606 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 13 20:09:08.666027 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 13 20:09:08.671710 disk-uuid[568]: Primary Header is updated. Apr 13 20:09:08.671710 disk-uuid[568]: Secondary Entries is updated. Apr 13 20:09:08.671710 disk-uuid[568]: Secondary Header is updated. Apr 13 20:09:08.677886 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:09:08.684873 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:09:09.688930 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:09:09.689905 disk-uuid[569]: The operation has completed successfully. Apr 13 20:09:09.745453 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 13 20:09:09.745627 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 13 20:09:09.771012 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 13 20:09:09.775680 sh[583]: Success Apr 13 20:09:09.791859 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 13 20:09:09.838567 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 13 20:09:09.846941 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 13 20:09:09.848063 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 13 20:09:09.866439 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d Apr 13 20:09:09.866474 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:09:09.869405 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 13 20:09:09.874623 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 13 20:09:09.874651 kernel: BTRFS info (device dm-0): using free space tree Apr 13 20:09:09.884858 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 13 20:09:09.887120 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 13 20:09:09.888448 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 13 20:09:09.893963 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 13 20:09:09.897114 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 13 20:09:09.909865 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:09:09.915807 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:09:09.915856 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:09:09.926162 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:09:09.926193 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:09:09.937380 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 13 20:09:09.941273 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:09:09.948320 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 13 20:09:09.956464 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 13 20:09:10.034918 ignition[685]: Ignition 2.19.0 Apr 13 20:09:10.034931 ignition[685]: Stage: fetch-offline Apr 13 20:09:10.034985 ignition[685]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:09:10.039369 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 20:09:10.034997 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 13 20:09:10.035082 ignition[685]: parsed url from cmdline: "" Apr 13 20:09:10.035086 ignition[685]: no config URL provided Apr 13 20:09:10.035092 ignition[685]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 20:09:10.035101 ignition[685]: no config at "/usr/lib/ignition/user.ign" Apr 13 20:09:10.035107 ignition[685]: failed to fetch config: resource requires networking Apr 13 20:09:10.035305 ignition[685]: Ignition finished successfully Apr 13 20:09:10.046633 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 20:09:10.055045 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 20:09:10.078699 systemd-networkd[770]: lo: Link UP Apr 13 20:09:10.078717 systemd-networkd[770]: lo: Gained carrier Apr 13 20:09:10.081246 systemd-networkd[770]: Enumeration completed Apr 13 20:09:10.081811 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:09:10.081817 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 20:09:10.083488 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 20:09:10.084951 systemd-networkd[770]: eth0: Link UP Apr 13 20:09:10.084959 systemd-networkd[770]: eth0: Gained carrier Apr 13 20:09:10.084970 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 13 20:09:10.087451 systemd[1]: Reached target network.target - Network.
Apr 13 20:09:10.097050 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 13 20:09:10.111711 ignition[772]: Ignition 2.19.0
Apr 13 20:09:10.111725 ignition[772]: Stage: fetch
Apr 13 20:09:10.111926 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:09:10.111942 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 13 20:09:10.112048 ignition[772]: parsed url from cmdline: ""
Apr 13 20:09:10.112054 ignition[772]: no config URL provided
Apr 13 20:09:10.112062 ignition[772]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 20:09:10.112076 ignition[772]: no config at "/usr/lib/ignition/user.ign"
Apr 13 20:09:10.112100 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #1
Apr 13 20:09:10.112289 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 13 20:09:10.312489 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #2
Apr 13 20:09:10.312656 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 13 20:09:10.713316 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #3
Apr 13 20:09:10.713480 ignition[772]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 13 20:09:10.815917 systemd-networkd[770]: eth0: DHCPv4 address 172.239.193.192/24, gateway 172.239.193.1 acquired from 23.205.167.133
Apr 13 20:09:11.514245 ignition[772]: PUT http://169.254.169.254/v1/token: attempt #4
Apr 13 20:09:11.610602 ignition[772]: PUT result: OK
Apr 13 20:09:11.610672 ignition[772]: GET http://169.254.169.254/v1/user-data: attempt #1
Apr 13 20:09:11.724943 ignition[772]: GET result: OK
Apr 13 20:09:11.725940 ignition[772]: parsing config with SHA512: 8f3b1e6f26b8406dfc01fd819f0bcd4a369f54b0817ead7d6611553cf41df993ffccc22b18f612f49c842c4cda0fc588852859f16f1a4d05df0f4fea72bef299
Apr 13 20:09:11.728960 unknown[772]: fetched base config from "system"
Apr 13 20:09:11.728970 unknown[772]: fetched base config from "system"
Apr 13 20:09:11.729218 ignition[772]: fetch: fetch complete
Apr 13 20:09:11.728976 unknown[772]: fetched user config from "akamai"
Apr 13 20:09:11.729224 ignition[772]: fetch: fetch passed
Apr 13 20:09:11.729268 ignition[772]: Ignition finished successfully
Apr 13 20:09:11.733944 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 13 20:09:11.739980 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 20:09:11.753300 ignition[779]: Ignition 2.19.0
Apr 13 20:09:11.753316 ignition[779]: Stage: kargs
Apr 13 20:09:11.753474 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:09:11.753485 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 13 20:09:11.755434 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 20:09:11.754171 ignition[779]: kargs: kargs passed
Apr 13 20:09:11.754214 ignition[779]: Ignition finished successfully
Apr 13 20:09:11.762075 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 13 20:09:11.774608 ignition[785]: Ignition 2.19.0
Apr 13 20:09:11.774623 ignition[785]: Stage: disks
Apr 13 20:09:11.774827 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:09:11.774868 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 13 20:09:11.777106 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 20:09:11.775701 ignition[785]: disks: disks passed
Apr 13 20:09:11.801124 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 20:09:11.775753 ignition[785]: Ignition finished successfully
Apr 13 20:09:11.802269 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
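[Editor's note] The fetch stage above shows Ignition's metadata retries racing network bring-up: the PUT to http://169.254.169.254/v1/token fails while eth0 is still down, with the gap between attempts roughly doubling (~0.2 s, 0.4 s, 0.8 s), and attempt #4 succeeds once DHCPv4 completes. A minimal sketch of that retry pattern follows; `fetch_with_backoff` and `fake_put_token` are hypothetical names standing in for the real HTTP call, not Ignition's actual Go implementation.

```python
import time

def fetch_with_backoff(fetch, attempts=10, initial_delay=0.2):
    """Retry `fetch` with doubling backoff, mirroring the attempt #1..#4
    spacing visible in the log (~0.2 s, 0.4 s, 0.8 s between tries)."""
    delay = initial_delay
    for attempt in range(1, attempts + 1):
        try:
            return fetch()
        except OSError as err:  # e.g. "network is unreachable" while eth0 is down
            if attempt == attempts:
                raise
            print(f"PUT error (attempt #{attempt}): {err}")
            time.sleep(delay)
            delay *= 2

# Simulated metadata endpoint: fails until the "network" comes up on call #4,
# like the DHCPv4 lease arriving between attempts #3 and #4 above.
calls = {"n": 0}
def fake_put_token():
    calls["n"] += 1
    if calls["n"] < 4:
        raise OSError("dial tcp 169.254.169.254:80: connect: network is unreachable")
    return "token-ok"

print(fetch_with_backoff(fake_put_token, initial_delay=0.001))  # → token-ok
```

The key property, visible in the log, is that the retry loop never gives up just because the NIC is not configured yet; it simply waits out networkd.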
Apr 13 20:09:11.804053 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 20:09:11.805937 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 20:09:11.807492 systemd[1]: Reached target basic.target - Basic System.
Apr 13 20:09:11.813317 systemd-networkd[770]: eth0: Gained IPv6LL
Apr 13 20:09:11.815051 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 13 20:09:11.833316 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 13 20:09:11.837470 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 13 20:09:11.845956 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 13 20:09:11.952890 kernel: EXT4-fs (sda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none.
Apr 13 20:09:11.953959 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 13 20:09:11.955544 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 13 20:09:11.961954 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 20:09:11.965961 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 13 20:09:11.969174 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 13 20:09:11.970707 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 13 20:09:11.970761 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 20:09:11.983357 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (801)
Apr 13 20:09:11.983410 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:09:11.983506 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 13 20:09:11.993664 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:09:11.993698 kernel: BTRFS info (device sda6): using free space tree
Apr 13 20:09:11.999046 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 13 20:09:12.005743 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 20:09:12.005776 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 20:09:12.009067 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 20:09:12.063120 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Apr 13 20:09:12.067965 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Apr 13 20:09:12.073962 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Apr 13 20:09:12.080886 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 13 20:09:12.173258 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 13 20:09:12.178933 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 13 20:09:12.183955 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 13 20:09:12.191360 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 13 20:09:12.196453 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:09:12.215806 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 13 20:09:12.218973 ignition[919]: INFO : Ignition 2.19.0
Apr 13 20:09:12.219948 ignition[919]: INFO : Stage: mount
Apr 13 20:09:12.220635 ignition[919]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:09:12.220635 ignition[919]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 13 20:09:12.223318 ignition[919]: INFO : mount: mount passed
Apr 13 20:09:12.223318 ignition[919]: INFO : Ignition finished successfully
Apr 13 20:09:12.222372 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 13 20:09:12.228918 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 13 20:09:12.959032 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 20:09:12.975002 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (930)
Apr 13 20:09:12.980285 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:09:12.980327 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:09:12.985395 kernel: BTRFS info (device sda6): using free space tree
Apr 13 20:09:12.992223 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 20:09:12.992265 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 20:09:12.994675 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 20:09:13.018258 ignition[946]: INFO : Ignition 2.19.0
Apr 13 20:09:13.019538 ignition[946]: INFO : Stage: files
Apr 13 20:09:13.019538 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:09:13.019538 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 13 20:09:13.023151 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 20:09:13.023151 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 20:09:13.023151 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 20:09:13.027005 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 20:09:13.027005 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 20:09:13.027005 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 20:09:13.027005 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 20:09:13.027005 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 13 20:09:13.024860 unknown[946]: wrote ssh authorized keys file for user: core
Apr 13 20:09:13.218649 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 13 20:09:13.393575 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 13 20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 13 20:09:13.395201 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 13 20:09:13.413790 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 13 20:09:14.079936 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 13 20:09:14.393008 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 13 20:09:14.393008 ignition[946]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 13 20:09:14.397152 ignition[946]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:09:14.397152 ignition[946]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:09:14.397152 ignition[946]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 13 20:09:14.397152 ignition[946]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 13 20:09:14.397152 ignition[946]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 13 20:09:14.397152 ignition[946]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 13 20:09:14.397152 ignition[946]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 13 20:09:14.397152 ignition[946]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 20:09:14.397152 ignition[946]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 20:09:14.397152 ignition[946]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:09:14.397152 ignition[946]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:09:14.397152 ignition[946]: INFO : files: files passed
Apr 13 20:09:14.397152 ignition[946]: INFO : Ignition finished successfully
Apr 13 20:09:14.396874 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 20:09:14.421167 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 20:09:14.426992 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 20:09:14.429718 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 20:09:14.429829 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 20:09:14.443140 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:09:14.443140 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:09:14.446129 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:09:14.448692 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 20:09:14.450975 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 13 20:09:14.462978 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 20:09:14.494399 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 20:09:14.494525 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 20:09:14.495565 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 20:09:14.496820 systemd[1]: Reached target initrd.target - Initrd Default Target.
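[Editor's note] Earlier in the fetch stage, Ignition logged "parsing config with SHA512:" followed by a 128-character hex digest of the fetched user-data, and again fingerprints the config before the files stage above runs. A minimal sketch of how such a fingerprint is computed; `config_digest` is a hypothetical helper illustrating the hash, not Ignition's actual Go code.

```python
import hashlib

def config_digest(config_bytes: bytes) -> str:
    """Return a SHA-512 hex digest of a config blob, the same shape as the
    128-hex-char value Ignition logs before parsing a fetched config."""
    return hashlib.sha512(config_bytes).hexdigest()

# Any fetched user-data can be fingerprinted the same way; a SHA-512
# digest is 64 bytes, so hex encoding always yields 128 characters.
digest = config_digest(b'{"ignition": {"version": "3.4.0"}}')
print(len(digest))  # → 128
```

Logging the digest rather than the config body lets an operator match the boot against a known user-data payload without leaking its contents into the journal.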
Apr 13 20:09:14.498583 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 20:09:14.500965 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 20:09:14.517207 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 20:09:14.527982 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 20:09:14.536343 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:09:14.537270 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:09:14.539009 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 20:09:14.540639 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 20:09:14.540762 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 20:09:14.542976 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 20:09:14.545423 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 20:09:14.546199 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 20:09:14.547037 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 20:09:14.548508 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 20:09:14.550120 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 20:09:14.551724 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 20:09:14.553386 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 20:09:14.555113 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 20:09:14.556726 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 20:09:14.558330 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 20:09:14.558435 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 20:09:14.560211 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:09:14.561274 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:09:14.562733 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 20:09:14.562878 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:09:14.564360 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 20:09:14.564458 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 20:09:14.566605 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 20:09:14.566715 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 20:09:14.567758 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 20:09:14.567873 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 20:09:14.577351 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 20:09:14.578098 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 20:09:14.578258 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:09:14.580028 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 20:09:14.584539 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 20:09:14.584665 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:09:14.585532 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 20:09:14.585630 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 20:09:14.590616 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 20:09:14.590731 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 20:09:14.602846 ignition[999]: INFO : Ignition 2.19.0
Apr 13 20:09:14.602846 ignition[999]: INFO : Stage: umount
Apr 13 20:09:14.602846 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:09:14.602846 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 13 20:09:14.602846 ignition[999]: INFO : umount: umount passed
Apr 13 20:09:14.602846 ignition[999]: INFO : Ignition finished successfully
Apr 13 20:09:14.604067 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 13 20:09:14.604201 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 13 20:09:14.605629 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 13 20:09:14.605709 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 13 20:09:14.607810 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 13 20:09:14.607920 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 13 20:09:14.611560 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 13 20:09:14.611613 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 13 20:09:14.612536 systemd[1]: Stopped target network.target - Network.
Apr 13 20:09:14.613288 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 13 20:09:14.613344 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 20:09:14.614822 systemd[1]: Stopped target paths.target - Path Units.
Apr 13 20:09:14.615526 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 13 20:09:14.619918 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:09:14.643071 systemd[1]: Stopped target slices.target - Slice Units.
Apr 13 20:09:14.644463 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 13 20:09:14.645937 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 20:09:14.645997 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 20:09:14.647518 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 20:09:14.647565 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 20:09:14.649169 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 20:09:14.649225 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 20:09:14.650601 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 20:09:14.650651 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 20:09:14.652467 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 20:09:14.654379 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 20:09:14.657277 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 20:09:14.657889 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 20:09:14.658000 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 20:09:14.659322 systemd-networkd[770]: eth0: DHCPv6 lease lost
Apr 13 20:09:14.662146 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 20:09:14.662263 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 20:09:14.664759 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 20:09:14.664912 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 20:09:14.668194 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 20:09:14.668245 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:09:14.669214 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 20:09:14.669271 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 20:09:14.675948 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 20:09:14.677089 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 20:09:14.677146 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 20:09:14.680025 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 20:09:14.680077 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:09:14.681647 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 20:09:14.681698 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:09:14.683168 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 20:09:14.683216 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:09:14.684876 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:09:14.697188 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 20:09:14.697364 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 20:09:14.702533 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 20:09:14.702718 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:09:14.704592 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 20:09:14.704642 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:09:14.705829 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 20:09:14.705931 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:09:14.707491 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 20:09:14.707545 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 20:09:14.709747 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 20:09:14.709800 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 20:09:14.711386 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 20:09:14.711435 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:09:14.720236 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 20:09:14.721959 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 20:09:14.722021 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:09:14.722811 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:09:14.723799 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:09:14.726601 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 20:09:14.726705 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 20:09:14.731915 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 20:09:14.739279 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 20:09:14.745167 systemd[1]: Switching root.
Apr 13 20:09:14.784989 systemd-journald[178]: Journal stopped
Apr 13 20:09:15.962761 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Apr 13 20:09:15.962786 kernel: SELinux: policy capability network_peer_controls=1
Apr 13 20:09:15.962799 kernel: SELinux: policy capability open_perms=1
Apr 13 20:09:15.962808 kernel: SELinux: policy capability extended_socket_class=1
Apr 13 20:09:15.962821 kernel: SELinux: policy capability always_check_network=0
Apr 13 20:09:15.962830 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 13 20:09:15.962854 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 13 20:09:15.962864 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 13 20:09:15.962873 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 13 20:09:15.962882 kernel: audit: type=1403 audit(1776110954.948:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 13 20:09:15.962892 systemd[1]: Successfully loaded SELinux policy in 50.507ms.
Apr 13 20:09:15.962906 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.712ms.
Apr 13 20:09:15.962917 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 20:09:15.962928 systemd[1]: Detected virtualization kvm.
Apr 13 20:09:15.962938 systemd[1]: Detected architecture x86-64.
Apr 13 20:09:15.962948 systemd[1]: Detected first boot.
Apr 13 20:09:15.962960 systemd[1]: Initializing machine ID from random generator.
Apr 13 20:09:15.962970 zram_generator::config[1041]: No configuration found.
Apr 13 20:09:15.962983 systemd[1]: Populated /etc with preset unit settings.
Apr 13 20:09:15.962993 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 13 20:09:15.963003 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 13 20:09:15.963013 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 13 20:09:15.963023 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 13 20:09:15.963036 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 13 20:09:15.963046 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 13 20:09:15.963056 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 13 20:09:15.963066 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 13 20:09:15.963076 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 13 20:09:15.963086 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 13 20:09:15.963096 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 13 20:09:15.963108 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:09:15.963118 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:09:15.963128 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 13 20:09:15.963138 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 13 20:09:15.963148 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 13 20:09:15.963158 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 20:09:15.963168 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 13 20:09:15.963178 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:09:15.963190 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 13 20:09:15.963201 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 13 20:09:15.963214 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 13 20:09:15.963225 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 13 20:09:15.963235 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:09:15.963245 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 20:09:15.963255 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 20:09:15.963265 systemd[1]: Reached target swap.target - Swaps.
Apr 13 20:09:15.963278 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 13 20:09:15.963288 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 13 20:09:15.963298 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:09:15.963308 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:09:15.963318 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:09:15.963331 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 13 20:09:15.963341 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 13 20:09:15.963351 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 13 20:09:15.963362 systemd[1]: Mounting media.mount - External Media Directory...
Apr 13 20:09:15.963372 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:09:15.963382 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 13 20:09:15.963392 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 13 20:09:15.963402 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 13 20:09:15.963415 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 13 20:09:15.963426 systemd[1]: Reached target machines.target - Containers.
Apr 13 20:09:15.963436 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 13 20:09:15.963447 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:09:15.963457 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 20:09:15.963467 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 13 20:09:15.963477 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:09:15.963487 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 20:09:15.963500 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:09:15.963510 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 13 20:09:15.963520 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:09:15.963531 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 13 20:09:15.963541 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 13 20:09:15.963551 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 13 20:09:15.963561 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 13 20:09:15.963571 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 13 20:09:15.963584 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 20:09:15.963593 kernel: loop: module loaded
Apr 13 20:09:15.963603 kernel: fuse: init (API version 7.39)
Apr 13 20:09:15.963612 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 20:09:15.963623 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 13 20:09:15.963633 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 13 20:09:15.963644 kernel: ACPI: bus type drm_connector registered
Apr 13 20:09:15.963672 systemd-journald[1131]: Collecting audit messages is disabled.
Apr 13 20:09:15.963694 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 20:09:15.963705 systemd-journald[1131]: Journal started
Apr 13 20:09:15.963724 systemd-journald[1131]: Runtime Journal (/run/log/journal/a9651cfbd2264ae69d18e74a3c4d5550) is 8.0M, max 78.3M, 70.3M free.
Apr 13 20:09:15.565733 systemd[1]: Queued start job for default target multi-user.target.
Apr 13 20:09:15.582585 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 13 20:09:15.583120 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 13 20:09:15.972912 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 13 20:09:15.972949 systemd[1]: Stopped verity-setup.service.
Apr 13 20:09:15.972965 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:09:15.978827 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 20:09:15.980445 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 13 20:09:15.981372 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 13 20:09:15.982306 systemd[1]: Mounted media.mount - External Media Directory.
Apr 13 20:09:15.983201 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 13 20:09:15.984144 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 13 20:09:15.985083 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 13 20:09:15.986160 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 13 20:09:15.987286 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:09:15.988549 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 13 20:09:15.988766 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 13 20:09:15.989930 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:09:15.990159 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:09:15.991334 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 20:09:15.991540 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 20:09:15.992770 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:09:15.993103 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:09:15.994278 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 13 20:09:15.994500 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 13 20:09:15.995610 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:09:15.995825 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:09:15.997005 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:09:15.998330 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 13 20:09:15.999449 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 13 20:09:16.014491 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 13 20:09:16.022431 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 13 20:09:16.026909 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 13 20:09:16.029168 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 13 20:09:16.029201 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 20:09:16.031287 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 13 20:09:16.035021 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 13 20:09:16.042948 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 13 20:09:16.066397 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:09:16.074157 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 13 20:09:16.075982 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 13 20:09:16.076815 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 20:09:16.077980 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 13 20:09:16.078783 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 20:09:16.086013 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 20:09:16.100014 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 13 20:09:16.110658 systemd-journald[1131]: Time spent on flushing to /var/log/journal/a9651cfbd2264ae69d18e74a3c4d5550 is 38.736ms for 971 entries.
Apr 13 20:09:16.110658 systemd-journald[1131]: System Journal (/var/log/journal/a9651cfbd2264ae69d18e74a3c4d5550) is 8.0M, max 195.6M, 187.6M free.
Apr 13 20:09:16.191539 systemd-journald[1131]: Received client request to flush runtime journal.
Apr 13 20:09:16.197779 kernel: loop0: detected capacity change from 0 to 8
Apr 13 20:09:16.104981 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 13 20:09:16.113930 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:09:16.114974 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 13 20:09:16.212824 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 13 20:09:16.116584 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 13 20:09:16.119040 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 13 20:09:16.134032 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 13 20:09:16.137247 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 13 20:09:16.141974 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 13 20:09:16.155341 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 13 20:09:16.188672 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:09:16.196742 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 13 20:09:16.204399 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 13 20:09:16.211936 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 13 20:09:16.214703 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 13 20:09:16.238876 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 13 20:09:16.250180 kernel: loop1: detected capacity change from 0 to 140768
Apr 13 20:09:16.248634 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 20:09:16.299867 kernel: loop2: detected capacity change from 0 to 217752
Apr 13 20:09:16.314168 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Apr 13 20:09:16.314189 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Apr 13 20:09:16.321633 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:09:16.352861 kernel: loop3: detected capacity change from 0 to 142488
Apr 13 20:09:16.402144 kernel: loop4: detected capacity change from 0 to 8
Apr 13 20:09:16.410037 kernel: loop5: detected capacity change from 0 to 140768
Apr 13 20:09:16.435707 kernel: loop6: detected capacity change from 0 to 217752
Apr 13 20:09:16.457870 kernel: loop7: detected capacity change from 0 to 142488
Apr 13 20:09:16.482575 (sd-merge)[1186]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Apr 13 20:09:16.485150 (sd-merge)[1186]: Merged extensions into '/usr'.
Apr 13 20:09:16.495934 systemd[1]: Reloading requested from client PID 1161 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 13 20:09:16.495951 systemd[1]: Reloading...
Apr 13 20:09:16.572860 zram_generator::config[1208]: No configuration found.
Apr 13 20:09:16.642767 ldconfig[1156]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 13 20:09:16.729065 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:09:16.771495 systemd[1]: Reloading finished in 275 ms.
Apr 13 20:09:16.803800 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 13 20:09:16.805127 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 13 20:09:16.806210 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 13 20:09:16.821022 systemd[1]: Starting ensure-sysext.service...
Apr 13 20:09:16.825961 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 20:09:16.832017 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:09:16.835026 systemd[1]: Reloading requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)...
Apr 13 20:09:16.835040 systemd[1]: Reloading...
Apr 13 20:09:16.846178 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 13 20:09:16.846548 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 13 20:09:16.847650 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 13 20:09:16.847956 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Apr 13 20:09:16.848034 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Apr 13 20:09:16.851851 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 20:09:16.851859 systemd-tmpfiles[1257]: Skipping /boot
Apr 13 20:09:16.864161 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 20:09:16.864238 systemd-tmpfiles[1257]: Skipping /boot
Apr 13 20:09:16.888612 systemd-udevd[1258]: Using default interface naming scheme 'v255'.
Apr 13 20:09:16.927924 zram_generator::config[1284]: No configuration found.
Apr 13 20:09:17.128918 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 13 20:09:17.142871 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1293)
Apr 13 20:09:17.152904 kernel: ACPI: button: Power Button [PWRF]
Apr 13 20:09:17.157872 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 13 20:09:17.162652 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 13 20:09:17.162925 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 13 20:09:17.194720 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:09:17.253876 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 13 20:09:17.277878 kernel: EDAC MC: Ver: 3.0.0
Apr 13 20:09:17.279883 kernel: mousedev: PS/2 mouse device common for all mice
Apr 13 20:09:17.284967 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 13 20:09:17.286496 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 13 20:09:17.286576 systemd[1]: Reloading finished in 451 ms.
Apr 13 20:09:17.300410 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:09:17.306263 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:09:17.321105 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 13 20:09:17.331165 systemd[1]: Finished ensure-sysext.service.
Apr 13 20:09:17.345029 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:09:17.351075 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 20:09:17.358279 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 13 20:09:17.359206 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:09:17.363829 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 13 20:09:17.372863 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:09:17.376416 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 20:09:17.381013 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:09:17.391030 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:09:17.392211 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:09:17.396310 lvm[1365]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 20:09:17.396802 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 13 20:09:17.407271 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 13 20:09:17.412004 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 20:09:17.431946 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 20:09:17.444036 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 13 20:09:17.453421 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 13 20:09:17.458415 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:09:17.460447 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:09:17.463931 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 13 20:09:17.465369 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:09:17.465594 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:09:17.468465 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 20:09:17.468698 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 20:09:17.471356 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:09:17.471555 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:09:17.473730 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:09:17.473998 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:09:17.475270 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 13 20:09:17.486489 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 13 20:09:17.500749 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:09:17.514428 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 13 20:09:17.515574 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 20:09:17.515687 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 20:09:17.518976 augenrules[1402]: No rules
Apr 13 20:09:17.518668 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 13 20:09:17.522785 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 13 20:09:17.524978 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 20:09:17.525387 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 20:09:17.527989 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 13 20:09:17.559935 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 13 20:09:17.560988 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 20:09:17.562744 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 13 20:09:17.568143 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 13 20:09:17.605203 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 13 20:09:17.684332 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:09:17.718118 systemd-networkd[1379]: lo: Link UP
Apr 13 20:09:17.718127 systemd-networkd[1379]: lo: Gained carrier
Apr 13 20:09:17.720017 systemd-networkd[1379]: Enumeration completed
Apr 13 20:09:17.720355 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 20:09:17.722291 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:09:17.722303 systemd-networkd[1379]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 20:09:17.724070 systemd-networkd[1379]: eth0: Link UP
Apr 13 20:09:17.724082 systemd-networkd[1379]: eth0: Gained carrier
Apr 13 20:09:17.724095 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:09:17.728901 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 13 20:09:17.734952 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 13 20:09:17.735801 systemd[1]: Reached target time-set.target - System Time Set.
Apr 13 20:09:17.739312 systemd-resolved[1380]: Positive Trust Anchors:
Apr 13 20:09:17.739583 systemd-resolved[1380]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 20:09:17.739653 systemd-resolved[1380]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 20:09:17.744192 systemd-resolved[1380]: Defaulting to hostname 'linux'.
Apr 13 20:09:17.746167 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 20:09:17.747023 systemd[1]: Reached target network.target - Network.
Apr 13 20:09:17.747758 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:09:17.748555 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 20:09:17.749416 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 13 20:09:17.750278 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 13 20:09:17.751340 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 13 20:09:17.752308 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 13 20:09:17.753125 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 13 20:09:17.753925 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 13 20:09:17.753959 systemd[1]: Reached target paths.target - Path Units.
Apr 13 20:09:17.754649 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 20:09:17.755919 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 13 20:09:17.758358 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 13 20:09:17.764762 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 13 20:09:17.766137 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 13 20:09:17.766991 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 20:09:17.767720 systemd[1]: Reached target basic.target - Basic System.
Apr 13 20:09:17.768478 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 13 20:09:17.768517 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 13 20:09:17.769562 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 13 20:09:17.772983 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 13 20:09:17.777604 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 13 20:09:17.780944 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 13 20:09:17.786575 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 13 20:09:17.787371 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 13 20:09:17.789981 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 13 20:09:17.801064 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 13 20:09:17.807953 jq[1431]: false
Apr 13 20:09:17.808499 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 13 20:09:17.814291 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 13 20:09:17.826179 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 13 20:09:17.828341 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 13 20:09:17.828819 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 13 20:09:17.830725 systemd[1]: Starting update-engine.service - Update Engine...
Apr 13 20:09:17.835006 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 13 20:09:17.844462 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 13 20:09:17.845946 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 13 20:09:17.870373 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 13 20:09:17.870587 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 13 20:09:17.895198 dbus-daemon[1430]: [system] SELinux support is enabled
Apr 13 20:09:17.901131 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 13 20:09:17.905514 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 13 20:09:17.905549 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 13 20:09:17.906394 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 13 20:09:17.906415 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 13 20:09:17.912633 tar[1443]: linux-amd64/LICENSE
Apr 13 20:09:17.912633 tar[1443]: linux-amd64/helm
Apr 13 20:09:17.914230 jq[1441]: true
Apr 13 20:09:17.914799 (ntainerd)[1455]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 13 20:09:17.927274 extend-filesystems[1432]: Found loop4
Apr 13 20:09:17.927274 extend-filesystems[1432]: Found loop5
Apr 13 20:09:17.927274 extend-filesystems[1432]: Found loop6
Apr 13 20:09:17.927274 extend-filesystems[1432]: Found loop7
Apr 13 20:09:17.927274 extend-filesystems[1432]: Found sda
Apr 13 20:09:17.927274 extend-filesystems[1432]: Found sda1
Apr 13 20:09:17.927274 extend-filesystems[1432]: Found sda2
Apr 13 20:09:17.927274 extend-filesystems[1432]: Found sda3
Apr 13 20:09:17.960480 update_engine[1440]: I20260413 20:09:17.929652 1440 main.cc:92] Flatcar Update Engine starting
Apr 13 20:09:17.960480 update_engine[1440]: I20260413 20:09:17.959427 1440 update_check_scheduler.cc:74] Next update check in 7m15s
Apr 13 20:09:17.929181 systemd-logind[1439]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 13 20:09:17.962392 extend-filesystems[1432]: Found usr
Apr 13 20:09:17.962392 extend-filesystems[1432]: Found sda4
Apr 13 20:09:17.962392 extend-filesystems[1432]: Found sda6
Apr 13 20:09:17.962392 extend-filesystems[1432]: Found sda7
Apr 13 20:09:17.962392 extend-filesystems[1432]: Found sda9
Apr 13 20:09:17.962392 extend-filesystems[1432]: Checking size of /dev/sda9
Apr 13 20:09:17.984178 jq[1463]: true
Apr 13 20:09:17.931889 systemd-logind[1439]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 13 20:09:17.985562 coreos-metadata[1429]: Apr 13 20:09:17.969 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Apr 13 20:09:17.985791 extend-filesystems[1432]: Resized partition /dev/sda9
Apr 13 20:09:18.005080 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Apr 13 20:09:17.934491 systemd-logind[1439]: New seat seat0.
Apr 13 20:09:18.005217 extend-filesystems[1471]: resize2fs 1.47.1 (20-May-2024)
Apr 13 20:09:17.939482 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 13 20:09:17.950176 systemd[1]: Started update-engine.service - Update Engine.
Apr 13 20:09:17.959010 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 13 20:09:17.962094 systemd[1]: motdgen.service: Deactivated successfully.
Apr 13 20:09:17.962766 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 13 20:09:18.106433 bash[1490]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 20:09:18.107526 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 13 20:09:18.120142 systemd[1]: Starting sshkeys.service...
Apr 13 20:09:18.128864 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1306)
Apr 13 20:09:18.180120 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 13 20:09:18.193240 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 13 20:09:18.195640 containerd[1455]: time="2026-04-13T20:09:18.195570653Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 13 20:09:18.266575 containerd[1455]: time="2026-04-13T20:09:18.266476924Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:09:18.281187 containerd[1455]: time="2026-04-13T20:09:18.281143898Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:09:18.281187 containerd[1455]: time="2026-04-13T20:09:18.281179728Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 13 20:09:18.281289 containerd[1455]: time="2026-04-13T20:09:18.281197038Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 13 20:09:18.281392 containerd[1455]: time="2026-04-13T20:09:18.281369339Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 13 20:09:18.281436 containerd[1455]: time="2026-04-13T20:09:18.281392019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 13 20:09:18.281891 containerd[1455]: time="2026-04-13T20:09:18.281468309Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:09:18.281891 containerd[1455]: time="2026-04-13T20:09:18.281481109Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:09:18.281891 containerd[1455]: time="2026-04-13T20:09:18.281663069Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:09:18.281891 containerd[1455]: time="2026-04-13T20:09:18.281677979Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 13 20:09:18.281891 containerd[1455]: time="2026-04-13T20:09:18.281690539Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:09:18.281891 containerd[1455]: time="2026-04-13T20:09:18.281702779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 13 20:09:18.281891 containerd[1455]: time="2026-04-13T20:09:18.281791339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:09:18.289598 containerd[1455]: time="2026-04-13T20:09:18.289120166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:09:18.289598 containerd[1455]: time="2026-04-13T20:09:18.289277527Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:09:18.289598 containerd[1455]: time="2026-04-13T20:09:18.289292417Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 13 20:09:18.289598 containerd[1455]: time="2026-04-13T20:09:18.289390157Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 13 20:09:18.289598 containerd[1455]: time="2026-04-13T20:09:18.289448297Z" level=info msg="metadata content store policy set" policy=shared
Apr 13 20:09:18.295552 coreos-metadata[1496]: Apr 13 20:09:18.295 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Apr 13 20:09:18.299391 locksmithd[1466]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 13 20:09:18.300711 containerd[1455]: time="2026-04-13T20:09:18.300577628Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 13 20:09:18.300711 containerd[1455]: time="2026-04-13T20:09:18.300620148Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 13 20:09:18.300711 containerd[1455]: time="2026-04-13T20:09:18.300635658Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 13 20:09:18.300711 containerd[1455]: time="2026-04-13T20:09:18.300655238Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 13 20:09:18.300711 containerd[1455]: time="2026-04-13T20:09:18.300668198Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 13 20:09:18.300828 containerd[1455]: time="2026-04-13T20:09:18.300792608Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 13 20:09:18.301054 containerd[1455]: time="2026-04-13T20:09:18.301019088Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 13 20:09:18.302680 containerd[1455]: time="2026-04-13T20:09:18.301128558Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 13 20:09:18.302680 containerd[1455]: time="2026-04-13T20:09:18.301146938Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"...
type=io.containerd.sandbox.store.v1 Apr 13 20:09:18.302680 containerd[1455]: time="2026-04-13T20:09:18.301158308Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 13 20:09:18.302680 containerd[1455]: time="2026-04-13T20:09:18.301180668Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 13 20:09:18.302680 containerd[1455]: time="2026-04-13T20:09:18.301196108Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 13 20:09:18.302680 containerd[1455]: time="2026-04-13T20:09:18.301206328Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 13 20:09:18.302680 containerd[1455]: time="2026-04-13T20:09:18.301218528Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 13 20:09:18.302680 containerd[1455]: time="2026-04-13T20:09:18.301231708Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 13 20:09:18.302680 containerd[1455]: time="2026-04-13T20:09:18.301243479Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 13 20:09:18.302680 containerd[1455]: time="2026-04-13T20:09:18.301254349Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 13 20:09:18.302680 containerd[1455]: time="2026-04-13T20:09:18.301264329Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 13 20:09:18.302680 containerd[1455]: time="2026-04-13T20:09:18.301281079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Apr 13 20:09:18.302680 containerd[1455]: time="2026-04-13T20:09:18.301292579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 13 20:09:18.302680 containerd[1455]: time="2026-04-13T20:09:18.301303299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 13 20:09:18.302945 containerd[1455]: time="2026-04-13T20:09:18.301315609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 13 20:09:18.302945 containerd[1455]: time="2026-04-13T20:09:18.301326159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 13 20:09:18.302945 containerd[1455]: time="2026-04-13T20:09:18.301345599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 13 20:09:18.302945 containerd[1455]: time="2026-04-13T20:09:18.301356359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 13 20:09:18.302945 containerd[1455]: time="2026-04-13T20:09:18.301372619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 13 20:09:18.302945 containerd[1455]: time="2026-04-13T20:09:18.301383569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 13 20:09:18.302945 containerd[1455]: time="2026-04-13T20:09:18.301397389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 13 20:09:18.302945 containerd[1455]: time="2026-04-13T20:09:18.301407659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 13 20:09:18.302945 containerd[1455]: time="2026-04-13T20:09:18.301418669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Apr 13 20:09:18.302945 containerd[1455]: time="2026-04-13T20:09:18.301428909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 13 20:09:18.302945 containerd[1455]: time="2026-04-13T20:09:18.301442049Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 13 20:09:18.302945 containerd[1455]: time="2026-04-13T20:09:18.301459139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 13 20:09:18.302945 containerd[1455]: time="2026-04-13T20:09:18.301469309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 13 20:09:18.302945 containerd[1455]: time="2026-04-13T20:09:18.301479389Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 13 20:09:18.303163 containerd[1455]: time="2026-04-13T20:09:18.301522499Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 13 20:09:18.303163 containerd[1455]: time="2026-04-13T20:09:18.301537229Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 13 20:09:18.303163 containerd[1455]: time="2026-04-13T20:09:18.301546579Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 13 20:09:18.303163 containerd[1455]: time="2026-04-13T20:09:18.301557379Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 13 20:09:18.303163 containerd[1455]: time="2026-04-13T20:09:18.301565869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Apr 13 20:09:18.303163 containerd[1455]: time="2026-04-13T20:09:18.301576779Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 13 20:09:18.303163 containerd[1455]: time="2026-04-13T20:09:18.301585789Z" level=info msg="NRI interface is disabled by configuration." Apr 13 20:09:18.303163 containerd[1455]: time="2026-04-13T20:09:18.301594809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 13 20:09:18.303294 containerd[1455]: time="2026-04-13T20:09:18.301796859Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 13 20:09:18.303294 containerd[1455]: time="2026-04-13T20:09:18.301877859Z" level=info msg="Connect containerd service" Apr 13 20:09:18.303294 containerd[1455]: time="2026-04-13T20:09:18.301915859Z" level=info msg="using legacy CRI server" Apr 13 20:09:18.303294 containerd[1455]: time="2026-04-13T20:09:18.301922429Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 13 20:09:18.303294 containerd[1455]: time="2026-04-13T20:09:18.301993779Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 13 20:09:18.303294 containerd[1455]: time="2026-04-13T20:09:18.302503010Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Apr 13 20:09:18.303294 containerd[1455]: time="2026-04-13T20:09:18.302816830Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 13 20:09:18.303294 containerd[1455]: time="2026-04-13T20:09:18.302891800Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 13 20:09:18.303294 containerd[1455]: time="2026-04-13T20:09:18.302974560Z" level=info msg="Start subscribing containerd event" Apr 13 20:09:18.303294 containerd[1455]: time="2026-04-13T20:09:18.303011210Z" level=info msg="Start recovering state" Apr 13 20:09:18.303294 containerd[1455]: time="2026-04-13T20:09:18.303061970Z" level=info msg="Start event monitor" Apr 13 20:09:18.303294 containerd[1455]: time="2026-04-13T20:09:18.303071230Z" level=info msg="Start snapshots syncer" Apr 13 20:09:18.303294 containerd[1455]: time="2026-04-13T20:09:18.303079200Z" level=info msg="Start cni network conf syncer for default" Apr 13 20:09:18.303294 containerd[1455]: time="2026-04-13T20:09:18.303086590Z" level=info msg="Start streaming server" Apr 13 20:09:18.303294 containerd[1455]: time="2026-04-13T20:09:18.303136220Z" level=info msg="containerd successfully booted in 0.110481s" Apr 13 20:09:18.303944 systemd[1]: Started containerd.service - containerd container runtime. Apr 13 20:09:18.329595 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 13 20:09:18.341865 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Apr 13 20:09:18.355814 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 13 20:09:18.357163 extend-filesystems[1471]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 13 20:09:18.357163 extend-filesystems[1471]: old_desc_blocks = 1, new_desc_blocks = 10 Apr 13 20:09:18.357163 extend-filesystems[1471]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. 
Apr 13 20:09:18.368303 extend-filesystems[1432]: Resized filesystem in /dev/sda9
Apr 13 20:09:18.367130 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 13 20:09:18.369443 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 13 20:09:18.369701 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 13 20:09:18.386942 systemd[1]: issuegen.service: Deactivated successfully.
Apr 13 20:09:18.387163 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 13 20:09:18.396073 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 13 20:09:18.407937 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 13 20:09:18.417375 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 13 20:09:18.421172 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 13 20:09:18.424161 systemd[1]: Reached target getty.target - Login Prompts.
Apr 13 20:09:18.543006 dbus-daemon[1430]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1379 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 13 20:09:18.543899 systemd-networkd[1379]: eth0: DHCPv4 address 172.239.193.192/24, gateway 172.239.193.1 acquired from 23.205.167.133
Apr 13 20:09:18.546392 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection.
Apr 13 20:09:18.556991 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 13 20:09:18.624179 dbus-daemon[1430]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 13 20:09:18.624457 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 13 20:09:18.625495 dbus-daemon[1430]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1528 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 13 20:09:18.635579 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 13 20:09:18.643954 polkitd[1529]: Started polkitd version 121
Apr 13 20:09:18.648242 polkitd[1529]: Loading rules from directory /etc/polkit-1/rules.d
Apr 13 20:09:18.648530 polkitd[1529]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 13 20:09:18.649734 polkitd[1529]: Finished loading, compiling and executing 2 rules
Apr 13 20:09:18.650181 dbus-daemon[1430]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 13 20:09:18.650432 systemd[1]: Started polkit.service - Authorization Manager.
Apr 13 20:09:19.517257 systemd-resolved[1380]: Clock change detected. Flushing caches.
Apr 13 20:09:19.517354 systemd-timesyncd[1384]: Contacted time server 44.190.5.123:123 (0.flatcar.pool.ntp.org).
Apr 13 20:09:19.517404 systemd-timesyncd[1384]: Initial clock synchronization to Mon 2026-04-13 20:09:19.517206 UTC.
Apr 13 20:09:19.519220 polkitd[1529]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 13 20:09:19.528606 systemd-hostnamed[1528]: Hostname set to <172-239-193-192> (transient)
Apr 13 20:09:19.528714 systemd-resolved[1380]: System hostname changed to '172-239-193-192'.
Apr 13 20:09:19.543404 tar[1443]: linux-amd64/README.md
Apr 13 20:09:19.555412 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 13 20:09:19.847602 coreos-metadata[1429]: Apr 13 20:09:19.847 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Apr 13 20:09:19.941962 coreos-metadata[1429]: Apr 13 20:09:19.941 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
Apr 13 20:09:20.103478 systemd-networkd[1379]: eth0: Gained IPv6LL
Apr 13 20:09:20.107620 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 13 20:09:20.109665 systemd[1]: Reached target network-online.target - Network is Online.
Apr 13 20:09:20.122676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:09:20.155301 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 13 20:09:20.171561 coreos-metadata[1429]: Apr 13 20:09:20.171 INFO Fetch successful
Apr 13 20:09:20.171678 coreos-metadata[1429]: Apr 13 20:09:20.171 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
Apr 13 20:09:20.182720 coreos-metadata[1496]: Apr 13 20:09:20.182 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Apr 13 20:09:20.195124 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 13 20:09:20.276584 coreos-metadata[1496]: Apr 13 20:09:20.276 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
Apr 13 20:09:20.408798 coreos-metadata[1496]: Apr 13 20:09:20.408 INFO Fetch successful
Apr 13 20:09:20.425494 update-ssh-keys[1556]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 20:09:20.426380 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 13 20:09:20.428694 systemd[1]: Finished sshkeys.service.
Apr 13 20:09:20.479191 coreos-metadata[1429]: Apr 13 20:09:20.477 INFO Fetch successful
Apr 13 20:09:20.591320 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 13 20:09:20.592618 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 13 20:09:21.095864 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:09:21.097145 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 13 20:09:21.099964 systemd[1]: Startup finished in 1.009s (kernel) + 8.230s (initrd) + 5.334s (userspace) = 14.574s.
Apr 13 20:09:21.139312 (kubelet)[1584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:09:21.587821 kubelet[1584]: E0413 20:09:21.587682 1584 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:09:21.591572 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:09:21.591785 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:09:22.326145 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 13 20:09:22.331085 systemd[1]: Started sshd@0-172.239.193.192:22-50.85.169.122:33616.service - OpenSSH per-connection server daemon (50.85.169.122:33616).
Apr 13 20:09:23.047081 sshd[1596]: Accepted publickey for core from 50.85.169.122 port 33616 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:09:23.049188 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:09:23.057278 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 13 20:09:23.069256 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 13 20:09:23.072018 systemd-logind[1439]: New session 1 of user core.
Apr 13 20:09:23.082776 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 13 20:09:23.089328 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 13 20:09:23.099530 (systemd)[1600]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 13 20:09:23.254534 systemd[1600]: Queued start job for default target default.target.
Apr 13 20:09:23.265353 systemd[1600]: Created slice app.slice - User Application Slice.
Apr 13 20:09:23.265402 systemd[1600]: Reached target paths.target - Paths.
Apr 13 20:09:23.265423 systemd[1600]: Reached target timers.target - Timers.
Apr 13 20:09:23.267941 systemd[1600]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 13 20:09:23.286309 systemd[1600]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 13 20:09:23.286488 systemd[1600]: Reached target sockets.target - Sockets.
Apr 13 20:09:23.286506 systemd[1600]: Reached target basic.target - Basic System.
Apr 13 20:09:23.286553 systemd[1600]: Reached target default.target - Main User Target.
Apr 13 20:09:23.286598 systemd[1600]: Startup finished in 176ms.
Apr 13 20:09:23.287203 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 13 20:09:23.298146 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 13 20:09:23.839864 systemd[1]: Started sshd@1-172.239.193.192:22-50.85.169.122:33620.service - OpenSSH per-connection server daemon (50.85.169.122:33620).
Apr 13 20:09:24.594775 sshd[1611]: Accepted publickey for core from 50.85.169.122 port 33620 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:09:24.595583 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:09:24.602028 systemd-logind[1439]: New session 2 of user core.
Apr 13 20:09:24.609061 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 13 20:09:25.098792 sshd[1611]: pam_unix(sshd:session): session closed for user core
Apr 13 20:09:25.103685 systemd[1]: sshd@1-172.239.193.192:22-50.85.169.122:33620.service: Deactivated successfully.
Apr 13 20:09:25.106345 systemd[1]: session-2.scope: Deactivated successfully.
Apr 13 20:09:25.108090 systemd-logind[1439]: Session 2 logged out. Waiting for processes to exit.
Apr 13 20:09:25.109521 systemd-logind[1439]: Removed session 2.
Apr 13 20:09:25.232939 systemd[1]: Started sshd@2-172.239.193.192:22-50.85.169.122:33622.service - OpenSSH per-connection server daemon (50.85.169.122:33622).
Apr 13 20:09:25.940071 sshd[1618]: Accepted publickey for core from 50.85.169.122 port 33622 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:09:25.942178 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:09:25.949617 systemd-logind[1439]: New session 3 of user core.
Apr 13 20:09:25.961077 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 13 20:09:26.437435 sshd[1618]: pam_unix(sshd:session): session closed for user core
Apr 13 20:09:26.441765 systemd[1]: sshd@2-172.239.193.192:22-50.85.169.122:33622.service: Deactivated successfully.
Apr 13 20:09:26.444003 systemd[1]: session-3.scope: Deactivated successfully.
Apr 13 20:09:26.445461 systemd-logind[1439]: Session 3 logged out. Waiting for processes to exit.
Apr 13 20:09:26.446910 systemd-logind[1439]: Removed session 3.
Apr 13 20:09:26.573179 systemd[1]: Started sshd@3-172.239.193.192:22-50.85.169.122:33636.service - OpenSSH per-connection server daemon (50.85.169.122:33636).
Apr 13 20:09:27.281619 sshd[1625]: Accepted publickey for core from 50.85.169.122 port 33636 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:09:27.283585 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:09:27.288419 systemd-logind[1439]: New session 4 of user core.
Apr 13 20:09:27.296026 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 13 20:09:27.783778 sshd[1625]: pam_unix(sshd:session): session closed for user core
Apr 13 20:09:27.787776 systemd-logind[1439]: Session 4 logged out. Waiting for processes to exit.
Apr 13 20:09:27.788677 systemd[1]: sshd@3-172.239.193.192:22-50.85.169.122:33636.service: Deactivated successfully.
Apr 13 20:09:27.790593 systemd[1]: session-4.scope: Deactivated successfully.
Apr 13 20:09:27.791429 systemd-logind[1439]: Removed session 4.
Apr 13 20:09:27.907021 systemd[1]: Started sshd@4-172.239.193.192:22-50.85.169.122:33638.service - OpenSSH per-connection server daemon (50.85.169.122:33638).
Apr 13 20:09:28.618928 sshd[1632]: Accepted publickey for core from 50.85.169.122 port 33638 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:09:28.620550 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:09:28.626068 systemd-logind[1439]: New session 5 of user core.
Apr 13 20:09:28.637070 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 13 20:09:29.013037 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 13 20:09:29.013551 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:09:29.033842 sudo[1635]: pam_unix(sudo:session): session closed for user root
Apr 13 20:09:29.148206 sshd[1632]: pam_unix(sshd:session): session closed for user core
Apr 13 20:09:29.151979 systemd[1]: sshd@4-172.239.193.192:22-50.85.169.122:33638.service: Deactivated successfully.
Apr 13 20:09:29.154074 systemd[1]: session-5.scope: Deactivated successfully.
Apr 13 20:09:29.155415 systemd-logind[1439]: Session 5 logged out. Waiting for processes to exit.
Apr 13 20:09:29.156708 systemd-logind[1439]: Removed session 5.
Apr 13 20:09:29.271355 systemd[1]: Started sshd@5-172.239.193.192:22-50.85.169.122:33650.service - OpenSSH per-connection server daemon (50.85.169.122:33650).
Apr 13 20:09:29.982501 sshd[1640]: Accepted publickey for core from 50.85.169.122 port 33650 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:09:29.984039 sshd[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:09:29.987943 systemd-logind[1439]: New session 6 of user core.
Apr 13 20:09:29.997995 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 13 20:09:30.371799 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 13 20:09:30.372309 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:09:30.377865 sudo[1644]: pam_unix(sudo:session): session closed for user root
Apr 13 20:09:30.385899 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 13 20:09:30.386327 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:09:30.403117 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 13 20:09:30.405673 auditctl[1647]: No rules
Apr 13 20:09:30.406227 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 13 20:09:30.406515 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 13 20:09:30.408939 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 20:09:30.452967 augenrules[1665]: No rules
Apr 13 20:09:30.454683 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 20:09:30.456026 sudo[1643]: pam_unix(sudo:session): session closed for user root
Apr 13 20:09:30.571094 sshd[1640]: pam_unix(sshd:session): session closed for user core
Apr 13 20:09:30.574787 systemd[1]: sshd@5-172.239.193.192:22-50.85.169.122:33650.service: Deactivated successfully.
Apr 13 20:09:30.576589 systemd[1]: session-6.scope: Deactivated successfully.
Apr 13 20:09:30.577941 systemd-logind[1439]: Session 6 logged out. Waiting for processes to exit.
Apr 13 20:09:30.578862 systemd-logind[1439]: Removed session 6.
Apr 13 20:09:30.700155 systemd[1]: Started sshd@6-172.239.193.192:22-50.85.169.122:45736.service - OpenSSH per-connection server daemon (50.85.169.122:45736).
Apr 13 20:09:31.411590 sshd[1673]: Accepted publickey for core from 50.85.169.122 port 45736 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:09:31.412244 sshd[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:09:31.416536 systemd-logind[1439]: New session 7 of user core.
Apr 13 20:09:31.422002 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 13 20:09:31.801579 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 13 20:09:31.802135 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:09:31.802999 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 13 20:09:31.811361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:09:32.006992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:09:32.010594 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:09:32.047822 kubelet[1694]: E0413 20:09:32.047782 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:09:32.054611 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:09:32.054805 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:09:32.110102 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 13 20:09:32.110313 (dockerd)[1708]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 13 20:09:32.364158 dockerd[1708]: time="2026-04-13T20:09:32.363445829Z" level=info msg="Starting up"
Apr 13 20:09:32.457832 dockerd[1708]: time="2026-04-13T20:09:32.457793193Z" level=info msg="Loading containers: start."
Apr 13 20:09:32.587897 kernel: Initializing XFRM netlink socket
Apr 13 20:09:32.671542 systemd-networkd[1379]: docker0: Link UP
Apr 13 20:09:32.688611 dockerd[1708]: time="2026-04-13T20:09:32.688575014Z" level=info msg="Loading containers: done."
Apr 13 20:09:32.706372 dockerd[1708]: time="2026-04-13T20:09:32.706319371Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 13 20:09:32.706563 dockerd[1708]: time="2026-04-13T20:09:32.706436152Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 13 20:09:32.706603 dockerd[1708]: time="2026-04-13T20:09:32.706574702Z" level=info msg="Daemon has completed initialization"
Apr 13 20:09:32.738494 dockerd[1708]: time="2026-04-13T20:09:32.738430824Z" level=info msg="API listen on /run/docker.sock"
Apr 13 20:09:32.738814 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 13 20:09:33.198222 containerd[1455]: time="2026-04-13T20:09:33.198176353Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.3\""
Apr 13 20:09:33.759650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2471827462.mount: Deactivated successfully.
Apr 13 20:09:34.652702 containerd[1455]: time="2026-04-13T20:09:34.652648057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:34.653691 containerd[1455]: time="2026-04-13T20:09:34.653669838Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.3: active requests=0, bytes read=27569702"
Apr 13 20:09:34.654706 containerd[1455]: time="2026-04-13T20:09:34.654406759Z" level=info msg="ImageCreate event name:\"sha256:0f2b96c93465f04111c58c3fc41ad0ed2e16b5b3c4b6282b84dc951ad0ea4d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:34.656777 containerd[1455]: time="2026-04-13T20:09:34.656749531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6c6e2571f98e738015a39ed21305ab4166a3e2873f9cc01d7fa58371cf0f5d30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:34.657829 containerd[1455]: time="2026-04-13T20:09:34.657799943Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.3\" with image id \"sha256:0f2b96c93465f04111c58c3fc41ad0ed2e16b5b3c4b6282b84dc951ad0ea4d66\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6c6e2571f98e738015a39ed21305ab4166a3e2873f9cc01d7fa58371cf0f5d30\", size \"27566295\" in 1.45958913s"
Apr 13 20:09:34.657904 containerd[1455]: time="2026-04-13T20:09:34.657834283Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.3\" returns image reference \"sha256:0f2b96c93465f04111c58c3fc41ad0ed2e16b5b3c4b6282b84dc951ad0ea4d66\""
Apr 13 20:09:34.658672 containerd[1455]: time="2026-04-13T20:09:34.658646053Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.3\""
Apr 13 20:09:35.657986 containerd[1455]: time="2026-04-13T20:09:35.657932962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:35.658849 containerd[1455]: time="2026-04-13T20:09:35.658816263Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.3: active requests=0, bytes read=21449601"
Apr 13 20:09:35.659369 containerd[1455]: time="2026-04-13T20:09:35.659329464Z" level=info msg="ImageCreate event name:\"sha256:0eb506280f9bca2258673771e7029de0d5e92881f0fbaebd4a835e7e302b7d27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:35.661795 containerd[1455]: time="2026-04-13T20:09:35.661762906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23a24aafa10831eb47477b0b31a525ee8a4a99d2c17251aac46c43be8201ec59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:35.665273 containerd[1455]: time="2026-04-13T20:09:35.664893619Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.3\" with image id \"sha256:0eb506280f9bca2258673771e7029de0d5e92881f0fbaebd4a835e7e302b7d27\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23a24aafa10831eb47477b0b31a525ee8a4a99d2c17251aac46c43be8201ec59\", size \"23014443\" in 1.006201846s"
Apr 13 20:09:35.665273 containerd[1455]: time="2026-04-13T20:09:35.664922829Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.3\" returns image reference \"sha256:0eb506280f9bca2258673771e7029de0d5e92881f0fbaebd4a835e7e302b7d27\""
Apr 13 20:09:35.665679 containerd[1455]: time="2026-04-13T20:09:35.665655680Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.3\""
Apr 13 20:09:36.620121 containerd[1455]: time="2026-04-13T20:09:36.620060274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:36.621044 containerd[1455]: time="2026-04-13T20:09:36.621009445Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.3: active requests=0, bytes read=15548432"
Apr 13 20:09:36.621688 containerd[1455]: time="2026-04-13T20:09:36.621400216Z" level=info msg="ImageCreate event name:\"sha256:87c9b0e4f80d3039b60fbfaf2a4d423e6a891df883a55adb58b8d5b37a4cb23c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:36.623839 containerd[1455]: time="2026-04-13T20:09:36.623814338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:7070dff574916315268ab483f1088a107b1f3a8a1a87f3e3645933111ade7013\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:36.624861 containerd[1455]: time="2026-04-13T20:09:36.624829589Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.3\" with image id \"sha256:87c9b0e4f80d3039b60fbfaf2a4d423e6a891df883a55adb58b8d5b37a4cb23c\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:7070dff574916315268ab483f1088a107b1f3a8a1a87f3e3645933111ade7013\", size \"17113292\" in 959.142499ms"
Apr 13 20:09:36.624992 containerd[1455]: time="2026-04-13T20:09:36.624976109Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.3\" returns image reference \"sha256:87c9b0e4f80d3039b60fbfaf2a4d423e6a891df883a55adb58b8d5b37a4cb23c\""
Apr 13 20:09:36.625847 containerd[1455]: time="2026-04-13T20:09:36.625828410Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.3\""
Apr 13 20:09:37.566790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount25337514.mount: Deactivated successfully.
Apr 13 20:09:37.790739 containerd[1455]: time="2026-04-13T20:09:37.790695885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:37.791588 containerd[1455]: time="2026-04-13T20:09:37.791483725Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.3: active requests=0, bytes read=25685327"
Apr 13 20:09:37.792229 containerd[1455]: time="2026-04-13T20:09:37.792157616Z" level=info msg="ImageCreate event name:\"sha256:53ed370019059b0cdce5a02a20f8aca81f977e34956368c7f1b7ce9709398b79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:37.793711 containerd[1455]: time="2026-04-13T20:09:37.793672278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8743aec6a360aedcb7a076cbecea367b072abe1bfade2e2098650df502e2bc89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:37.794814 containerd[1455]: time="2026-04-13T20:09:37.794315358Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.3\" with image id \"sha256:53ed370019059b0cdce5a02a20f8aca81f977e34956368c7f1b7ce9709398b79\", repo tag \"registry.k8s.io/kube-proxy:v1.35.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:8743aec6a360aedcb7a076cbecea367b072abe1bfade2e2098650df502e2bc89\", size \"25684340\" in 1.168460018s"
Apr 13 20:09:37.794814 containerd[1455]: time="2026-04-13T20:09:37.794345738Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.3\" returns image reference \"sha256:53ed370019059b0cdce5a02a20f8aca81f977e34956368c7f1b7ce9709398b79\""
Apr 13 20:09:37.794814 containerd[1455]: time="2026-04-13T20:09:37.794778629Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Apr 13 20:09:38.390699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1652837462.mount: Deactivated successfully.
Apr 13 20:09:39.244548 containerd[1455]: time="2026-04-13T20:09:39.244476498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:39.245566 containerd[1455]: time="2026-04-13T20:09:39.245527769Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556548"
Apr 13 20:09:39.247410 containerd[1455]: time="2026-04-13T20:09:39.246192260Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:39.248987 containerd[1455]: time="2026-04-13T20:09:39.248945743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:39.250116 containerd[1455]: time="2026-04-13T20:09:39.250006874Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.455203445s"
Apr 13 20:09:39.250116 containerd[1455]: time="2026-04-13T20:09:39.250033984Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Apr 13 20:09:39.250815 containerd[1455]: time="2026-04-13T20:09:39.250793364Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 13 20:09:39.781824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount432912533.mount: Deactivated successfully.
Apr 13 20:09:39.787119 containerd[1455]: time="2026-04-13T20:09:39.787045151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:39.787786 containerd[1455]: time="2026-04-13T20:09:39.787753991Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321224"
Apr 13 20:09:39.788205 containerd[1455]: time="2026-04-13T20:09:39.788165152Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:39.791157 containerd[1455]: time="2026-04-13T20:09:39.790158924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:39.791157 containerd[1455]: time="2026-04-13T20:09:39.790923304Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 540.10215ms"
Apr 13 20:09:39.791157 containerd[1455]: time="2026-04-13T20:09:39.790946655Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 13 20:09:39.791628 containerd[1455]: time="2026-04-13T20:09:39.791595565Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Apr 13 20:09:40.381676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount358017223.mount: Deactivated successfully.
Apr 13 20:09:41.050824 containerd[1455]: time="2026-04-13T20:09:41.049861353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:41.050824 containerd[1455]: time="2026-04-13T20:09:41.050677924Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23643885"
Apr 13 20:09:41.050824 containerd[1455]: time="2026-04-13T20:09:41.050784024Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:41.054496 containerd[1455]: time="2026-04-13T20:09:41.054458758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:41.056100 containerd[1455]: time="2026-04-13T20:09:41.056061669Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.264436514s"
Apr 13 20:09:41.056145 containerd[1455]: time="2026-04-13T20:09:41.056102219Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Apr 13 20:09:42.133685 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 13 20:09:42.143525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:09:42.159393 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 13 20:09:42.159534 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 13 20:09:42.160050 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:09:42.176382 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:09:42.213681 systemd[1]: Reloading requested from client PID 2072 ('systemctl') (unit session-7.scope)...
Apr 13 20:09:42.213707 systemd[1]: Reloading...
Apr 13 20:09:42.357900 zram_generator::config[2112]: No configuration found.
Apr 13 20:09:42.459590 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:09:42.531013 systemd[1]: Reloading finished in 316 ms.
Apr 13 20:09:42.584465 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 13 20:09:42.584563 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 13 20:09:42.584822 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:09:42.587542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:09:42.750133 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:09:42.757328 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 13 20:09:42.791381 kubelet[2165]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 13 20:09:43.238233 kubelet[2165]: I0413 20:09:43.238005 2165 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 13 20:09:43.238233 kubelet[2165]: I0413 20:09:43.238094 2165 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 13 20:09:43.239452 kubelet[2165]: I0413 20:09:43.239426 2165 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 13 20:09:43.239452 kubelet[2165]: I0413 20:09:43.239447 2165 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 13 20:09:43.239771 kubelet[2165]: I0413 20:09:43.239750 2165 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 13 20:09:43.244342 kubelet[2165]: E0413 20:09:43.244317 2165 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.239.193.192:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.239.193.192:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 13 20:09:43.244898 kubelet[2165]: I0413 20:09:43.244770 2165 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 13 20:09:43.249974 kubelet[2165]: E0413 20:09:43.249947 2165 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 13 20:09:43.250103 kubelet[2165]: I0413 20:09:43.250087 2165 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 13 20:09:43.254088 kubelet[2165]: I0413 20:09:43.253964 2165 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 13 20:09:43.254820 kubelet[2165]: I0413 20:09:43.254785 2165 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 13 20:09:43.255001 kubelet[2165]: I0413 20:09:43.254814 2165 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-193-192","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 13 20:09:43.255001 kubelet[2165]: I0413 20:09:43.254999 2165 topology_manager.go:143] "Creating topology manager with none policy"
Apr 13 20:09:43.255119 kubelet[2165]: I0413 20:09:43.255008 2165 container_manager_linux.go:308] "Creating device plugin manager"
Apr 13 20:09:43.255119 kubelet[2165]: I0413 20:09:43.255102 2165 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 13 20:09:43.256463 kubelet[2165]: I0413 20:09:43.256448 2165 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 13 20:09:43.256655 kubelet[2165]: I0413 20:09:43.256642 2165 kubelet.go:482] "Attempting to sync node with API server"
Apr 13 20:09:43.256698 kubelet[2165]: I0413 20:09:43.256660 2165 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 13 20:09:43.256698 kubelet[2165]: I0413 20:09:43.256689 2165 kubelet.go:394] "Adding apiserver pod source"
Apr 13 20:09:43.256753 kubelet[2165]: I0413 20:09:43.256700 2165 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 13 20:09:43.259191 kubelet[2165]: I0413 20:09:43.258853 2165 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 13 20:09:43.261403 kubelet[2165]: I0413 20:09:43.261117 2165 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 13 20:09:43.261403 kubelet[2165]: I0413 20:09:43.261145 2165 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 13 20:09:43.261403 kubelet[2165]: W0413 20:09:43.261203 2165 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 13 20:09:43.267616 kubelet[2165]: I0413 20:09:43.267594 2165 server.go:1257] "Started kubelet"
Apr 13 20:09:43.267816 kubelet[2165]: I0413 20:09:43.267791 2165 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 13 20:09:43.268805 kubelet[2165]: I0413 20:09:43.268588 2165 server.go:317] "Adding debug handlers to kubelet server"
Apr 13 20:09:43.272390 kubelet[2165]: I0413 20:09:43.271901 2165 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 13 20:09:43.272390 kubelet[2165]: I0413 20:09:43.271957 2165 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 13 20:09:43.272390 kubelet[2165]: I0413 20:09:43.272182 2165 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 13 20:09:43.274276 kubelet[2165]: E0413 20:09:43.272317 2165 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.239.193.192:6443/api/v1/namespaces/default/events\": dial tcp 172.239.193.192:6443: connect: connection refused" event="&Event{ObjectMeta:{172-239-193-192.18a603876f555394 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-239-193-192,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-239-193-192,},FirstTimestamp:2026-04-13 20:09:43.26757058 +0000 UTC m=+0.506235307,LastTimestamp:2026-04-13 20:09:43.26757058 +0000 UTC m=+0.506235307,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-239-193-192,}"
Apr 13 20:09:43.275423 kubelet[2165]: I0413 20:09:43.275122 2165 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 13 20:09:43.276676 kubelet[2165]: I0413 20:09:43.276656 2165 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 13 20:09:43.282200 kubelet[2165]: E0413 20:09:43.281236 2165 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-239-193-192\" not found"
Apr 13 20:09:43.282200 kubelet[2165]: I0413 20:09:43.281259 2165 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 13 20:09:43.282200 kubelet[2165]: I0413 20:09:43.281376 2165 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 13 20:09:43.282200 kubelet[2165]: I0413 20:09:43.281412 2165 reconciler.go:29] "Reconciler: start to sync state"
Apr 13 20:09:43.282200 kubelet[2165]: E0413 20:09:43.281832 2165 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.193.192:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-193-192?timeout=10s\": dial tcp 172.239.193.192:6443: connect: connection refused" interval="200ms"
Apr 13 20:09:43.282714 kubelet[2165]: I0413 20:09:43.282696 2165 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 13 20:09:43.284300 kubelet[2165]: I0413 20:09:43.284284 2165 factory.go:223] Registration of the containerd container factory successfully
Apr 13 20:09:43.284369 kubelet[2165]: I0413 20:09:43.284360 2165 factory.go:223] Registration of the systemd container factory successfully
Apr 13 20:09:43.292754 kubelet[2165]: E0413 20:09:43.292725 2165 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 13 20:09:43.304566 kubelet[2165]: I0413 20:09:43.304544 2165 cpu_manager.go:225] "Starting" policy="none"
Apr 13 20:09:43.304862 kubelet[2165]: I0413 20:09:43.304847 2165 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 13 20:09:43.304968 kubelet[2165]: I0413 20:09:43.304953 2165 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 13 20:09:43.308488 kubelet[2165]: I0413 20:09:43.308463 2165 policy_none.go:50] "Start"
Apr 13 20:09:43.308488 kubelet[2165]: I0413 20:09:43.308488 2165 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 13 20:09:43.308599 kubelet[2165]: I0413 20:09:43.308508 2165 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 13 20:09:43.309775 kubelet[2165]: I0413 20:09:43.309728 2165 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 13 20:09:43.311278 kubelet[2165]: I0413 20:09:43.310368 2165 policy_none.go:44] "Start"
Apr 13 20:09:43.312338 kubelet[2165]: I0413 20:09:43.312316 2165 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 13 20:09:43.312338 kubelet[2165]: I0413 20:09:43.312337 2165 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 13 20:09:43.312402 kubelet[2165]: I0413 20:09:43.312353 2165 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 13 20:09:43.312426 kubelet[2165]: E0413 20:09:43.312409 2165 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 13 20:09:43.318662 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 13 20:09:43.337268 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 13 20:09:43.341363 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 13 20:09:43.352981 kubelet[2165]: E0413 20:09:43.352953 2165 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 13 20:09:43.353693 kubelet[2165]: I0413 20:09:43.353346 2165 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 13 20:09:43.353693 kubelet[2165]: I0413 20:09:43.353364 2165 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 13 20:09:43.353995 kubelet[2165]: I0413 20:09:43.353957 2165 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 13 20:09:43.355309 kubelet[2165]: E0413 20:09:43.355289 2165 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 13 20:09:43.355493 kubelet[2165]: E0413 20:09:43.355477 2165 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-239-193-192\" not found"
Apr 13 20:09:43.457079 kubelet[2165]: I0413 20:09:43.457047 2165 kubelet_node_status.go:74] "Attempting to register node" node="172-239-193-192"
Apr 13 20:09:43.457373 kubelet[2165]: E0413 20:09:43.457350 2165 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.239.193.192:6443/api/v1/nodes\": dial tcp 172.239.193.192:6443: connect: connection refused" node="172-239-193-192"
Apr 13 20:09:43.465378 systemd[1]: Created slice kubepods-burstable-pod4ffa323953e814dcd093c53c779b439b.slice - libcontainer container kubepods-burstable-pod4ffa323953e814dcd093c53c779b439b.slice.
Apr 13 20:09:43.477354 kubelet[2165]: E0413 20:09:43.472692 2165 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-192\" not found" node="172-239-193-192" Apr 13 20:09:43.482849 kubelet[2165]: I0413 20:09:43.482530 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/732d5ed7921691c68d9fda72a0e551cc-k8s-certs\") pod \"kube-controller-manager-172-239-193-192\" (UID: \"732d5ed7921691c68d9fda72a0e551cc\") " pod="kube-system/kube-controller-manager-172-239-193-192" Apr 13 20:09:43.482849 kubelet[2165]: I0413 20:09:43.482559 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/732d5ed7921691c68d9fda72a0e551cc-kubeconfig\") pod \"kube-controller-manager-172-239-193-192\" (UID: \"732d5ed7921691c68d9fda72a0e551cc\") " pod="kube-system/kube-controller-manager-172-239-193-192" Apr 13 20:09:43.482849 kubelet[2165]: I0413 20:09:43.482586 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/732d5ed7921691c68d9fda72a0e551cc-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-193-192\" (UID: \"732d5ed7921691c68d9fda72a0e551cc\") " pod="kube-system/kube-controller-manager-172-239-193-192" Apr 13 20:09:43.482849 kubelet[2165]: I0413 20:09:43.482604 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ffa323953e814dcd093c53c779b439b-ca-certs\") pod \"kube-apiserver-172-239-193-192\" (UID: \"4ffa323953e814dcd093c53c779b439b\") " pod="kube-system/kube-apiserver-172-239-193-192" Apr 13 20:09:43.482849 kubelet[2165]: I0413 20:09:43.482617 2165 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ffa323953e814dcd093c53c779b439b-k8s-certs\") pod \"kube-apiserver-172-239-193-192\" (UID: \"4ffa323953e814dcd093c53c779b439b\") " pod="kube-system/kube-apiserver-172-239-193-192" Apr 13 20:09:43.483027 kubelet[2165]: I0413 20:09:43.482633 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ffa323953e814dcd093c53c779b439b-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-193-192\" (UID: \"4ffa323953e814dcd093c53c779b439b\") " pod="kube-system/kube-apiserver-172-239-193-192" Apr 13 20:09:43.483027 kubelet[2165]: I0413 20:09:43.482647 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/732d5ed7921691c68d9fda72a0e551cc-ca-certs\") pod \"kube-controller-manager-172-239-193-192\" (UID: \"732d5ed7921691c68d9fda72a0e551cc\") " pod="kube-system/kube-controller-manager-172-239-193-192" Apr 13 20:09:43.483027 kubelet[2165]: I0413 20:09:43.482658 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/732d5ed7921691c68d9fda72a0e551cc-flexvolume-dir\") pod \"kube-controller-manager-172-239-193-192\" (UID: \"732d5ed7921691c68d9fda72a0e551cc\") " pod="kube-system/kube-controller-manager-172-239-193-192" Apr 13 20:09:43.483027 kubelet[2165]: E0413 20:09:43.482820 2165 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.193.192:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-193-192?timeout=10s\": dial tcp 172.239.193.192:6443: connect: connection refused" interval="400ms" Apr 13 20:09:43.490556 systemd[1]: Created slice 
kubepods-burstable-pod732d5ed7921691c68d9fda72a0e551cc.slice - libcontainer container kubepods-burstable-pod732d5ed7921691c68d9fda72a0e551cc.slice. Apr 13 20:09:43.493918 kubelet[2165]: E0413 20:09:43.493896 2165 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-192\" not found" node="172-239-193-192" Apr 13 20:09:43.496173 systemd[1]: Created slice kubepods-burstable-pod868ad4a422290e8050c4a90c0ed92c12.slice - libcontainer container kubepods-burstable-pod868ad4a422290e8050c4a90c0ed92c12.slice. Apr 13 20:09:43.498034 kubelet[2165]: E0413 20:09:43.498007 2165 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-192\" not found" node="172-239-193-192" Apr 13 20:09:43.583631 kubelet[2165]: I0413 20:09:43.583589 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/868ad4a422290e8050c4a90c0ed92c12-kubeconfig\") pod \"kube-scheduler-172-239-193-192\" (UID: \"868ad4a422290e8050c4a90c0ed92c12\") " pod="kube-system/kube-scheduler-172-239-193-192" Apr 13 20:09:43.659673 kubelet[2165]: I0413 20:09:43.659643 2165 kubelet_node_status.go:74] "Attempting to register node" node="172-239-193-192" Apr 13 20:09:43.660183 kubelet[2165]: E0413 20:09:43.660156 2165 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.239.193.192:6443/api/v1/nodes\": dial tcp 172.239.193.192:6443: connect: connection refused" node="172-239-193-192" Apr 13 20:09:43.779777 kubelet[2165]: E0413 20:09:43.779629 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 13 20:09:43.780823 containerd[1455]: time="2026-04-13T20:09:43.780781013Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-172-239-193-192,Uid:4ffa323953e814dcd093c53c779b439b,Namespace:kube-system,Attempt:0,}"
Apr 13 20:09:43.796381 kubelet[2165]: E0413 20:09:43.796168 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:09:43.796674 containerd[1455]: time="2026-04-13T20:09:43.796569509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-193-192,Uid:732d5ed7921691c68d9fda72a0e551cc,Namespace:kube-system,Attempt:0,}"
Apr 13 20:09:43.800048 kubelet[2165]: E0413 20:09:43.799845 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:09:43.800269 containerd[1455]: time="2026-04-13T20:09:43.800225423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-193-192,Uid:868ad4a422290e8050c4a90c0ed92c12,Namespace:kube-system,Attempt:0,}"
Apr 13 20:09:43.884301 kubelet[2165]: E0413 20:09:43.884250 2165 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.193.192:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-193-192?timeout=10s\": dial tcp 172.239.193.192:6443: connect: connection refused" interval="800ms"
Apr 13 20:09:44.062801 kubelet[2165]: I0413 20:09:44.062691 2165 kubelet_node_status.go:74] "Attempting to register node" node="172-239-193-192"
Apr 13 20:09:44.063357 kubelet[2165]: E0413 20:09:44.063328 2165 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.239.193.192:6443/api/v1/nodes\": dial tcp 172.239.193.192:6443: connect: connection refused" node="172-239-193-192"
Apr 13 20:09:44.309067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1850855915.mount: Deactivated
successfully.
Apr 13 20:09:44.313934 containerd[1455]: time="2026-04-13T20:09:44.313790056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 13 20:09:44.314641 containerd[1455]: time="2026-04-13T20:09:44.314600977Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312062"
Apr 13 20:09:44.315447 containerd[1455]: time="2026-04-13T20:09:44.315343078Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 13 20:09:44.316261 containerd[1455]: time="2026-04-13T20:09:44.316122409Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 13 20:09:44.317385 containerd[1455]: time="2026-04-13T20:09:44.317141110Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 13 20:09:44.317385 containerd[1455]: time="2026-04-13T20:09:44.317214940Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 13 20:09:44.320401 containerd[1455]: time="2026-04-13T20:09:44.320178573Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 13 20:09:44.321234 containerd[1455]: time="2026-04-13T20:09:44.320997224Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 524.363125ms" Apr 13 20:09:44.322238 containerd[1455]: time="2026-04-13T20:09:44.322210965Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 541.341431ms" Apr 13 20:09:44.323008 containerd[1455]: time="2026-04-13T20:09:44.322975886Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 522.691153ms" Apr 13 20:09:44.323643 containerd[1455]: time="2026-04-13T20:09:44.323613416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:09:44.428329 containerd[1455]: time="2026-04-13T20:09:44.428128581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:44.429923 containerd[1455]: time="2026-04-13T20:09:44.429059292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:44.429923 containerd[1455]: time="2026-04-13T20:09:44.429077532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:09:44.429923 containerd[1455]: time="2026-04-13T20:09:44.429162522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:09:44.431111 containerd[1455]: time="2026-04-13T20:09:44.430846633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:09:44.431111 containerd[1455]: time="2026-04-13T20:09:44.430911643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:09:44.431111 containerd[1455]: time="2026-04-13T20:09:44.430925253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:09:44.431111 containerd[1455]: time="2026-04-13T20:09:44.431003104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:09:44.432819 containerd[1455]: time="2026-04-13T20:09:44.431748734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:09:44.432819 containerd[1455]: time="2026-04-13T20:09:44.431807084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:09:44.432819 containerd[1455]: time="2026-04-13T20:09:44.431822164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:09:44.432819 containerd[1455]: time="2026-04-13T20:09:44.431907834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:09:44.464053 systemd[1]: Started cri-containerd-abac1df32a286b48a1c3cd910b6794b2832bd44ed4813bee06072b85db29e8ed.scope - libcontainer container abac1df32a286b48a1c3cd910b6794b2832bd44ed4813bee06072b85db29e8ed.
Apr 13 20:09:44.468021 systemd[1]: Started cri-containerd-a2c6c52091f628a086ca526800f7476396c6881383705083b1d7154ec4f9c4a9.scope - libcontainer container a2c6c52091f628a086ca526800f7476396c6881383705083b1d7154ec4f9c4a9.
Apr 13 20:09:44.481071 systemd[1]: Started cri-containerd-d61bc7bae62200f9843c86af88971c3d12525e29cf1192e27c099f29ddf061e9.scope - libcontainer container d61bc7bae62200f9843c86af88971c3d12525e29cf1192e27c099f29ddf061e9.
Apr 13 20:09:44.542546 containerd[1455]: time="2026-04-13T20:09:44.542420825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-193-192,Uid:732d5ed7921691c68d9fda72a0e551cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"abac1df32a286b48a1c3cd910b6794b2832bd44ed4813bee06072b85db29e8ed\""
Apr 13 20:09:44.550899 kubelet[2165]: E0413 20:09:44.549979 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:09:44.552289 containerd[1455]: time="2026-04-13T20:09:44.552258305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-193-192,Uid:868ad4a422290e8050c4a90c0ed92c12,Namespace:kube-system,Attempt:0,} returns sandbox id \"d61bc7bae62200f9843c86af88971c3d12525e29cf1192e27c099f29ddf061e9\""
Apr 13 20:09:44.555590 kubelet[2165]: E0413 20:09:44.555276 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:09:44.557104 containerd[1455]: time="2026-04-13T20:09:44.556739199Z" level=info msg="RunPodSandbox
for &PodSandboxMetadata{Name:kube-apiserver-172-239-193-192,Uid:4ffa323953e814dcd093c53c779b439b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2c6c52091f628a086ca526800f7476396c6881383705083b1d7154ec4f9c4a9\""
Apr 13 20:09:44.557944 kubelet[2165]: E0413 20:09:44.557927 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:09:44.560079 containerd[1455]: time="2026-04-13T20:09:44.560048063Z" level=info msg="CreateContainer within sandbox \"abac1df32a286b48a1c3cd910b6794b2832bd44ed4813bee06072b85db29e8ed\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 13 20:09:44.560536 containerd[1455]: time="2026-04-13T20:09:44.560498463Z" level=info msg="CreateContainer within sandbox \"d61bc7bae62200f9843c86af88971c3d12525e29cf1192e27c099f29ddf061e9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 13 20:09:44.562391 containerd[1455]: time="2026-04-13T20:09:44.562366565Z" level=info msg="CreateContainer within sandbox \"a2c6c52091f628a086ca526800f7476396c6881383705083b1d7154ec4f9c4a9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 13 20:09:44.575074 containerd[1455]: time="2026-04-13T20:09:44.574959057Z" level=info msg="CreateContainer within sandbox \"d61bc7bae62200f9843c86af88971c3d12525e29cf1192e27c099f29ddf061e9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7f181fa1285c0ceaa318c2cb3910b7c3ec3ff7aea1beacea948ba414428740e4\""
Apr 13 20:09:44.576175 containerd[1455]: time="2026-04-13T20:09:44.576097189Z" level=info msg="StartContainer for \"7f181fa1285c0ceaa318c2cb3910b7c3ec3ff7aea1beacea948ba414428740e4\""
Apr 13 20:09:44.579048 containerd[1455]: time="2026-04-13T20:09:44.579016161Z" level=info msg="CreateContainer within sandbox \"a2c6c52091f628a086ca526800f7476396c6881383705083b1d7154ec4f9c4a9\" for
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"22e108f6ed6eb5a7c3825521267aa039e6fbae5c88d768ae525437bc7d62c6ad\""
Apr 13 20:09:44.579985 containerd[1455]: time="2026-04-13T20:09:44.579948532Z" level=info msg="CreateContainer within sandbox \"abac1df32a286b48a1c3cd910b6794b2832bd44ed4813bee06072b85db29e8ed\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1a3fe1c614caf6b6c4d0a2811289bd2ed62cd15f67dbb54dd075563217fc795b\""
Apr 13 20:09:44.580413 containerd[1455]: time="2026-04-13T20:09:44.580387083Z" level=info msg="StartContainer for \"1a3fe1c614caf6b6c4d0a2811289bd2ed62cd15f67dbb54dd075563217fc795b\""
Apr 13 20:09:44.582052 containerd[1455]: time="2026-04-13T20:09:44.581225274Z" level=info msg="StartContainer for \"22e108f6ed6eb5a7c3825521267aa039e6fbae5c88d768ae525437bc7d62c6ad\""
Apr 13 20:09:44.620056 systemd[1]: Started cri-containerd-22e108f6ed6eb5a7c3825521267aa039e6fbae5c88d768ae525437bc7d62c6ad.scope - libcontainer container 22e108f6ed6eb5a7c3825521267aa039e6fbae5c88d768ae525437bc7d62c6ad.
Apr 13 20:09:44.629018 systemd[1]: Started cri-containerd-1a3fe1c614caf6b6c4d0a2811289bd2ed62cd15f67dbb54dd075563217fc795b.scope - libcontainer container 1a3fe1c614caf6b6c4d0a2811289bd2ed62cd15f67dbb54dd075563217fc795b.
Apr 13 20:09:44.631976 systemd[1]: Started cri-containerd-7f181fa1285c0ceaa318c2cb3910b7c3ec3ff7aea1beacea948ba414428740e4.scope - libcontainer container 7f181fa1285c0ceaa318c2cb3910b7c3ec3ff7aea1beacea948ba414428740e4.
Apr 13 20:09:44.692908 kubelet[2165]: E0413 20:09:44.691104 2165 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.193.192:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-193-192?timeout=10s\": dial tcp 172.239.193.192:6443: connect: connection refused" interval="1.6s"
Apr 13 20:09:44.705024 containerd[1455]: time="2026-04-13T20:09:44.704760517Z" level=info msg="StartContainer for \"22e108f6ed6eb5a7c3825521267aa039e6fbae5c88d768ae525437bc7d62c6ad\" returns successfully"
Apr 13 20:09:44.716259 containerd[1455]: time="2026-04-13T20:09:44.715252348Z" level=info msg="StartContainer for \"1a3fe1c614caf6b6c4d0a2811289bd2ed62cd15f67dbb54dd075563217fc795b\" returns successfully"
Apr 13 20:09:44.738685 containerd[1455]: time="2026-04-13T20:09:44.738642591Z" level=info msg="StartContainer for \"7f181fa1285c0ceaa318c2cb3910b7c3ec3ff7aea1beacea948ba414428740e4\" returns successfully"
Apr 13 20:09:44.866665 kubelet[2165]: I0413 20:09:44.866559 2165 kubelet_node_status.go:74] "Attempting to register node" node="172-239-193-192"
Apr 13 20:09:45.324416 kubelet[2165]: E0413 20:09:45.324297 2165 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-192\" not found" node="172-239-193-192"
Apr 13 20:09:45.324839 kubelet[2165]: E0413 20:09:45.324700 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:09:45.327289 kubelet[2165]: E0413 20:09:45.327102 2165 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-192\" not found" node="172-239-193-192"
Apr 13 20:09:45.327289 kubelet[2165]: E0413 20:09:45.327195 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied
nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:09:45.329447 kubelet[2165]: E0413 20:09:45.329243 2165 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-193-192\" not found" node="172-239-193-192"
Apr 13 20:09:45.329547 kubelet[2165]: E0413 20:09:45.329533 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:09:46.015910 kubelet[2165]: I0413 20:09:46.015844 2165 kubelet_node_status.go:77] "Successfully registered node" node="172-239-193-192"
Apr 13 20:09:46.016304 kubelet[2165]: E0413 20:09:46.015922 2165 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"172-239-193-192\": node \"172-239-193-192\" not found"
Apr 13 20:09:46.025935 kubelet[2165]: E0413 20:09:46.025906 2165 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-239-193-192\" not found"
Apr 13 20:09:46.082191 kubelet[2165]: I0413 20:09:46.081932 2165 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-193-192"
Apr 13 20:09:46.086857 kubelet[2165]: E0413 20:09:46.086831 2165 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-193-192\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-239-193-192"
Apr 13 20:09:46.086857 kubelet[2165]: I0413 20:09:46.086853 2165 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-193-192"
Apr 13 20:09:46.088086 kubelet[2165]: E0413 20:09:46.088055 2165 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-239-193-192\" is forbidden: no PriorityClass with name system-node-critical was found"
pod="kube-system/kube-controller-manager-172-239-193-192" Apr 13 20:09:46.088086 kubelet[2165]: I0413 20:09:46.088084 2165 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-193-192" Apr 13 20:09:46.089085 kubelet[2165]: E0413 20:09:46.089059 2165 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-193-192\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-239-193-192" Apr 13 20:09:46.259956 kubelet[2165]: I0413 20:09:46.259916 2165 apiserver.go:52] "Watching apiserver" Apr 13 20:09:46.281959 kubelet[2165]: I0413 20:09:46.281788 2165 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 20:09:46.331481 kubelet[2165]: I0413 20:09:46.330785 2165 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-193-192" Apr 13 20:09:46.331481 kubelet[2165]: I0413 20:09:46.330930 2165 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-193-192" Apr 13 20:09:46.336296 kubelet[2165]: E0413 20:09:46.336260 2165 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-193-192\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-239-193-192" Apr 13 20:09:46.336527 kubelet[2165]: E0413 20:09:46.336498 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 13 20:09:46.337206 kubelet[2165]: E0413 20:09:46.337183 2165 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-193-192\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-239-193-192" Apr 13 20:09:46.337333 kubelet[2165]: E0413 20:09:46.337312 2165 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 13 20:09:47.332201 kubelet[2165]: I0413 20:09:47.331927 2165 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-193-192" Apr 13 20:09:47.339293 kubelet[2165]: E0413 20:09:47.339050 2165 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 13 20:09:47.723848 systemd[1]: Reloading requested from client PID 2455 ('systemctl') (unit session-7.scope)... Apr 13 20:09:47.723895 systemd[1]: Reloading... Apr 13 20:09:47.907937 zram_generator::config[2504]: No configuration found. Apr 13 20:09:48.004045 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:09:48.089096 systemd[1]: Reloading finished in 364 ms. Apr 13 20:09:48.137256 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:09:48.146473 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 20:09:48.146851 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:09:48.153051 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:09:48.305372 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:09:48.314370 (kubelet)[2546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 20:09:48.359909 kubelet[2546]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 13 20:09:48.367265 kubelet[2546]: I0413 20:09:48.367193 2546 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 13 20:09:48.367265 kubelet[2546]: I0413 20:09:48.367253 2546 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 13 20:09:48.367386 kubelet[2546]: I0413 20:09:48.367279 2546 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 13 20:09:48.367386 kubelet[2546]: I0413 20:09:48.367287 2546 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 13 20:09:48.367698 kubelet[2546]: I0413 20:09:48.367664 2546 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 13 20:09:48.370897 kubelet[2546]: I0413 20:09:48.369726 2546 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 13 20:09:48.373279 kubelet[2546]: I0413 20:09:48.373255 2546 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 13 20:09:48.375742 kubelet[2546]: E0413 20:09:48.375718 2546 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 13 20:09:48.375835 kubelet[2546]: I0413 20:09:48.375823 2546 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 13 20:09:48.379379 kubelet[2546]: I0413 20:09:48.379357 2546 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified.
Defaulting to /" Apr 13 20:09:48.379692 kubelet[2546]: I0413 20:09:48.379666 2546 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 20:09:48.379899 kubelet[2546]: I0413 20:09:48.379734 2546 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-193-192","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 20:09:48.380019 kubelet[2546]: I0413 20:09:48.380007 2546 topology_manager.go:143] "Creating topology manager with none policy" Apr 13 
20:09:48.380063 kubelet[2546]: I0413 20:09:48.380054 2546 container_manager_linux.go:308] "Creating device plugin manager"
Apr 13 20:09:48.380116 kubelet[2546]: I0413 20:09:48.380107 2546 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 13 20:09:48.380335 kubelet[2546]: I0413 20:09:48.380322 2546 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 13 20:09:48.380560 kubelet[2546]: I0413 20:09:48.380547 2546 kubelet.go:482] "Attempting to sync node with API server"
Apr 13 20:09:48.380619 kubelet[2546]: I0413 20:09:48.380609 2546 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 13 20:09:48.380678 kubelet[2546]: I0413 20:09:48.380669 2546 kubelet.go:394] "Adding apiserver pod source"
Apr 13 20:09:48.380723 kubelet[2546]: I0413 20:09:48.380715 2546 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 13 20:09:48.383201 kubelet[2546]: I0413 20:09:48.383178 2546 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 13 20:09:48.384475 kubelet[2546]: I0413 20:09:48.384455 2546 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 13 20:09:48.384515 kubelet[2546]: I0413 20:09:48.384486 2546 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 13 20:09:48.387921 kubelet[2546]: I0413 20:09:48.387535 2546 server.go:1257] "Started kubelet"
Apr 13 20:09:48.394129 kubelet[2546]: I0413 20:09:48.393539 2546 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 13 20:09:48.396198 kubelet[2546]: I0413 20:09:48.396184 2546 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13
20:09:48.397094 kubelet[2546]: I0413 20:09:48.396376 2546 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 13 20:09:48.400176 kubelet[2546]: I0413 20:09:48.400159 2546 server.go:317] "Adding debug handlers to kubelet server"
Apr 13 20:09:48.408904 kubelet[2546]: I0413 20:09:48.406585 2546 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 13 20:09:48.409035 kubelet[2546]: I0413 20:09:48.409018 2546 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 13 20:09:48.409278 kubelet[2546]: I0413 20:09:48.409263 2546 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 13 20:09:48.409360 kubelet[2546]: E0413 20:09:48.406791 2546 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-239-193-192\" not found"
Apr 13 20:09:48.409693 kubelet[2546]: I0413 20:09:48.406655 2546 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 13 20:09:48.409790 kubelet[2546]: I0413 20:09:48.406667 2546 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 13 20:09:48.409962 kubelet[2546]: I0413 20:09:48.409949 2546 reconciler.go:29] "Reconciler: start to sync state"
Apr 13 20:09:48.410251 kubelet[2546]: I0413 20:09:48.410218 2546 kubelet_network_linux.go:54] "Initialized iptables rules."
protocol="IPv6" Apr 13 20:09:48.415017 kubelet[2546]: I0413 20:09:48.415000 2546 factory.go:223] Registration of the systemd container factory successfully Apr 13 20:09:48.415165 kubelet[2546]: I0413 20:09:48.415146 2546 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 20:09:48.421418 kubelet[2546]: I0413 20:09:48.421400 2546 factory.go:223] Registration of the containerd container factory successfully Apr 13 20:09:48.426370 kubelet[2546]: I0413 20:09:48.426255 2546 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 13 20:09:48.426370 kubelet[2546]: I0413 20:09:48.426278 2546 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 13 20:09:48.426370 kubelet[2546]: I0413 20:09:48.426298 2546 kubelet.go:2501] "Starting kubelet main sync loop" Apr 13 20:09:48.426370 kubelet[2546]: E0413 20:09:48.426349 2546 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 20:09:48.495640 kubelet[2546]: I0413 20:09:48.495544 2546 cpu_manager.go:225] "Starting" policy="none" Apr 13 20:09:48.495640 kubelet[2546]: I0413 20:09:48.495561 2546 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 13 20:09:48.495640 kubelet[2546]: I0413 20:09:48.495581 2546 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 13 20:09:48.496169 kubelet[2546]: I0413 20:09:48.496100 2546 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 13 20:09:48.496342 kubelet[2546]: I0413 20:09:48.496118 2546 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 13 20:09:48.496342 kubelet[2546]: I0413 20:09:48.496327 2546 
policy_none.go:50] "Start"
Apr 13 20:09:48.496342 kubelet[2546]: I0413 20:09:48.496339 2546 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 13 20:09:48.496952 kubelet[2546]: I0413 20:09:48.496354 2546 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 13 20:09:48.497044 kubelet[2546]: I0413 20:09:48.497031 2546 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 13 20:09:48.497097 kubelet[2546]: I0413 20:09:48.497048 2546 policy_none.go:44] "Start"
Apr 13 20:09:48.502720 kubelet[2546]: E0413 20:09:48.502683 2546 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 13 20:09:48.503470 kubelet[2546]: I0413 20:09:48.503449 2546 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 13 20:09:48.504265 kubelet[2546]: I0413 20:09:48.503538 2546 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 13 20:09:48.504265 kubelet[2546]: I0413 20:09:48.503805 2546 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 13 20:09:48.507498 kubelet[2546]: E0413 20:09:48.507481 2546 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 13 20:09:48.527698 kubelet[2546]: I0413 20:09:48.526960 2546 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-193-192"
Apr 13 20:09:48.528017 kubelet[2546]: I0413 20:09:48.527027 2546 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-193-192"
Apr 13 20:09:48.529049 kubelet[2546]: I0413 20:09:48.527141 2546 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-193-192"
Apr 13 20:09:48.535989 kubelet[2546]: E0413 20:09:48.535922 2546 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-193-192\" already exists" pod="kube-system/kube-apiserver-172-239-193-192"
Apr 13 20:09:48.606244 kubelet[2546]: I0413 20:09:48.606207 2546 kubelet_node_status.go:74] "Attempting to register node" node="172-239-193-192"
Apr 13 20:09:48.614612 kubelet[2546]: I0413 20:09:48.614536 2546 kubelet_node_status.go:123] "Node was previously registered" node="172-239-193-192"
Apr 13 20:09:48.614612 kubelet[2546]: I0413 20:09:48.614623 2546 kubelet_node_status.go:77] "Successfully registered node" node="172-239-193-192"
Apr 13 20:09:48.711284 kubelet[2546]: I0413 20:09:48.710971 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ffa323953e814dcd093c53c779b439b-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-193-192\" (UID: \"4ffa323953e814dcd093c53c779b439b\") " pod="kube-system/kube-apiserver-172-239-193-192"
Apr 13 20:09:48.711284 kubelet[2546]: I0413 20:09:48.711027 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/732d5ed7921691c68d9fda72a0e551cc-flexvolume-dir\") pod \"kube-controller-manager-172-239-193-192\" (UID: \"732d5ed7921691c68d9fda72a0e551cc\") " pod="kube-system/kube-controller-manager-172-239-193-192"
Apr 13 20:09:48.711284 kubelet[2546]: I0413 20:09:48.711046 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/732d5ed7921691c68d9fda72a0e551cc-k8s-certs\") pod \"kube-controller-manager-172-239-193-192\" (UID: \"732d5ed7921691c68d9fda72a0e551cc\") " pod="kube-system/kube-controller-manager-172-239-193-192"
Apr 13 20:09:48.711284 kubelet[2546]: I0413 20:09:48.711064 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/732d5ed7921691c68d9fda72a0e551cc-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-193-192\" (UID: \"732d5ed7921691c68d9fda72a0e551cc\") " pod="kube-system/kube-controller-manager-172-239-193-192"
Apr 13 20:09:48.711284 kubelet[2546]: I0413 20:09:48.711102 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/868ad4a422290e8050c4a90c0ed92c12-kubeconfig\") pod \"kube-scheduler-172-239-193-192\" (UID: \"868ad4a422290e8050c4a90c0ed92c12\") " pod="kube-system/kube-scheduler-172-239-193-192"
Apr 13 20:09:48.711725 kubelet[2546]: I0413 20:09:48.711167 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ffa323953e814dcd093c53c779b439b-ca-certs\") pod \"kube-apiserver-172-239-193-192\" (UID: \"4ffa323953e814dcd093c53c779b439b\") " pod="kube-system/kube-apiserver-172-239-193-192"
Apr 13 20:09:48.711725 kubelet[2546]: I0413 20:09:48.711283 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ffa323953e814dcd093c53c779b439b-k8s-certs\") pod \"kube-apiserver-172-239-193-192\" (UID: \"4ffa323953e814dcd093c53c779b439b\") " pod="kube-system/kube-apiserver-172-239-193-192"
Apr 13 20:09:48.711725 kubelet[2546]: I0413 20:09:48.711391 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/732d5ed7921691c68d9fda72a0e551cc-ca-certs\") pod \"kube-controller-manager-172-239-193-192\" (UID: \"732d5ed7921691c68d9fda72a0e551cc\") " pod="kube-system/kube-controller-manager-172-239-193-192"
Apr 13 20:09:48.711725 kubelet[2546]: I0413 20:09:48.711417 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/732d5ed7921691c68d9fda72a0e551cc-kubeconfig\") pod \"kube-controller-manager-172-239-193-192\" (UID: \"732d5ed7921691c68d9fda72a0e551cc\") " pod="kube-system/kube-controller-manager-172-239-193-192"
Apr 13 20:09:48.837103 kubelet[2546]: E0413 20:09:48.836492 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:09:48.837683 kubelet[2546]: E0413 20:09:48.837666 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:09:48.838798 kubelet[2546]: E0413 20:09:48.838500 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:09:49.388067 kubelet[2546]: I0413 20:09:49.387758 2546 apiserver.go:52] "Watching apiserver"
Apr 13 20:09:49.410066 kubelet[2546]: I0413 20:09:49.409985 2546 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 13 20:09:49.460559 kubelet[2546]: I0413 20:09:49.460246 2546 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-193-192"
Apr 13 20:09:49.460828 kubelet[2546]: E0413 20:09:49.460679 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:09:49.461301 kubelet[2546]: I0413 20:09:49.461145 2546 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-193-192"
Apr 13 20:09:49.474824 kubelet[2546]: E0413 20:09:49.474784 2546 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-193-192\" already exists" pod="kube-system/kube-apiserver-172-239-193-192"
Apr 13 20:09:49.475004 kubelet[2546]: E0413 20:09:49.474981 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:09:49.475257 kubelet[2546]: E0413 20:09:49.475209 2546 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-193-192\" already exists" pod="kube-system/kube-scheduler-172-239-193-192"
Apr 13 20:09:49.475387 kubelet[2546]: E0413 20:09:49.475368 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:09:49.561851 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 13 20:09:49.891727 kubelet[2546]: I0413 20:09:49.891616 2546 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-239-193-192" podStartSLOduration=1.891597763 podStartE2EDuration="1.891597763s" podCreationTimestamp="2026-04-13 20:09:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:09:49.882109773 +0000 UTC m=+1.562791823" watchObservedRunningTime="2026-04-13 20:09:49.891597763 +0000 UTC m=+1.572279813"
Apr 13 20:09:49.900168 kubelet[2546]: I0413 20:09:49.899848 2546 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-239-193-192" podStartSLOduration=2.899825871 podStartE2EDuration="2.899825871s" podCreationTimestamp="2026-04-13 20:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:09:49.891963733 +0000 UTC m=+1.572645783" watchObservedRunningTime="2026-04-13 20:09:49.899825871 +0000 UTC m=+1.580507931"
Apr 13 20:09:50.461095 kubelet[2546]: E0413 20:09:50.460972 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:09:50.463892 kubelet[2546]: E0413 20:09:50.462206 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:09:50.463892 kubelet[2546]: E0413 20:09:50.462427 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:09:50.471747 kubelet[2546]: I0413 20:09:50.471705 2546 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-239-193-192" podStartSLOduration=2.471694303 podStartE2EDuration="2.471694303s" podCreationTimestamp="2026-04-13 20:09:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:09:49.900958892 +0000 UTC m=+1.581640952" watchObservedRunningTime="2026-04-13 20:10:50.471694303 +0000 UTC m=+2.152376383"
Apr 13 20:09:51.465519 kubelet[2546]: E0413 20:09:51.465455 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:09:53.366191 kubelet[2546]: I0413 20:09:53.366146 2546 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 13 20:09:53.366677 kubelet[2546]: I0413 20:09:53.366581 2546 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 13 20:09:53.366713 containerd[1455]: time="2026-04-13T20:09:53.366454803Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 13 20:09:54.474458 systemd[1]: Created slice kubepods-besteffort-pod5fd9ffdf_0f45_4b26_a11b_2736c357196a.slice - libcontainer container kubepods-besteffort-pod5fd9ffdf_0f45_4b26_a11b_2736c357196a.slice.
Apr 13 20:09:54.546495 kubelet[2546]: I0413 20:09:54.546376 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5fd9ffdf-0f45-4b26-a11b-2736c357196a-kube-proxy\") pod \"kube-proxy-ggjtc\" (UID: \"5fd9ffdf-0f45-4b26-a11b-2736c357196a\") " pod="kube-system/kube-proxy-ggjtc"
Apr 13 20:09:54.546495 kubelet[2546]: I0413 20:09:54.546406 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5fd9ffdf-0f45-4b26-a11b-2736c357196a-xtables-lock\") pod \"kube-proxy-ggjtc\" (UID: \"5fd9ffdf-0f45-4b26-a11b-2736c357196a\") " pod="kube-system/kube-proxy-ggjtc"
Apr 13 20:09:54.546495 kubelet[2546]: I0413 20:09:54.546422 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5fd9ffdf-0f45-4b26-a11b-2736c357196a-lib-modules\") pod \"kube-proxy-ggjtc\" (UID: \"5fd9ffdf-0f45-4b26-a11b-2736c357196a\") " pod="kube-system/kube-proxy-ggjtc"
Apr 13 20:09:54.546495 kubelet[2546]: I0413 20:09:54.546435 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrjkt\" (UniqueName: \"kubernetes.io/projected/5fd9ffdf-0f45-4b26-a11b-2736c357196a-kube-api-access-lrjkt\") pod \"kube-proxy-ggjtc\" (UID: \"5fd9ffdf-0f45-4b26-a11b-2736c357196a\") " pod="kube-system/kube-proxy-ggjtc"
Apr 13 20:09:54.617112 systemd[1]: Created slice kubepods-besteffort-pod536b1c79_9848_4c8e_8452_297881675d28.slice - libcontainer container kubepods-besteffort-pod536b1c79_9848_4c8e_8452_297881675d28.slice.
Apr 13 20:09:54.647910 kubelet[2546]: I0413 20:09:54.647284 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqcg7\" (UniqueName: \"kubernetes.io/projected/536b1c79-9848-4c8e-8452-297881675d28-kube-api-access-qqcg7\") pod \"tigera-operator-6cf4cccc57-fxqjt\" (UID: \"536b1c79-9848-4c8e-8452-297881675d28\") " pod="tigera-operator/tigera-operator-6cf4cccc57-fxqjt"
Apr 13 20:09:54.647910 kubelet[2546]: I0413 20:09:54.647323 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/536b1c79-9848-4c8e-8452-297881675d28-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-fxqjt\" (UID: \"536b1c79-9848-4c8e-8452-297881675d28\") " pod="tigera-operator/tigera-operator-6cf4cccc57-fxqjt"
Apr 13 20:09:54.779859 kubelet[2546]: E0413 20:09:54.779769 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:09:54.780480 containerd[1455]: time="2026-04-13T20:09:54.780451324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ggjtc,Uid:5fd9ffdf-0f45-4b26-a11b-2736c357196a,Namespace:kube-system,Attempt:0,}"
Apr 13 20:09:54.800736 containerd[1455]: time="2026-04-13T20:09:54.800498181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:09:54.800736 containerd[1455]: time="2026-04-13T20:09:54.800545441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:09:54.800736 containerd[1455]: time="2026-04-13T20:09:54.800563881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:09:54.800736 containerd[1455]: time="2026-04-13T20:09:54.800663581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:09:54.824020 systemd[1]: Started cri-containerd-bfc19fd74a9457276bc91937f5d792d236d0a02010c87545e804564208d5e6d4.scope - libcontainer container bfc19fd74a9457276bc91937f5d792d236d0a02010c87545e804564208d5e6d4.
Apr 13 20:09:54.848392 containerd[1455]: time="2026-04-13T20:09:54.848108882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ggjtc,Uid:5fd9ffdf-0f45-4b26-a11b-2736c357196a,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfc19fd74a9457276bc91937f5d792d236d0a02010c87545e804564208d5e6d4\""
Apr 13 20:09:54.848767 kubelet[2546]: E0413 20:09:54.848748 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:09:54.852328 containerd[1455]: time="2026-04-13T20:09:54.852239375Z" level=info msg="CreateContainer within sandbox \"bfc19fd74a9457276bc91937f5d792d236d0a02010c87545e804564208d5e6d4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 13 20:09:54.863545 containerd[1455]: time="2026-04-13T20:09:54.863512497Z" level=info msg="CreateContainer within sandbox \"bfc19fd74a9457276bc91937f5d792d236d0a02010c87545e804564208d5e6d4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b157f93ae49fa65b9edda16f210c43c6bf8d01b67c72427dff594821c9ad6a7b\""
Apr 13 20:09:54.864713 containerd[1455]: time="2026-04-13T20:09:54.864685755Z" level=info msg="StartContainer for \"b157f93ae49fa65b9edda16f210c43c6bf8d01b67c72427dff594821c9ad6a7b\""
Apr 13 20:09:54.898024 systemd[1]: Started cri-containerd-b157f93ae49fa65b9edda16f210c43c6bf8d01b67c72427dff594821c9ad6a7b.scope - libcontainer container b157f93ae49fa65b9edda16f210c43c6bf8d01b67c72427dff594821c9ad6a7b.
Apr 13 20:09:54.924908 containerd[1455]: time="2026-04-13T20:09:54.923835307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-fxqjt,Uid:536b1c79-9848-4c8e-8452-297881675d28,Namespace:tigera-operator,Attempt:0,}"
Apr 13 20:09:54.932516 containerd[1455]: time="2026-04-13T20:09:54.932263022Z" level=info msg="StartContainer for \"b157f93ae49fa65b9edda16f210c43c6bf8d01b67c72427dff594821c9ad6a7b\" returns successfully"
Apr 13 20:09:54.948124 containerd[1455]: time="2026-04-13T20:09:54.947859067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:09:54.948124 containerd[1455]: time="2026-04-13T20:09:54.947952187Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:09:54.948124 containerd[1455]: time="2026-04-13T20:09:54.947967116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:09:54.948124 containerd[1455]: time="2026-04-13T20:09:54.948034426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:09:54.977239 systemd[1]: Started cri-containerd-6028918db9c04695fe1b52a4d45b02a5ef005f9e0be88c7eb40a1e893dc01947.scope - libcontainer container 6028918db9c04695fe1b52a4d45b02a5ef005f9e0be88c7eb40a1e893dc01947.
Apr 13 20:09:55.018263 containerd[1455]: time="2026-04-13T20:09:55.018130603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-fxqjt,Uid:536b1c79-9848-4c8e-8452-297881675d28,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6028918db9c04695fe1b52a4d45b02a5ef005f9e0be88c7eb40a1e893dc01947\""
Apr 13 20:09:55.022650 containerd[1455]: time="2026-04-13T20:09:55.022165487Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 13 20:09:55.476086 kubelet[2546]: E0413 20:09:55.475959 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:09:55.699800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount173630066.mount: Deactivated successfully.
Apr 13 20:09:56.768720 containerd[1455]: time="2026-04-13T20:09:56.768670250Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:56.769488 containerd[1455]: time="2026-04-13T20:09:56.769399449Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Apr 13 20:09:56.770202 containerd[1455]: time="2026-04-13T20:09:56.770152859Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:56.772042 containerd[1455]: time="2026-04-13T20:09:56.772008226Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:56.773107 containerd[1455]: time="2026-04-13T20:09:56.772667565Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 1.750475258s"
Apr 13 20:09:56.773107 containerd[1455]: time="2026-04-13T20:09:56.772693555Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Apr 13 20:09:56.776714 containerd[1455]: time="2026-04-13T20:09:56.776678869Z" level=info msg="CreateContainer within sandbox \"6028918db9c04695fe1b52a4d45b02a5ef005f9e0be88c7eb40a1e893dc01947\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 13 20:09:56.786818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3003992352.mount: Deactivated successfully.
Apr 13 20:09:56.788052 containerd[1455]: time="2026-04-13T20:09:56.788029485Z" level=info msg="CreateContainer within sandbox \"6028918db9c04695fe1b52a4d45b02a5ef005f9e0be88c7eb40a1e893dc01947\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"82efaa567955649d73022949a7e1b9280fa66cb8688ddac7cc8bfd79160f0d7c\""
Apr 13 20:09:56.788982 containerd[1455]: time="2026-04-13T20:09:56.788909384Z" level=info msg="StartContainer for \"82efaa567955649d73022949a7e1b9280fa66cb8688ddac7cc8bfd79160f0d7c\""
Apr 13 20:09:56.816003 systemd[1]: Started cri-containerd-82efaa567955649d73022949a7e1b9280fa66cb8688ddac7cc8bfd79160f0d7c.scope - libcontainer container 82efaa567955649d73022949a7e1b9280fa66cb8688ddac7cc8bfd79160f0d7c.
Apr 13 20:09:56.843279 containerd[1455]: time="2026-04-13T20:09:56.843042331Z" level=info msg="StartContainer for \"82efaa567955649d73022949a7e1b9280fa66cb8688ddac7cc8bfd79160f0d7c\" returns successfully"
Apr 13 20:09:57.127845 kubelet[2546]: E0413 20:09:57.127802 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:09:57.136160 kubelet[2546]: I0413 20:09:57.136041 2546 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-ggjtc" podStartSLOduration=3.136030779 podStartE2EDuration="3.136030779s" podCreationTimestamp="2026-04-13 20:09:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:09:55.483691349 +0000 UTC m=+7.164373409" watchObservedRunningTime="2026-04-13 20:09:57.136030779 +0000 UTC m=+8.816712839"
Apr 13 20:09:57.488902 kubelet[2546]: I0413 20:09:57.488241 2546 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-fxqjt" podStartSLOduration=1.7361535639999999 podStartE2EDuration="3.488225139s" podCreationTimestamp="2026-04-13 20:09:54 +0000 UTC" firstStartedPulling="2026-04-13 20:09:55.021417879 +0000 UTC m=+6.702099929" lastFinishedPulling="2026-04-13 20:09:56.773489454 +0000 UTC m=+8.454171504" observedRunningTime="2026-04-13 20:09:57.488099339 +0000 UTC m=+9.168781389" watchObservedRunningTime="2026-04-13 20:09:57.488225139 +0000 UTC m=+9.168907189"
Apr 13 20:09:58.407906 kubelet[2546]: E0413 20:09:58.407672 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:09:59.865356 kubelet[2546]: E0413 20:09:59.865325 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:10:02.232362 sudo[1676]: pam_unix(sudo:session): session closed for user root
Apr 13 20:10:02.351120 sshd[1673]: pam_unix(sshd:session): session closed for user core
Apr 13 20:10:02.355052 systemd-logind[1439]: Session 7 logged out. Waiting for processes to exit.
Apr 13 20:10:02.358138 systemd[1]: sshd@6-172.239.193.192:22-50.85.169.122:45736.service: Deactivated successfully.
Apr 13 20:10:02.362224 systemd[1]: session-7.scope: Deactivated successfully.
Apr 13 20:10:02.362698 systemd[1]: session-7.scope: Consumed 3.181s CPU time, 158.6M memory peak, 0B memory swap peak.
Apr 13 20:10:02.363738 systemd-logind[1439]: Removed session 7.
Apr 13 20:10:04.356430 systemd[1]: Created slice kubepods-besteffort-pode63279f4_a58e_436a_bb71_78b3b63b83b0.slice - libcontainer container kubepods-besteffort-pode63279f4_a58e_436a_bb71_78b3b63b83b0.slice.
Apr 13 20:10:04.401797 kubelet[2546]: I0413 20:10:04.401675 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e63279f4-a58e-436a-bb71-78b3b63b83b0-tigera-ca-bundle\") pod \"calico-typha-796f9654b8-ztptq\" (UID: \"e63279f4-a58e-436a-bb71-78b3b63b83b0\") " pod="calico-system/calico-typha-796f9654b8-ztptq"
Apr 13 20:10:04.401797 kubelet[2546]: I0413 20:10:04.401711 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e63279f4-a58e-436a-bb71-78b3b63b83b0-typha-certs\") pod \"calico-typha-796f9654b8-ztptq\" (UID: \"e63279f4-a58e-436a-bb71-78b3b63b83b0\") " pod="calico-system/calico-typha-796f9654b8-ztptq"
Apr 13 20:10:04.401797 kubelet[2546]: I0413 20:10:04.401737 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjt72\" (UniqueName: \"kubernetes.io/projected/e63279f4-a58e-436a-bb71-78b3b63b83b0-kube-api-access-wjt72\") pod \"calico-typha-796f9654b8-ztptq\" (UID: \"e63279f4-a58e-436a-bb71-78b3b63b83b0\") " pod="calico-system/calico-typha-796f9654b8-ztptq"
Apr 13 20:10:04.422398 systemd[1]: Created slice kubepods-besteffort-podf508324a_3899_4023_8f46_b2f8595c5020.slice - libcontainer container kubepods-besteffort-podf508324a_3899_4023_8f46_b2f8595c5020.slice.
Apr 13 20:10:04.528739 update_engine[1440]: I20260413 20:10:04.527916 1440 update_attempter.cc:509] Updating boot flags...
Apr 13 20:10:04.561662 kubelet[2546]: E0413 20:10:04.561422 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sgfs5" podUID="ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9"
Apr 13 20:10:04.603580 kubelet[2546]: I0413 20:10:04.603531 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f508324a-3899-4023-8f46-b2f8595c5020-var-lib-calico\") pod \"calico-node-2bjd4\" (UID: \"f508324a-3899-4023-8f46-b2f8595c5020\") " pod="calico-system/calico-node-2bjd4"
Apr 13 20:10:04.603580 kubelet[2546]: I0413 20:10:04.603577 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f508324a-3899-4023-8f46-b2f8595c5020-cni-net-dir\") pod \"calico-node-2bjd4\" (UID: \"f508324a-3899-4023-8f46-b2f8595c5020\") " pod="calico-system/calico-node-2bjd4"
Apr 13 20:10:04.604224 kubelet[2546]: I0413 20:10:04.603600 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f508324a-3899-4023-8f46-b2f8595c5020-cni-bin-dir\") pod \"calico-node-2bjd4\" (UID: \"f508324a-3899-4023-8f46-b2f8595c5020\") " pod="calico-system/calico-node-2bjd4"
Apr 13 20:10:04.604224 kubelet[2546]: I0413 20:10:04.603620 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f508324a-3899-4023-8f46-b2f8595c5020-flexvol-driver-host\") pod \"calico-node-2bjd4\" (UID: \"f508324a-3899-4023-8f46-b2f8595c5020\") " pod="calico-system/calico-node-2bjd4"
Apr 13 20:10:04.604224 kubelet[2546]: I0413 20:10:04.603646 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f508324a-3899-4023-8f46-b2f8595c5020-node-certs\") pod \"calico-node-2bjd4\" (UID: \"f508324a-3899-4023-8f46-b2f8595c5020\") " pod="calico-system/calico-node-2bjd4"
Apr 13 20:10:04.604224 kubelet[2546]: I0413 20:10:04.603669 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spwrf\" (UniqueName: \"kubernetes.io/projected/f508324a-3899-4023-8f46-b2f8595c5020-kube-api-access-spwrf\") pod \"calico-node-2bjd4\" (UID: \"f508324a-3899-4023-8f46-b2f8595c5020\") " pod="calico-system/calico-node-2bjd4"
Apr 13 20:10:04.604224 kubelet[2546]: I0413 20:10:04.603696 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f508324a-3899-4023-8f46-b2f8595c5020-var-run-calico\") pod \"calico-node-2bjd4\" (UID: \"f508324a-3899-4023-8f46-b2f8595c5020\") " pod="calico-system/calico-node-2bjd4"
Apr 13 20:10:04.604380 kubelet[2546]: I0413 20:10:04.603721 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f508324a-3899-4023-8f46-b2f8595c5020-tigera-ca-bundle\") pod \"calico-node-2bjd4\" (UID: \"f508324a-3899-4023-8f46-b2f8595c5020\") " pod="calico-system/calico-node-2bjd4"
Apr 13 20:10:04.604380 kubelet[2546]: I0413 20:10:04.603743 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/f508324a-3899-4023-8f46-b2f8595c5020-bpffs\") pod \"calico-node-2bjd4\" (UID: \"f508324a-3899-4023-8f46-b2f8595c5020\") " pod="calico-system/calico-node-2bjd4"
Apr 13 20:10:04.604380 kubelet[2546]: I0413 20:10:04.603763 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/f508324a-3899-4023-8f46-b2f8595c5020-nodeproc\") pod \"calico-node-2bjd4\" (UID: \"f508324a-3899-4023-8f46-b2f8595c5020\") " pod="calico-system/calico-node-2bjd4"
Apr 13 20:10:04.604380 kubelet[2546]: I0413 20:10:04.603786 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f508324a-3899-4023-8f46-b2f8595c5020-policysync\") pod \"calico-node-2bjd4\" (UID: \"f508324a-3899-4023-8f46-b2f8595c5020\") " pod="calico-system/calico-node-2bjd4"
Apr 13 20:10:04.604380 kubelet[2546]: I0413 20:10:04.603811 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/f508324a-3899-4023-8f46-b2f8595c5020-sys-fs\") pod \"calico-node-2bjd4\" (UID: \"f508324a-3899-4023-8f46-b2f8595c5020\") " pod="calico-system/calico-node-2bjd4"
Apr 13 20:10:04.604501 kubelet[2546]: I0413 20:10:04.603831 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f508324a-3899-4023-8f46-b2f8595c5020-xtables-lock\") pod \"calico-node-2bjd4\" (UID: \"f508324a-3899-4023-8f46-b2f8595c5020\") " pod="calico-system/calico-node-2bjd4"
Apr 13 20:10:04.604501 kubelet[2546]: I0413 20:10:04.603858 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f508324a-3899-4023-8f46-b2f8595c5020-cni-log-dir\") pod \"calico-node-2bjd4\" (UID: \"f508324a-3899-4023-8f46-b2f8595c5020\") " pod="calico-system/calico-node-2bjd4"
Apr 13 20:10:04.604501 kubelet[2546]: I0413 20:10:04.603902 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f508324a-3899-4023-8f46-b2f8595c5020-lib-modules\") pod \"calico-node-2bjd4\" (UID: \"f508324a-3899-4023-8f46-b2f8595c5020\") " pod="calico-system/calico-node-2bjd4"
Apr 13 20:10:04.639860 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2960)
Apr 13 20:10:04.677851 kubelet[2546]: E0413 20:10:04.677173 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:10:04.679044 containerd[1455]: time="2026-04-13T20:10:04.678980709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-796f9654b8-ztptq,Uid:e63279f4-a58e-436a-bb71-78b3b63b83b0,Namespace:calico-system,Attempt:0,}"
Apr 13 20:10:04.705597 kubelet[2546]: I0413 20:10:04.704706 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mr9j\" (UniqueName: \"kubernetes.io/projected/ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9-kube-api-access-6mr9j\") pod \"csi-node-driver-sgfs5\" (UID: \"ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9\") " pod="calico-system/csi-node-driver-sgfs5"
Apr 13 20:10:04.705597 kubelet[2546]: I0413 20:10:04.704754 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9-varrun\") pod \"csi-node-driver-sgfs5\" (UID: \"ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9\") " pod="calico-system/csi-node-driver-sgfs5"
Apr 13 20:10:04.705597 kubelet[2546]: I0413 20:10:04.704804 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9-socket-dir\") pod \"csi-node-driver-sgfs5\" (UID: \"ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9\") " pod="calico-system/csi-node-driver-sgfs5"
Apr 13 20:10:04.705597 kubelet[2546]: I0413 20:10:04.704828 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9-registration-dir\") pod \"csi-node-driver-sgfs5\" (UID: \"ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9\") " pod="calico-system/csi-node-driver-sgfs5"
Apr 13 20:10:04.705597 kubelet[2546]: I0413 20:10:04.704887 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9-kubelet-dir\") pod \"csi-node-driver-sgfs5\" (UID: \"ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9\") " pod="calico-system/csi-node-driver-sgfs5"
Apr 13 20:10:04.706642 kubelet[2546]: E0413 20:10:04.706625 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:10:04.706834 kubelet[2546]: W0413 20:10:04.706730 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:10:04.706834 kubelet[2546]: E0413 20:10:04.706752 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:10:04.707069 kubelet[2546]: E0413 20:10:04.707058 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:10:04.707224 kubelet[2546]: W0413 20:10:04.707114 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:10:04.707224 kubelet[2546]: E0413 20:10:04.707130 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:10:04.707469 kubelet[2546]: E0413 20:10:04.707458 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:10:04.707619 kubelet[2546]: W0413 20:10:04.707532 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:10:04.707619 kubelet[2546]: E0413 20:10:04.707547 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 13 20:10:04.707935 kubelet[2546]: E0413 20:10:04.707922 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.708072 kubelet[2546]: W0413 20:10:04.707975 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.708072 kubelet[2546]: E0413 20:10:04.707988 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:04.708463 kubelet[2546]: E0413 20:10:04.708452 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.708538 kubelet[2546]: W0413 20:10:04.708527 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.708582 kubelet[2546]: E0413 20:10:04.708572 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:04.709037 kubelet[2546]: E0413 20:10:04.709026 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.709089 kubelet[2546]: W0413 20:10:04.709080 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.709154 kubelet[2546]: E0413 20:10:04.709131 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:04.709630 kubelet[2546]: E0413 20:10:04.709412 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.709630 kubelet[2546]: W0413 20:10:04.709603 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.709630 kubelet[2546]: E0413 20:10:04.709616 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:04.711656 kubelet[2546]: E0413 20:10:04.711641 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.712385 kubelet[2546]: W0413 20:10:04.711981 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.712385 kubelet[2546]: E0413 20:10:04.712001 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:04.712558 kubelet[2546]: E0413 20:10:04.712533 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.717653 kubelet[2546]: W0413 20:10:04.716928 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.717653 kubelet[2546]: E0413 20:10:04.716959 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:04.717653 kubelet[2546]: E0413 20:10:04.717369 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.717653 kubelet[2546]: W0413 20:10:04.717378 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.718369 kubelet[2546]: E0413 20:10:04.717807 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:04.718369 kubelet[2546]: E0413 20:10:04.718308 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.718369 kubelet[2546]: W0413 20:10:04.718316 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.718369 kubelet[2546]: E0413 20:10:04.718346 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:04.719773 kubelet[2546]: E0413 20:10:04.718685 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.719773 kubelet[2546]: W0413 20:10:04.718697 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.719773 kubelet[2546]: E0413 20:10:04.718707 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:04.721074 kubelet[2546]: E0413 20:10:04.721051 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.721074 kubelet[2546]: W0413 20:10:04.721066 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.721141 kubelet[2546]: E0413 20:10:04.721076 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:04.721319 kubelet[2546]: E0413 20:10:04.721301 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.721319 kubelet[2546]: W0413 20:10:04.721315 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.721371 kubelet[2546]: E0413 20:10:04.721323 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:04.721589 kubelet[2546]: E0413 20:10:04.721577 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.722976 kubelet[2546]: W0413 20:10:04.721628 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.722976 kubelet[2546]: E0413 20:10:04.721830 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:04.723119 kubelet[2546]: E0413 20:10:04.723107 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.723233 kubelet[2546]: W0413 20:10:04.723163 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.723233 kubelet[2546]: E0413 20:10:04.723177 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:04.728076 kubelet[2546]: E0413 20:10:04.728062 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.728161 kubelet[2546]: W0413 20:10:04.728143 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.728298 kubelet[2546]: E0413 20:10:04.728210 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:04.728584 kubelet[2546]: E0413 20:10:04.728565 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.728584 kubelet[2546]: W0413 20:10:04.728580 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.728843 kubelet[2546]: E0413 20:10:04.728591 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:04.729010 containerd[1455]: time="2026-04-13T20:10:04.728936220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:04.729322 kubelet[2546]: E0413 20:10:04.729302 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.729322 kubelet[2546]: W0413 20:10:04.729317 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.729372 kubelet[2546]: E0413 20:10:04.729326 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:04.729825 containerd[1455]: time="2026-04-13T20:10:04.729792769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:04.729966 containerd[1455]: time="2026-04-13T20:10:04.729934489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:04.730100 kubelet[2546]: E0413 20:10:04.730083 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.730100 kubelet[2546]: W0413 20:10:04.730096 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.730154 kubelet[2546]: E0413 20:10:04.730106 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:04.730380 kubelet[2546]: E0413 20:10:04.730362 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.730380 kubelet[2546]: W0413 20:10:04.730377 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.730442 kubelet[2546]: E0413 20:10:04.730385 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:04.730719 kubelet[2546]: E0413 20:10:04.730700 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.730976 containerd[1455]: time="2026-04-13T20:10:04.730661829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:04.731979 kubelet[2546]: W0413 20:10:04.731902 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.731979 kubelet[2546]: E0413 20:10:04.731920 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:04.732147 kubelet[2546]: E0413 20:10:04.732129 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.732147 kubelet[2546]: W0413 20:10:04.732137 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.732147 kubelet[2546]: E0413 20:10:04.732145 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:04.732493 kubelet[2546]: E0413 20:10:04.732327 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.732493 kubelet[2546]: W0413 20:10:04.732336 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.732493 kubelet[2546]: E0413 20:10:04.732344 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:04.732577 kubelet[2546]: E0413 20:10:04.732560 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.732577 kubelet[2546]: W0413 20:10:04.732567 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.732577 kubelet[2546]: E0413 20:10:04.732575 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:04.734367 kubelet[2546]: E0413 20:10:04.734010 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.734367 kubelet[2546]: W0413 20:10:04.734023 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.734367 kubelet[2546]: E0413 20:10:04.734033 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:04.743831 kubelet[2546]: E0413 20:10:04.738301 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.743831 kubelet[2546]: W0413 20:10:04.738313 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.743831 kubelet[2546]: E0413 20:10:04.738324 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:04.743831 kubelet[2546]: E0413 20:10:04.738559 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.743831 kubelet[2546]: W0413 20:10:04.738567 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.743831 kubelet[2546]: E0413 20:10:04.738576 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:04.743831 kubelet[2546]: E0413 20:10:04.739124 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.743831 kubelet[2546]: W0413 20:10:04.739133 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.743831 kubelet[2546]: E0413 20:10:04.739142 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:04.743831 kubelet[2546]: E0413 20:10:04.739400 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.744197 kubelet[2546]: W0413 20:10:04.739408 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.744197 kubelet[2546]: E0413 20:10:04.739417 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:04.744197 kubelet[2546]: E0413 20:10:04.739651 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.744197 kubelet[2546]: W0413 20:10:04.739659 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.744197 kubelet[2546]: E0413 20:10:04.739667 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:04.744197 kubelet[2546]: E0413 20:10:04.739938 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.744197 kubelet[2546]: W0413 20:10:04.739947 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.744197 kubelet[2546]: E0413 20:10:04.739955 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:04.744197 kubelet[2546]: E0413 20:10:04.740188 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.744197 kubelet[2546]: W0413 20:10:04.740195 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.744399 kubelet[2546]: E0413 20:10:04.740203 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:04.744399 kubelet[2546]: E0413 20:10:04.740433 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.744399 kubelet[2546]: W0413 20:10:04.740441 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.744399 kubelet[2546]: E0413 20:10:04.740449 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:04.744399 kubelet[2546]: E0413 20:10:04.741327 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.744399 kubelet[2546]: W0413 20:10:04.741335 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.744399 kubelet[2546]: E0413 20:10:04.741343 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:04.744399 kubelet[2546]: E0413 20:10:04.741695 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.744399 kubelet[2546]: W0413 20:10:04.741703 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.744399 kubelet[2546]: E0413 20:10:04.741711 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:04.744611 kubelet[2546]: E0413 20:10:04.741962 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.744611 kubelet[2546]: W0413 20:10:04.741970 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.744611 kubelet[2546]: E0413 20:10:04.741978 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:04.744611 kubelet[2546]: E0413 20:10:04.742224 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.744611 kubelet[2546]: W0413 20:10:04.742240 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.744611 kubelet[2546]: E0413 20:10:04.742248 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:04.744611 kubelet[2546]: E0413 20:10:04.742496 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.744611 kubelet[2546]: W0413 20:10:04.742504 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.744611 kubelet[2546]: E0413 20:10:04.742512 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:04.744611 kubelet[2546]: E0413 20:10:04.742905 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.744821 kubelet[2546]: W0413 20:10:04.742914 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.744821 kubelet[2546]: E0413 20:10:04.742922 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:04.749040 kubelet[2546]: E0413 20:10:04.746068 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.749040 kubelet[2546]: W0413 20:10:04.746081 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.749040 kubelet[2546]: E0413 20:10:04.746091 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:04.749040 kubelet[2546]: E0413 20:10:04.746342 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:04.749040 kubelet[2546]: W0413 20:10:04.746349 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:04.749040 kubelet[2546]: E0413 20:10:04.746357 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:04.749040 kubelet[2546]: E0413 20:10:04.747275 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:10:04.749040 kubelet[2546]: W0413 20:10:04.747284 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:10:04.749040 kubelet[2546]: E0413 20:10:04.747320 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:10:04.778028 systemd[1]: Started cri-containerd-3082b54d05098b2c5eaad37de77dba63e8e6bee56c281175ee3a0f153040e041.scope - libcontainer container 3082b54d05098b2c5eaad37de77dba63e8e6bee56c281175ee3a0f153040e041.
Apr 13 20:10:04.793043 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2964)
Apr 13 20:10:04.899323 containerd[1455]: time="2026-04-13T20:10:04.899286182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-796f9654b8-ztptq,Uid:e63279f4-a58e-436a-bb71-78b3b63b83b0,Namespace:calico-system,Attempt:0,} returns sandbox id \"3082b54d05098b2c5eaad37de77dba63e8e6bee56c281175ee3a0f153040e041\""
Apr 13 20:10:04.900359 kubelet[2546]: E0413 20:10:04.900339 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:10:04.902112 containerd[1455]: time="2026-04-13T20:10:04.902090951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Apr 13 20:10:05.032362 containerd[1455]: time="2026-04-13T20:10:05.031956603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2bjd4,Uid:f508324a-3899-4023-8f46-b2f8595c5020,Namespace:calico-system,Attempt:0,}"
Apr 13 20:10:05.053970 containerd[1455]: time="2026-04-13T20:10:05.053825037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..."
runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:10:05.053970 containerd[1455]: time="2026-04-13T20:10:05.053904216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:10:05.053970 containerd[1455]: time="2026-04-13T20:10:05.053925846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:10:05.054109 containerd[1455]: time="2026-04-13T20:10:05.054017756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:10:05.075009 systemd[1]: Started cri-containerd-fcad389a5f47d96bf58cd9f229f10a753dcef0669b651dcfd59df7ce7da5ac01.scope - libcontainer container fcad389a5f47d96bf58cd9f229f10a753dcef0669b651dcfd59df7ce7da5ac01.
Apr 13 20:10:05.098577 containerd[1455]: time="2026-04-13T20:10:05.098522952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2bjd4,Uid:f508324a-3899-4023-8f46-b2f8595c5020,Namespace:calico-system,Attempt:0,} returns sandbox id \"fcad389a5f47d96bf58cd9f229f10a753dcef0669b651dcfd59df7ce7da5ac01\""
Apr 13 20:10:05.914484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3470913466.mount: Deactivated successfully.
Apr 13 20:10:06.428865 kubelet[2546]: E0413 20:10:06.428474 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sgfs5" podUID="ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9"
Apr 13 20:10:06.516113 containerd[1455]: time="2026-04-13T20:10:06.516072158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:10:06.516919 containerd[1455]: time="2026-04-13T20:10:06.516814717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Apr 13 20:10:06.517318 containerd[1455]: time="2026-04-13T20:10:06.517279068Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:10:06.519902 containerd[1455]: time="2026-04-13T20:10:06.519248007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:10:06.520559 containerd[1455]: time="2026-04-13T20:10:06.520013238Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.617897077s"
Apr 13 20:10:06.520559 containerd[1455]: time="2026-04-13T20:10:06.520043267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Apr 13 20:10:06.521943 containerd[1455]: time="2026-04-13T20:10:06.521722926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Apr 13 20:10:06.538623 containerd[1455]: time="2026-04-13T20:10:06.538588013Z" level=info msg="CreateContainer within sandbox \"3082b54d05098b2c5eaad37de77dba63e8e6bee56c281175ee3a0f153040e041\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 13 20:10:06.555114 containerd[1455]: time="2026-04-13T20:10:06.554976610Z" level=info msg="CreateContainer within sandbox \"3082b54d05098b2c5eaad37de77dba63e8e6bee56c281175ee3a0f153040e041\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ef1332aa6b3828b0c0afe20a9619eeb7b8c7a6a3bd23d31e8849c3db2c276b40\""
Apr 13 20:10:06.556912 containerd[1455]: time="2026-04-13T20:10:06.555780529Z" level=info msg="StartContainer for \"ef1332aa6b3828b0c0afe20a9619eeb7b8c7a6a3bd23d31e8849c3db2c276b40\""
Apr 13 20:10:06.597040 systemd[1]: Started cri-containerd-ef1332aa6b3828b0c0afe20a9619eeb7b8c7a6a3bd23d31e8849c3db2c276b40.scope - libcontainer container ef1332aa6b3828b0c0afe20a9619eeb7b8c7a6a3bd23d31e8849c3db2c276b40.
Apr 13 20:10:06.646227 containerd[1455]: time="2026-04-13T20:10:06.645127119Z" level=info msg="StartContainer for \"ef1332aa6b3828b0c0afe20a9619eeb7b8c7a6a3bd23d31e8849c3db2c276b40\" returns successfully" Apr 13 20:10:07.133631 kubelet[2546]: E0413 20:10:07.133438 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 13 20:10:07.223983 kubelet[2546]: E0413 20:10:07.223863 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:07.223983 kubelet[2546]: W0413 20:10:07.223956 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:07.223983 kubelet[2546]: E0413 20:10:07.223995 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:07.224557 kubelet[2546]: E0413 20:10:07.224523 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:07.224650 kubelet[2546]: W0413 20:10:07.224555 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:07.224650 kubelet[2546]: E0413 20:10:07.224586 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:07.224938 kubelet[2546]: E0413 20:10:07.224903 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:07.224938 kubelet[2546]: W0413 20:10:07.224914 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:07.224938 kubelet[2546]: E0413 20:10:07.224945 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:07.225288 kubelet[2546]: E0413 20:10:07.225259 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:07.225288 kubelet[2546]: W0413 20:10:07.225273 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:07.225288 kubelet[2546]: E0413 20:10:07.225282 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:07.225666 kubelet[2546]: E0413 20:10:07.225629 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:07.225732 kubelet[2546]: W0413 20:10:07.225662 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:07.225732 kubelet[2546]: E0413 20:10:07.225694 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:07.442543 containerd[1455]: time="2026-04-13T20:10:07.441404483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:07.442543 containerd[1455]: time="2026-04-13T20:10:07.442363593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 13 20:10:07.444035 containerd[1455]: time="2026-04-13T20:10:07.443983642Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:07.446235 containerd[1455]: time="2026-04-13T20:10:07.446022432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:07.446812 containerd[1455]: time="2026-04-13T20:10:07.446778052Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 924.775755ms" Apr 13 20:10:07.446862 containerd[1455]: time="2026-04-13T20:10:07.446812512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 13 20:10:07.451212 containerd[1455]: time="2026-04-13T20:10:07.450970621Z" level=info msg="CreateContainer within sandbox \"fcad389a5f47d96bf58cd9f229f10a753dcef0669b651dcfd59df7ce7da5ac01\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 13 20:10:07.462572 containerd[1455]: time="2026-04-13T20:10:07.462539100Z" level=info msg="CreateContainer within sandbox \"fcad389a5f47d96bf58cd9f229f10a753dcef0669b651dcfd59df7ce7da5ac01\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"775c6684a95722794a8246450130afe12f7e881c11be0e1d7c05a012207b22e9\"" Apr 13 20:10:07.463828 containerd[1455]: time="2026-04-13T20:10:07.463427800Z" level=info msg="StartContainer for \"775c6684a95722794a8246450130afe12f7e881c11be0e1d7c05a012207b22e9\"" Apr 13 20:10:07.491020 systemd[1]: Started cri-containerd-775c6684a95722794a8246450130afe12f7e881c11be0e1d7c05a012207b22e9.scope - libcontainer container 775c6684a95722794a8246450130afe12f7e881c11be0e1d7c05a012207b22e9. 
Apr 13 20:10:07.508609 kubelet[2546]: E0413 20:10:07.508342 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 13 20:10:07.530307 kubelet[2546]: E0413 20:10:07.530020 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:07.530307 kubelet[2546]: W0413 20:10:07.530041 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:07.530307 kubelet[2546]: E0413 20:10:07.530059 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:07.532898 kubelet[2546]: E0413 20:10:07.531974 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:07.532898 kubelet[2546]: W0413 20:10:07.531989 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:07.532898 kubelet[2546]: E0413 20:10:07.532005 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:07.534669 kubelet[2546]: E0413 20:10:07.534653 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:07.534669 kubelet[2546]: W0413 20:10:07.534668 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:07.534769 kubelet[2546]: E0413 20:10:07.534682 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:07.535243 kubelet[2546]: E0413 20:10:07.535229 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:07.535243 kubelet[2546]: W0413 20:10:07.535243 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:07.535300 kubelet[2546]: E0413 20:10:07.535258 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:07.535569 kubelet[2546]: E0413 20:10:07.535554 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:07.535613 kubelet[2546]: W0413 20:10:07.535569 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:07.535613 kubelet[2546]: E0413 20:10:07.535581 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:07.535834 kubelet[2546]: E0413 20:10:07.535821 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:07.535834 kubelet[2546]: W0413 20:10:07.535833 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:07.535912 kubelet[2546]: E0413 20:10:07.535844 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:07.536087 kubelet[2546]: E0413 20:10:07.536075 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:07.536124 kubelet[2546]: W0413 20:10:07.536088 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:07.536124 kubelet[2546]: E0413 20:10:07.536097 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:07.536342 kubelet[2546]: E0413 20:10:07.536330 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:07.536460 kubelet[2546]: W0413 20:10:07.536410 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:07.536460 kubelet[2546]: E0413 20:10:07.536426 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:07.537089 kubelet[2546]: E0413 20:10:07.537077 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:07.537213 kubelet[2546]: W0413 20:10:07.537153 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:07.537213 kubelet[2546]: E0413 20:10:07.537166 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:07.537578 kubelet[2546]: E0413 20:10:07.537479 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:07.537578 kubelet[2546]: W0413 20:10:07.537491 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:07.537578 kubelet[2546]: E0413 20:10:07.537500 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:07.537955 kubelet[2546]: E0413 20:10:07.537945 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:07.538070 kubelet[2546]: W0413 20:10:07.538001 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:07.538070 kubelet[2546]: E0413 20:10:07.538013 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:07.538384 kubelet[2546]: E0413 20:10:07.538292 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:07.538384 kubelet[2546]: W0413 20:10:07.538302 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:07.538384 kubelet[2546]: E0413 20:10:07.538310 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:07.538619 kubelet[2546]: E0413 20:10:07.538608 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:07.538712 kubelet[2546]: W0413 20:10:07.538657 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:07.538712 kubelet[2546]: E0413 20:10:07.538670 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:07.539182 kubelet[2546]: E0413 20:10:07.539171 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:07.539277 kubelet[2546]: W0413 20:10:07.539221 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:07.539277 kubelet[2546]: E0413 20:10:07.539234 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:10:07.539566 kubelet[2546]: E0413 20:10:07.539504 2546 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:10:07.539566 kubelet[2546]: W0413 20:10:07.539514 2546 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:10:07.539566 kubelet[2546]: E0413 20:10:07.539521 2546 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:10:07.546239 containerd[1455]: time="2026-04-13T20:10:07.546165567Z" level=info msg="StartContainer for \"775c6684a95722794a8246450130afe12f7e881c11be0e1d7c05a012207b22e9\" returns successfully" Apr 13 20:10:07.560128 systemd[1]: cri-containerd-775c6684a95722794a8246450130afe12f7e881c11be0e1d7c05a012207b22e9.scope: Deactivated successfully. Apr 13 20:10:07.580764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-775c6684a95722794a8246450130afe12f7e881c11be0e1d7c05a012207b22e9-rootfs.mount: Deactivated successfully. 
Apr 13 20:10:07.704211 containerd[1455]: time="2026-04-13T20:10:07.702930134Z" level=info msg="shim disconnected" id=775c6684a95722794a8246450130afe12f7e881c11be0e1d7c05a012207b22e9 namespace=k8s.io Apr 13 20:10:07.704211 containerd[1455]: time="2026-04-13T20:10:07.702981254Z" level=warning msg="cleaning up after shim disconnected" id=775c6684a95722794a8246450130afe12f7e881c11be0e1d7c05a012207b22e9 namespace=k8s.io Apr 13 20:10:07.704211 containerd[1455]: time="2026-04-13T20:10:07.702992744Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:10:08.413897 kubelet[2546]: E0413 20:10:08.413460 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 13 20:10:08.424253 kubelet[2546]: I0413 20:10:08.424154 2546 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-796f9654b8-ztptq" podStartSLOduration=2.804933201 podStartE2EDuration="4.424132527s" podCreationTimestamp="2026-04-13 20:10:04 +0000 UTC" firstStartedPulling="2026-04-13 20:10:04.901571051 +0000 UTC m=+16.582253101" lastFinishedPulling="2026-04-13 20:10:06.520770377 +0000 UTC m=+18.201452427" observedRunningTime="2026-04-13 20:10:07.521688171 +0000 UTC m=+19.202370221" watchObservedRunningTime="2026-04-13 20:10:08.424132527 +0000 UTC m=+20.104814577" Apr 13 20:10:08.427786 kubelet[2546]: E0413 20:10:08.427742 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sgfs5" podUID="ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9" Apr 13 20:10:08.510686 kubelet[2546]: I0413 20:10:08.510642 2546 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:10:08.512656 kubelet[2546]: E0413 
20:10:08.512196 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 13 20:10:08.516547 containerd[1455]: time="2026-04-13T20:10:08.516234629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 13 20:10:10.427836 kubelet[2546]: E0413 20:10:10.427795 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sgfs5" podUID="ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9" Apr 13 20:10:12.427535 kubelet[2546]: E0413 20:10:12.427003 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sgfs5" podUID="ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9" Apr 13 20:10:12.582934 kubelet[2546]: I0413 20:10:12.581347 2546 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:10:12.582934 kubelet[2546]: E0413 20:10:12.581654 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 13 20:10:12.810453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3919932487.mount: Deactivated successfully. 
Apr 13 20:10:12.840910 containerd[1455]: time="2026-04-13T20:10:12.840807950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:12.841663 containerd[1455]: time="2026-04-13T20:10:12.841497670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 13 20:10:12.842139 containerd[1455]: time="2026-04-13T20:10:12.842101030Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:12.844099 containerd[1455]: time="2026-04-13T20:10:12.844077490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:12.845104 containerd[1455]: time="2026-04-13T20:10:12.844658480Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.328382011s" Apr 13 20:10:12.845104 containerd[1455]: time="2026-04-13T20:10:12.844686960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 13 20:10:12.848777 containerd[1455]: time="2026-04-13T20:10:12.848744531Z" level=info msg="CreateContainer within sandbox \"fcad389a5f47d96bf58cd9f229f10a753dcef0669b651dcfd59df7ce7da5ac01\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 13 20:10:12.862676 containerd[1455]: time="2026-04-13T20:10:12.862651583Z" level=info 
msg="CreateContainer within sandbox \"fcad389a5f47d96bf58cd9f229f10a753dcef0669b651dcfd59df7ce7da5ac01\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"475b86e7dd70c8dd899a7a67dbf89a75f5f869832d481579f78383ac12a363b8\"" Apr 13 20:10:12.866549 containerd[1455]: time="2026-04-13T20:10:12.866528844Z" level=info msg="StartContainer for \"475b86e7dd70c8dd899a7a67dbf89a75f5f869832d481579f78383ac12a363b8\"" Apr 13 20:10:12.906028 systemd[1]: Started cri-containerd-475b86e7dd70c8dd899a7a67dbf89a75f5f869832d481579f78383ac12a363b8.scope - libcontainer container 475b86e7dd70c8dd899a7a67dbf89a75f5f869832d481579f78383ac12a363b8. Apr 13 20:10:12.937861 containerd[1455]: time="2026-04-13T20:10:12.937765236Z" level=info msg="StartContainer for \"475b86e7dd70c8dd899a7a67dbf89a75f5f869832d481579f78383ac12a363b8\" returns successfully" Apr 13 20:10:12.974581 systemd[1]: cri-containerd-475b86e7dd70c8dd899a7a67dbf89a75f5f869832d481579f78383ac12a363b8.scope: Deactivated successfully. 
Apr 13 20:10:13.134962 containerd[1455]: time="2026-04-13T20:10:13.134862286Z" level=info msg="shim disconnected" id=475b86e7dd70c8dd899a7a67dbf89a75f5f869832d481579f78383ac12a363b8 namespace=k8s.io Apr 13 20:10:13.134962 containerd[1455]: time="2026-04-13T20:10:13.134952786Z" level=warning msg="cleaning up after shim disconnected" id=475b86e7dd70c8dd899a7a67dbf89a75f5f869832d481579f78383ac12a363b8 namespace=k8s.io Apr 13 20:10:13.134962 containerd[1455]: time="2026-04-13T20:10:13.134980906Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:10:13.525356 kubelet[2546]: E0413 20:10:13.524735 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 13 20:10:13.526050 containerd[1455]: time="2026-04-13T20:10:13.525903682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 13 20:10:13.810650 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-475b86e7dd70c8dd899a7a67dbf89a75f5f869832d481579f78383ac12a363b8-rootfs.mount: Deactivated successfully. 
Apr 13 20:10:14.427399 kubelet[2546]: E0413 20:10:14.427338 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sgfs5" podUID="ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9" Apr 13 20:10:15.583511 containerd[1455]: time="2026-04-13T20:10:15.583449578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:15.584553 containerd[1455]: time="2026-04-13T20:10:15.584429928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 13 20:10:15.586426 containerd[1455]: time="2026-04-13T20:10:15.585141009Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:15.588145 containerd[1455]: time="2026-04-13T20:10:15.588121290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:15.588797 containerd[1455]: time="2026-04-13T20:10:15.588775119Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 2.062842477s" Apr 13 20:10:15.588866 containerd[1455]: time="2026-04-13T20:10:15.588852539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference 
\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 13 20:10:15.594014 containerd[1455]: time="2026-04-13T20:10:15.593962541Z" level=info msg="CreateContainer within sandbox \"fcad389a5f47d96bf58cd9f229f10a753dcef0669b651dcfd59df7ce7da5ac01\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 13 20:10:15.609414 containerd[1455]: time="2026-04-13T20:10:15.609379866Z" level=info msg="CreateContainer within sandbox \"fcad389a5f47d96bf58cd9f229f10a753dcef0669b651dcfd59df7ce7da5ac01\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3a5d7753b392776953708cc73edfa8cea5e912c2ded92dc4ade0d2ff66a252d3\"" Apr 13 20:10:15.610029 containerd[1455]: time="2026-04-13T20:10:15.610006337Z" level=info msg="StartContainer for \"3a5d7753b392776953708cc73edfa8cea5e912c2ded92dc4ade0d2ff66a252d3\"" Apr 13 20:10:15.643009 systemd[1]: Started cri-containerd-3a5d7753b392776953708cc73edfa8cea5e912c2ded92dc4ade0d2ff66a252d3.scope - libcontainer container 3a5d7753b392776953708cc73edfa8cea5e912c2ded92dc4ade0d2ff66a252d3. Apr 13 20:10:15.674797 containerd[1455]: time="2026-04-13T20:10:15.674715096Z" level=info msg="StartContainer for \"3a5d7753b392776953708cc73edfa8cea5e912c2ded92dc4ade0d2ff66a252d3\" returns successfully" Apr 13 20:10:16.221609 containerd[1455]: time="2026-04-13T20:10:16.221545778Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 20:10:16.225166 systemd[1]: cri-containerd-3a5d7753b392776953708cc73edfa8cea5e912c2ded92dc4ade0d2ff66a252d3.scope: Deactivated successfully. 
Apr 13 20:10:16.232076 kubelet[2546]: I0413 20:10:16.231887 2546 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 13 20:10:16.265857 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a5d7753b392776953708cc73edfa8cea5e912c2ded92dc4ade0d2ff66a252d3-rootfs.mount: Deactivated successfully. Apr 13 20:10:16.279344 systemd[1]: Created slice kubepods-besteffort-pod878df965_5680_45b6_bea3_fb9201081031.slice - libcontainer container kubepods-besteffort-pod878df965_5680_45b6_bea3_fb9201081031.slice. Apr 13 20:10:16.280807 containerd[1455]: time="2026-04-13T20:10:16.280513219Z" level=info msg="shim disconnected" id=3a5d7753b392776953708cc73edfa8cea5e912c2ded92dc4ade0d2ff66a252d3 namespace=k8s.io Apr 13 20:10:16.282166 containerd[1455]: time="2026-04-13T20:10:16.281028039Z" level=warning msg="cleaning up after shim disconnected" id=3a5d7753b392776953708cc73edfa8cea5e912c2ded92dc4ade0d2ff66a252d3 namespace=k8s.io Apr 13 20:10:16.282166 containerd[1455]: time="2026-04-13T20:10:16.281046929Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:10:16.299614 systemd[1]: Created slice kubepods-burstable-pod9c60b64c_c287_49a6_9e8f_117d46909ac0.slice - libcontainer container kubepods-burstable-pod9c60b64c_c287_49a6_9e8f_117d46909ac0.slice. 
Apr 13 20:10:16.306463 kubelet[2546]: I0413 20:10:16.306109 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c60b64c-c287-49a6-9e8f-117d46909ac0-config-volume\") pod \"coredns-7d764666f9-pvwlp\" (UID: \"9c60b64c-c287-49a6-9e8f-117d46909ac0\") " pod="kube-system/coredns-7d764666f9-pvwlp"
Apr 13 20:10:16.306463 kubelet[2546]: I0413 20:10:16.306146 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2jl6\" (UniqueName: \"kubernetes.io/projected/9c60b64c-c287-49a6-9e8f-117d46909ac0-kube-api-access-z2jl6\") pod \"coredns-7d764666f9-pvwlp\" (UID: \"9c60b64c-c287-49a6-9e8f-117d46909ac0\") " pod="kube-system/coredns-7d764666f9-pvwlp"
Apr 13 20:10:16.306463 kubelet[2546]: I0413 20:10:16.306186 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/878df965-5680-45b6-bea3-fb9201081031-nginx-config\") pod \"whisker-785cc5bd95-dmhqs\" (UID: \"878df965-5680-45b6-bea3-fb9201081031\") " pod="calico-system/whisker-785cc5bd95-dmhqs"
Apr 13 20:10:16.306463 kubelet[2546]: I0413 20:10:16.306202 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/878df965-5680-45b6-bea3-fb9201081031-whisker-backend-key-pair\") pod \"whisker-785cc5bd95-dmhqs\" (UID: \"878df965-5680-45b6-bea3-fb9201081031\") " pod="calico-system/whisker-785cc5bd95-dmhqs"
Apr 13 20:10:16.306463 kubelet[2546]: I0413 20:10:16.306216 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/878df965-5680-45b6-bea3-fb9201081031-whisker-ca-bundle\") pod \"whisker-785cc5bd95-dmhqs\" (UID: \"878df965-5680-45b6-bea3-fb9201081031\") " pod="calico-system/whisker-785cc5bd95-dmhqs"
Apr 13 20:10:16.306704 kubelet[2546]: I0413 20:10:16.306230 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l45c\" (UniqueName: \"kubernetes.io/projected/878df965-5680-45b6-bea3-fb9201081031-kube-api-access-4l45c\") pod \"whisker-785cc5bd95-dmhqs\" (UID: \"878df965-5680-45b6-bea3-fb9201081031\") " pod="calico-system/whisker-785cc5bd95-dmhqs"
Apr 13 20:10:16.320492 systemd[1]: Created slice kubepods-burstable-pod84d67424_50c0_442a_9169_b582a1cca729.slice - libcontainer container kubepods-burstable-pod84d67424_50c0_442a_9169_b582a1cca729.slice.
Apr 13 20:10:16.333308 systemd[1]: Created slice kubepods-besteffort-poda7e119dc_3238_4cbb_af6c_ff92f19fcb51.slice - libcontainer container kubepods-besteffort-poda7e119dc_3238_4cbb_af6c_ff92f19fcb51.slice.
Apr 13 20:10:16.343257 systemd[1]: Created slice kubepods-besteffort-pod1352fc40_7380_4f40_97a5_2db21f2695cc.slice - libcontainer container kubepods-besteffort-pod1352fc40_7380_4f40_97a5_2db21f2695cc.slice.
Apr 13 20:10:16.349847 systemd[1]: Created slice kubepods-besteffort-podbd97d4d8_3ec3_43d7_ba64_c8ae0cc8d162.slice - libcontainer container kubepods-besteffort-podbd97d4d8_3ec3_43d7_ba64_c8ae0cc8d162.slice.
Apr 13 20:10:16.358951 systemd[1]: Created slice kubepods-besteffort-podfae17250_1960_43f1_bcdd_744eb4b3f5bd.slice - libcontainer container kubepods-besteffort-podfae17250_1960_43f1_bcdd_744eb4b3f5bd.slice.
Apr 13 20:10:16.407097 kubelet[2546]: I0413 20:10:16.407036 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1352fc40-7380-4f40-97a5-2db21f2695cc-tigera-ca-bundle\") pod \"calico-kube-controllers-7c47c8f584-8rsdp\" (UID: \"1352fc40-7380-4f40-97a5-2db21f2695cc\") " pod="calico-system/calico-kube-controllers-7c47c8f584-8rsdp"
Apr 13 20:10:16.408592 kubelet[2546]: I0413 20:10:16.407346 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc5ht\" (UniqueName: \"kubernetes.io/projected/a7e119dc-3238-4cbb-af6c-ff92f19fcb51-kube-api-access-kc5ht\") pod \"calico-apiserver-7b9d58c8c6-fxrn8\" (UID: \"a7e119dc-3238-4cbb-af6c-ff92f19fcb51\") " pod="calico-system/calico-apiserver-7b9d58c8c6-fxrn8"
Apr 13 20:10:16.408592 kubelet[2546]: I0413 20:10:16.407381 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bd97d4d8-3ec3-43d7-ba64-c8ae0cc8d162-calico-apiserver-certs\") pod \"calico-apiserver-7b9d58c8c6-2hljs\" (UID: \"bd97d4d8-3ec3-43d7-ba64-c8ae0cc8d162\") " pod="calico-system/calico-apiserver-7b9d58c8c6-2hljs"
Apr 13 20:10:16.408592 kubelet[2546]: I0413 20:10:16.407403 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkm6t\" (UniqueName: \"kubernetes.io/projected/bd97d4d8-3ec3-43d7-ba64-c8ae0cc8d162-kube-api-access-gkm6t\") pod \"calico-apiserver-7b9d58c8c6-2hljs\" (UID: \"bd97d4d8-3ec3-43d7-ba64-c8ae0cc8d162\") " pod="calico-system/calico-apiserver-7b9d58c8c6-2hljs"
Apr 13 20:10:16.408592 kubelet[2546]: I0413 20:10:16.407425 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84d67424-50c0-442a-9169-b582a1cca729-config-volume\") pod \"coredns-7d764666f9-rwvfl\" (UID: \"84d67424-50c0-442a-9169-b582a1cca729\") " pod="kube-system/coredns-7d764666f9-rwvfl"
Apr 13 20:10:16.408592 kubelet[2546]: I0413 20:10:16.407478 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fae17250-1960-43f1-bcdd-744eb4b3f5bd-config\") pod \"goldmane-9f7667bb8-wnbgx\" (UID: \"fae17250-1960-43f1-bcdd-744eb4b3f5bd\") " pod="calico-system/goldmane-9f7667bb8-wnbgx"
Apr 13 20:10:16.408820 kubelet[2546]: I0413 20:10:16.407500 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fae17250-1960-43f1-bcdd-744eb4b3f5bd-goldmane-key-pair\") pod \"goldmane-9f7667bb8-wnbgx\" (UID: \"fae17250-1960-43f1-bcdd-744eb4b3f5bd\") " pod="calico-system/goldmane-9f7667bb8-wnbgx"
Apr 13 20:10:16.408820 kubelet[2546]: I0413 20:10:16.407520 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmhvw\" (UniqueName: \"kubernetes.io/projected/fae17250-1960-43f1-bcdd-744eb4b3f5bd-kube-api-access-jmhvw\") pod \"goldmane-9f7667bb8-wnbgx\" (UID: \"fae17250-1960-43f1-bcdd-744eb4b3f5bd\") " pod="calico-system/goldmane-9f7667bb8-wnbgx"
Apr 13 20:10:16.408820 kubelet[2546]: I0413 20:10:16.407560 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a7e119dc-3238-4cbb-af6c-ff92f19fcb51-calico-apiserver-certs\") pod \"calico-apiserver-7b9d58c8c6-fxrn8\" (UID: \"a7e119dc-3238-4cbb-af6c-ff92f19fcb51\") " pod="calico-system/calico-apiserver-7b9d58c8c6-fxrn8"
Apr 13 20:10:16.408820 kubelet[2546]: I0413 20:10:16.407584 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brfjg\" (UniqueName: \"kubernetes.io/projected/1352fc40-7380-4f40-97a5-2db21f2695cc-kube-api-access-brfjg\") pod \"calico-kube-controllers-7c47c8f584-8rsdp\" (UID: \"1352fc40-7380-4f40-97a5-2db21f2695cc\") " pod="calico-system/calico-kube-controllers-7c47c8f584-8rsdp"
Apr 13 20:10:16.408820 kubelet[2546]: I0413 20:10:16.407665 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fae17250-1960-43f1-bcdd-744eb4b3f5bd-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-wnbgx\" (UID: \"fae17250-1960-43f1-bcdd-744eb4b3f5bd\") " pod="calico-system/goldmane-9f7667bb8-wnbgx"
Apr 13 20:10:16.408998 kubelet[2546]: I0413 20:10:16.407702 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwsgd\" (UniqueName: \"kubernetes.io/projected/84d67424-50c0-442a-9169-b582a1cca729-kube-api-access-qwsgd\") pod \"coredns-7d764666f9-rwvfl\" (UID: \"84d67424-50c0-442a-9169-b582a1cca729\") " pod="kube-system/coredns-7d764666f9-rwvfl"
Apr 13 20:10:16.440449 systemd[1]: Created slice kubepods-besteffort-podce35ba45_78f2_4c5a_8951_d0c6d05d9ea9.slice - libcontainer container kubepods-besteffort-podce35ba45_78f2_4c5a_8951_d0c6d05d9ea9.slice.
Apr 13 20:10:16.445953 containerd[1455]: time="2026-04-13T20:10:16.445844758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sgfs5,Uid:ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9,Namespace:calico-system,Attempt:0,}"
Apr 13 20:10:16.541593 containerd[1455]: time="2026-04-13T20:10:16.541488143Z" level=error msg="Failed to destroy network for sandbox \"0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.542781 containerd[1455]: time="2026-04-13T20:10:16.542754133Z" level=error msg="encountered an error cleaning up failed sandbox \"0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.542960 containerd[1455]: time="2026-04-13T20:10:16.542918773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sgfs5,Uid:ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.543521 kubelet[2546]: E0413 20:10:16.543496 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.543631 kubelet[2546]: E0413 20:10:16.543613 2546 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sgfs5"
Apr 13 20:10:16.543692 kubelet[2546]: E0413 20:10:16.543678 2546 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sgfs5"
Apr 13 20:10:16.543780 kubelet[2546]: E0413 20:10:16.543760 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sgfs5_calico-system(ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sgfs5_calico-system(ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sgfs5" podUID="ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9"
Apr 13 20:10:16.555067 containerd[1455]: time="2026-04-13T20:10:16.555039387Z" level=info msg="CreateContainer within sandbox \"fcad389a5f47d96bf58cd9f229f10a753dcef0669b651dcfd59df7ce7da5ac01\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 13 20:10:16.567498 containerd[1455]: time="2026-04-13T20:10:16.567454832Z" level=info msg="CreateContainer within sandbox \"fcad389a5f47d96bf58cd9f229f10a753dcef0669b651dcfd59df7ce7da5ac01\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6fa8ecd0c3cf6f7ba76097a63435a4338972c3a5085549e318e9064e34d1347a\""
Apr 13 20:10:16.568915 containerd[1455]: time="2026-04-13T20:10:16.568644272Z" level=info msg="StartContainer for \"6fa8ecd0c3cf6f7ba76097a63435a4338972c3a5085549e318e9064e34d1347a\""
Apr 13 20:10:16.593263 containerd[1455]: time="2026-04-13T20:10:16.593232851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-785cc5bd95-dmhqs,Uid:878df965-5680-45b6-bea3-fb9201081031,Namespace:calico-system,Attempt:0,}"
Apr 13 20:10:16.596062 systemd[1]: Started cri-containerd-6fa8ecd0c3cf6f7ba76097a63435a4338972c3a5085549e318e9064e34d1347a.scope - libcontainer container 6fa8ecd0c3cf6f7ba76097a63435a4338972c3a5085549e318e9064e34d1347a.
Apr 13 20:10:16.616575 kubelet[2546]: E0413 20:10:16.613694 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:10:16.621344 containerd[1455]: time="2026-04-13T20:10:16.621039880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-pvwlp,Uid:9c60b64c-c287-49a6-9e8f-117d46909ac0,Namespace:kube-system,Attempt:0,}"
Apr 13 20:10:16.638899 kubelet[2546]: E0413 20:10:16.636525 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:10:16.639059 containerd[1455]: time="2026-04-13T20:10:16.638927117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-rwvfl,Uid:84d67424-50c0-442a-9169-b582a1cca729,Namespace:kube-system,Attempt:0,}"
Apr 13 20:10:16.640103 containerd[1455]: time="2026-04-13T20:10:16.640076797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b9d58c8c6-fxrn8,Uid:a7e119dc-3238-4cbb-af6c-ff92f19fcb51,Namespace:calico-system,Attempt:0,}"
Apr 13 20:10:16.649672 containerd[1455]: time="2026-04-13T20:10:16.649490011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47c8f584-8rsdp,Uid:1352fc40-7380-4f40-97a5-2db21f2695cc,Namespace:calico-system,Attempt:0,}"
Apr 13 20:10:16.657843 containerd[1455]: time="2026-04-13T20:10:16.657728244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b9d58c8c6-2hljs,Uid:bd97d4d8-3ec3-43d7-ba64-c8ae0cc8d162,Namespace:calico-system,Attempt:0,}"
Apr 13 20:10:16.665600 containerd[1455]: time="2026-04-13T20:10:16.665576207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-wnbgx,Uid:fae17250-1960-43f1-bcdd-744eb4b3f5bd,Namespace:calico-system,Attempt:0,}"
Apr 13 20:10:16.729495 containerd[1455]: time="2026-04-13T20:10:16.729350880Z" level=error msg="Failed to destroy network for sandbox \"5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.730252 containerd[1455]: time="2026-04-13T20:10:16.730219449Z" level=error msg="encountered an error cleaning up failed sandbox \"5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.730317 containerd[1455]: time="2026-04-13T20:10:16.730269330Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-785cc5bd95-dmhqs,Uid:878df965-5680-45b6-bea3-fb9201081031,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.731045 kubelet[2546]: E0413 20:10:16.730447 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.731045 kubelet[2546]: E0413 20:10:16.730494 2546 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-785cc5bd95-dmhqs"
Apr 13 20:10:16.731045 kubelet[2546]: E0413 20:10:16.730513 2546 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-785cc5bd95-dmhqs"
Apr 13 20:10:16.731151 kubelet[2546]: E0413 20:10:16.730555 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-785cc5bd95-dmhqs_calico-system(878df965-5680-45b6-bea3-fb9201081031)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-785cc5bd95-dmhqs_calico-system(878df965-5680-45b6-bea3-fb9201081031)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-785cc5bd95-dmhqs" podUID="878df965-5680-45b6-bea3-fb9201081031"
Apr 13 20:10:16.751278 containerd[1455]: time="2026-04-13T20:10:16.751121407Z" level=info msg="StartContainer for \"6fa8ecd0c3cf6f7ba76097a63435a4338972c3a5085549e318e9064e34d1347a\" returns successfully"
Apr 13 20:10:16.888835 containerd[1455]: time="2026-04-13T20:10:16.888786957Z" level=error msg="Failed to destroy network for sandbox \"36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.889259 containerd[1455]: time="2026-04-13T20:10:16.889142466Z" level=error msg="encountered an error cleaning up failed sandbox \"36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.889259 containerd[1455]: time="2026-04-13T20:10:16.889188346Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-pvwlp,Uid:9c60b64c-c287-49a6-9e8f-117d46909ac0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.889479 kubelet[2546]: E0413 20:10:16.889369 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.889479 kubelet[2546]: E0413 20:10:16.889414 2546 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-pvwlp"
Apr 13 20:10:16.889479 kubelet[2546]: E0413 20:10:16.889431 2546 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-pvwlp"
Apr 13 20:10:16.890142 kubelet[2546]: E0413 20:10:16.889483 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-pvwlp_kube-system(9c60b64c-c287-49a6-9e8f-117d46909ac0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-pvwlp_kube-system(9c60b64c-c287-49a6-9e8f-117d46909ac0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-pvwlp" podUID="9c60b64c-c287-49a6-9e8f-117d46909ac0"
Apr 13 20:10:16.936139 containerd[1455]: time="2026-04-13T20:10:16.936075903Z" level=error msg="Failed to destroy network for sandbox \"c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.936559 containerd[1455]: time="2026-04-13T20:10:16.936447424Z" level=error msg="encountered an error cleaning up failed sandbox \"c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.936559 containerd[1455]: time="2026-04-13T20:10:16.936495864Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b9d58c8c6-fxrn8,Uid:a7e119dc-3238-4cbb-af6c-ff92f19fcb51,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.937033 kubelet[2546]: E0413 20:10:16.936948 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.937033 kubelet[2546]: E0413 20:10:16.936998 2546 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7b9d58c8c6-fxrn8"
Apr 13 20:10:16.937033 kubelet[2546]: E0413 20:10:16.937015 2546 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7b9d58c8c6-fxrn8"
Apr 13 20:10:16.937896 kubelet[2546]: E0413 20:10:16.937115 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b9d58c8c6-fxrn8_calico-system(a7e119dc-3238-4cbb-af6c-ff92f19fcb51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b9d58c8c6-fxrn8_calico-system(a7e119dc-3238-4cbb-af6c-ff92f19fcb51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7b9d58c8c6-fxrn8" podUID="a7e119dc-3238-4cbb-af6c-ff92f19fcb51"
Apr 13 20:10:16.963991 containerd[1455]: time="2026-04-13T20:10:16.963914793Z" level=error msg="Failed to destroy network for sandbox \"beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.965575 containerd[1455]: time="2026-04-13T20:10:16.965372034Z" level=error msg="encountered an error cleaning up failed sandbox \"beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.965575 containerd[1455]: time="2026-04-13T20:10:16.965448394Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47c8f584-8rsdp,Uid:1352fc40-7380-4f40-97a5-2db21f2695cc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.967228 kubelet[2546]: E0413 20:10:16.966575 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.967228 kubelet[2546]: E0413 20:10:16.966624 2546 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47c8f584-8rsdp"
Apr 13 20:10:16.967228 kubelet[2546]: E0413 20:10:16.966641 2546 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47c8f584-8rsdp"
Apr 13 20:10:16.967346 kubelet[2546]: E0413 20:10:16.966683 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c47c8f584-8rsdp_calico-system(1352fc40-7380-4f40-97a5-2db21f2695cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c47c8f584-8rsdp_calico-system(1352fc40-7380-4f40-97a5-2db21f2695cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c47c8f584-8rsdp" podUID="1352fc40-7380-4f40-97a5-2db21f2695cc"
Apr 13 20:10:16.970904 containerd[1455]: time="2026-04-13T20:10:16.969424446Z" level=error msg="Failed to destroy network for sandbox \"b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.970904 containerd[1455]: time="2026-04-13T20:10:16.969779585Z" level=error msg="encountered an error cleaning up failed sandbox \"b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.970904 containerd[1455]: time="2026-04-13T20:10:16.969826245Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-rwvfl,Uid:84d67424-50c0-442a-9169-b582a1cca729,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.971257 kubelet[2546]: E0413 20:10:16.971106 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.971257 kubelet[2546]: E0413 20:10:16.971145 2546 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-rwvfl"
Apr 13 20:10:16.971257 kubelet[2546]: E0413 20:10:16.971166 2546 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-rwvfl"
Apr 13 20:10:16.971344 kubelet[2546]: E0413 20:10:16.971204 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-rwvfl_kube-system(84d67424-50c0-442a-9169-b582a1cca729)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-rwvfl_kube-system(84d67424-50c0-442a-9169-b582a1cca729)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-rwvfl" podUID="84d67424-50c0-442a-9169-b582a1cca729"
Apr 13 20:10:16.977077 containerd[1455]: time="2026-04-13T20:10:16.975812228Z" level=error msg="Failed to destroy network for sandbox \"31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.977077 containerd[1455]: time="2026-04-13T20:10:16.976190557Z" level=error msg="encountered an error cleaning up failed sandbox \"31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.977077 containerd[1455]: time="2026-04-13T20:10:16.976230337Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-wnbgx,Uid:fae17250-1960-43f1-bcdd-744eb4b3f5bd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.977238 kubelet[2546]: E0413 20:10:16.976845 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 13 20:10:16.977238 kubelet[2546]: E0413 20:10:16.976932 2546 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-wnbgx"
Apr 13 20:10:16.977238 kubelet[2546]: E0413 20:10:16.976949 2546 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-wnbgx"
Apr 13 20:10:16.977374 kubelet[2546]: E0413 20:10:16.977012 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-wnbgx_calico-system(fae17250-1960-43f1-bcdd-744eb4b3f5bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-wnbgx_calico-system(fae17250-1960-43f1-bcdd-744eb4b3f5bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-wnbgx" podUID="fae17250-1960-43f1-bcdd-744eb4b3f5bd" Apr 13 20:10:16.981934 containerd[1455]: time="2026-04-13T20:10:16.981899850Z" level=error msg="Failed to destroy network for sandbox \"aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:16.982679 containerd[1455]: time="2026-04-13T20:10:16.982647950Z" level=error msg="encountered an error cleaning up failed sandbox \"aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:16.982829 containerd[1455]: time="2026-04-13T20:10:16.982799160Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b9d58c8c6-2hljs,Uid:bd97d4d8-3ec3-43d7-ba64-c8ae0cc8d162,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:16.983543 kubelet[2546]: E0413 20:10:16.983502 2546 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:10:16.983613 kubelet[2546]: E0413 
20:10:16.983554 2546 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7b9d58c8c6-2hljs" Apr 13 20:10:16.983613 kubelet[2546]: E0413 20:10:16.983570 2546 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7b9d58c8c6-2hljs" Apr 13 20:10:16.983666 kubelet[2546]: E0413 20:10:16.983604 2546 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b9d58c8c6-2hljs_calico-system(bd97d4d8-3ec3-43d7-ba64-c8ae0cc8d162)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b9d58c8c6-2hljs_calico-system(bd97d4d8-3ec3-43d7-ba64-c8ae0cc8d162)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7b9d58c8c6-2hljs" podUID="bd97d4d8-3ec3-43d7-ba64-c8ae0cc8d162" Apr 13 20:10:17.549335 kubelet[2546]: I0413 20:10:17.549198 2546 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" Apr 13 20:10:17.550861 containerd[1455]: time="2026-04-13T20:10:17.549889554Z" level=info msg="StopPodSandbox for \"b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02\"" Apr 13 20:10:17.550861 containerd[1455]: time="2026-04-13T20:10:17.550079584Z" level=info msg="Ensure that sandbox b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02 in task-service has been cleanup successfully" Apr 13 20:10:17.560012 kubelet[2546]: I0413 20:10:17.559989 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" Apr 13 20:10:17.561761 containerd[1455]: time="2026-04-13T20:10:17.560976479Z" level=info msg="StopPodSandbox for \"36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb\"" Apr 13 20:10:17.561761 containerd[1455]: time="2026-04-13T20:10:17.561149128Z" level=info msg="Ensure that sandbox 36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb in task-service has been cleanup successfully" Apr 13 20:10:17.565062 kubelet[2546]: I0413 20:10:17.564009 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" Apr 13 20:10:17.565145 containerd[1455]: time="2026-04-13T20:10:17.564434630Z" level=info msg="StopPodSandbox for \"31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0\"" Apr 13 20:10:17.565145 containerd[1455]: time="2026-04-13T20:10:17.564608900Z" level=info msg="Ensure that sandbox 31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0 in task-service has been cleanup successfully" Apr 13 20:10:17.570780 kubelet[2546]: I0413 20:10:17.570739 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" Apr 13 20:10:17.573042 containerd[1455]: 
time="2026-04-13T20:10:17.573012573Z" level=info msg="StopPodSandbox for \"c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da\"" Apr 13 20:10:17.575446 containerd[1455]: time="2026-04-13T20:10:17.574647194Z" level=info msg="Ensure that sandbox c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da in task-service has been cleanup successfully" Apr 13 20:10:17.581818 kubelet[2546]: I0413 20:10:17.581771 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" Apr 13 20:10:17.583478 kubelet[2546]: I0413 20:10:17.582979 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" Apr 13 20:10:17.584593 containerd[1455]: time="2026-04-13T20:10:17.584399407Z" level=info msg="StopPodSandbox for \"aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c\"" Apr 13 20:10:17.584593 containerd[1455]: time="2026-04-13T20:10:17.584561748Z" level=info msg="Ensure that sandbox aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c in task-service has been cleanup successfully" Apr 13 20:10:17.593987 containerd[1455]: time="2026-04-13T20:10:17.593174751Z" level=info msg="StopPodSandbox for \"5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555\"" Apr 13 20:10:17.593987 containerd[1455]: time="2026-04-13T20:10:17.593315761Z" level=info msg="Ensure that sandbox 5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555 in task-service has been cleanup successfully" Apr 13 20:10:17.596053 kubelet[2546]: I0413 20:10:17.595988 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" Apr 13 20:10:17.597675 containerd[1455]: time="2026-04-13T20:10:17.597650653Z" level=info msg="StopPodSandbox for 
\"beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4\"" Apr 13 20:10:17.597789 containerd[1455]: time="2026-04-13T20:10:17.597769863Z" level=info msg="Ensure that sandbox beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4 in task-service has been cleanup successfully" Apr 13 20:10:17.607178 kubelet[2546]: I0413 20:10:17.603173 2546 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-2bjd4" podStartSLOduration=2.158136784 podStartE2EDuration="13.603163425s" podCreationTimestamp="2026-04-13 20:10:04 +0000 UTC" firstStartedPulling="2026-04-13 20:10:05.100170092 +0000 UTC m=+16.780852142" lastFinishedPulling="2026-04-13 20:10:16.545196733 +0000 UTC m=+28.225878783" observedRunningTime="2026-04-13 20:10:17.576956645 +0000 UTC m=+29.257638735" watchObservedRunningTime="2026-04-13 20:10:17.603163425 +0000 UTC m=+29.283845475" Apr 13 20:10:17.611085 kubelet[2546]: I0413 20:10:17.611023 2546 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" Apr 13 20:10:17.612480 containerd[1455]: time="2026-04-13T20:10:17.612313459Z" level=info msg="StopPodSandbox for \"0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248\"" Apr 13 20:10:17.613233 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02-shm.mount: Deactivated successfully. Apr 13 20:10:17.613339 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb-shm.mount: Deactivated successfully. Apr 13 20:10:17.613434 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555-shm.mount: Deactivated successfully. 
Apr 13 20:10:17.614612 containerd[1455]: time="2026-04-13T20:10:17.613764850Z" level=info msg="Ensure that sandbox 0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248 in task-service has been cleanup successfully" Apr 13 20:10:17.945926 containerd[1455]: 2026-04-13 20:10:17.797 [INFO][3762] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" Apr 13 20:10:17.945926 containerd[1455]: 2026-04-13 20:10:17.797 [INFO][3762] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" iface="eth0" netns="/var/run/netns/cni-eb309a6b-aaba-9a54-033c-d95437fec8f1" Apr 13 20:10:17.945926 containerd[1455]: 2026-04-13 20:10:17.797 [INFO][3762] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" iface="eth0" netns="/var/run/netns/cni-eb309a6b-aaba-9a54-033c-d95437fec8f1" Apr 13 20:10:17.945926 containerd[1455]: 2026-04-13 20:10:17.798 [INFO][3762] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" iface="eth0" netns="/var/run/netns/cni-eb309a6b-aaba-9a54-033c-d95437fec8f1" Apr 13 20:10:17.945926 containerd[1455]: 2026-04-13 20:10:17.798 [INFO][3762] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" Apr 13 20:10:17.945926 containerd[1455]: 2026-04-13 20:10:17.798 [INFO][3762] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" Apr 13 20:10:17.945926 containerd[1455]: 2026-04-13 20:10:17.904 [INFO][3872] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" HandleID="k8s-pod-network.31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" Workload="172--239--193--192-k8s-goldmane--9f7667bb8--wnbgx-eth0" Apr 13 20:10:17.945926 containerd[1455]: 2026-04-13 20:10:17.904 [INFO][3872] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:17.945926 containerd[1455]: 2026-04-13 20:10:17.904 [INFO][3872] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:17.945926 containerd[1455]: 2026-04-13 20:10:17.921 [WARNING][3872] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" HandleID="k8s-pod-network.31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" Workload="172--239--193--192-k8s-goldmane--9f7667bb8--wnbgx-eth0" Apr 13 20:10:17.945926 containerd[1455]: 2026-04-13 20:10:17.921 [INFO][3872] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" HandleID="k8s-pod-network.31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" Workload="172--239--193--192-k8s-goldmane--9f7667bb8--wnbgx-eth0" Apr 13 20:10:17.945926 containerd[1455]: 2026-04-13 20:10:17.923 [INFO][3872] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:17.945926 containerd[1455]: 2026-04-13 20:10:17.935 [INFO][3762] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" Apr 13 20:10:17.951911 systemd[1]: run-netns-cni\x2deb309a6b\x2daaba\x2d9a54\x2d033c\x2dd95437fec8f1.mount: Deactivated successfully. 
Apr 13 20:10:17.954253 containerd[1455]: time="2026-04-13T20:10:17.953925045Z" level=info msg="TearDown network for sandbox \"31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0\" successfully" Apr 13 20:10:17.954253 containerd[1455]: time="2026-04-13T20:10:17.953956405Z" level=info msg="StopPodSandbox for \"31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0\" returns successfully" Apr 13 20:10:17.956915 containerd[1455]: time="2026-04-13T20:10:17.956774256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-wnbgx,Uid:fae17250-1960-43f1-bcdd-744eb4b3f5bd,Namespace:calico-system,Attempt:1,}" Apr 13 20:10:17.990164 containerd[1455]: 2026-04-13 20:10:17.842 [INFO][3753] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" Apr 13 20:10:17.990164 containerd[1455]: 2026-04-13 20:10:17.842 [INFO][3753] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" iface="eth0" netns="/var/run/netns/cni-460d8448-2176-2eff-7b4e-3912905d74d3" Apr 13 20:10:17.990164 containerd[1455]: 2026-04-13 20:10:17.843 [INFO][3753] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" iface="eth0" netns="/var/run/netns/cni-460d8448-2176-2eff-7b4e-3912905d74d3" Apr 13 20:10:17.990164 containerd[1455]: 2026-04-13 20:10:17.845 [INFO][3753] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" iface="eth0" netns="/var/run/netns/cni-460d8448-2176-2eff-7b4e-3912905d74d3" Apr 13 20:10:17.990164 containerd[1455]: 2026-04-13 20:10:17.845 [INFO][3753] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" Apr 13 20:10:17.990164 containerd[1455]: 2026-04-13 20:10:17.845 [INFO][3753] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" Apr 13 20:10:17.990164 containerd[1455]: 2026-04-13 20:10:17.959 [INFO][3885] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" HandleID="k8s-pod-network.36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" Workload="172--239--193--192-k8s-coredns--7d764666f9--pvwlp-eth0" Apr 13 20:10:17.990164 containerd[1455]: 2026-04-13 20:10:17.960 [INFO][3885] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:17.990164 containerd[1455]: 2026-04-13 20:10:17.961 [INFO][3885] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:17.990164 containerd[1455]: 2026-04-13 20:10:17.970 [WARNING][3885] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" HandleID="k8s-pod-network.36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" Workload="172--239--193--192-k8s-coredns--7d764666f9--pvwlp-eth0" Apr 13 20:10:17.990164 containerd[1455]: 2026-04-13 20:10:17.970 [INFO][3885] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" HandleID="k8s-pod-network.36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" Workload="172--239--193--192-k8s-coredns--7d764666f9--pvwlp-eth0" Apr 13 20:10:17.990164 containerd[1455]: 2026-04-13 20:10:17.972 [INFO][3885] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:17.990164 containerd[1455]: 2026-04-13 20:10:17.980 [INFO][3753] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" Apr 13 20:10:17.991948 containerd[1455]: time="2026-04-13T20:10:17.991919990Z" level=info msg="TearDown network for sandbox \"36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb\" successfully" Apr 13 20:10:17.995628 containerd[1455]: time="2026-04-13T20:10:17.993967141Z" level=info msg="StopPodSandbox for \"36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb\" returns successfully" Apr 13 20:10:17.999257 kubelet[2546]: E0413 20:10:17.997953 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 13 20:10:17.998064 systemd[1]: run-netns-cni\x2d460d8448\x2d2176\x2d2eff\x2d7b4e\x2d3912905d74d3.mount: Deactivated successfully. 
Apr 13 20:10:18.001304 containerd[1455]: time="2026-04-13T20:10:18.001278333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-pvwlp,Uid:9c60b64c-c287-49a6-9e8f-117d46909ac0,Namespace:kube-system,Attempt:1,}" Apr 13 20:10:18.021828 containerd[1455]: 2026-04-13 20:10:17.865 [INFO][3813] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" Apr 13 20:10:18.021828 containerd[1455]: 2026-04-13 20:10:17.865 [INFO][3813] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" iface="eth0" netns="/var/run/netns/cni-024ba707-3e6e-21b7-98a9-8f7252158d97" Apr 13 20:10:18.021828 containerd[1455]: 2026-04-13 20:10:17.866 [INFO][3813] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" iface="eth0" netns="/var/run/netns/cni-024ba707-3e6e-21b7-98a9-8f7252158d97" Apr 13 20:10:18.021828 containerd[1455]: 2026-04-13 20:10:17.870 [INFO][3813] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" iface="eth0" netns="/var/run/netns/cni-024ba707-3e6e-21b7-98a9-8f7252158d97" Apr 13 20:10:18.021828 containerd[1455]: 2026-04-13 20:10:17.870 [INFO][3813] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" Apr 13 20:10:18.021828 containerd[1455]: 2026-04-13 20:10:17.871 [INFO][3813] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" Apr 13 20:10:18.021828 containerd[1455]: 2026-04-13 20:10:17.974 [INFO][3893] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" HandleID="k8s-pod-network.5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" Workload="172--239--193--192-k8s-whisker--785cc5bd95--dmhqs-eth0" Apr 13 20:10:18.021828 containerd[1455]: 2026-04-13 20:10:17.974 [INFO][3893] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:18.021828 containerd[1455]: 2026-04-13 20:10:17.974 [INFO][3893] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:18.021828 containerd[1455]: 2026-04-13 20:10:17.982 [WARNING][3893] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" HandleID="k8s-pod-network.5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" Workload="172--239--193--192-k8s-whisker--785cc5bd95--dmhqs-eth0" Apr 13 20:10:18.021828 containerd[1455]: 2026-04-13 20:10:17.982 [INFO][3893] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" HandleID="k8s-pod-network.5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" Workload="172--239--193--192-k8s-whisker--785cc5bd95--dmhqs-eth0" Apr 13 20:10:18.021828 containerd[1455]: 2026-04-13 20:10:17.984 [INFO][3893] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:18.021828 containerd[1455]: 2026-04-13 20:10:18.007 [INFO][3813] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" Apr 13 20:10:18.022639 containerd[1455]: time="2026-04-13T20:10:18.022601733Z" level=info msg="TearDown network for sandbox \"5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555\" successfully" Apr 13 20:10:18.023811 containerd[1455]: time="2026-04-13T20:10:18.023754504Z" level=info msg="StopPodSandbox for \"5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555\" returns successfully" Apr 13 20:10:18.029148 containerd[1455]: 2026-04-13 20:10:17.801 [INFO][3826] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" Apr 13 20:10:18.029148 containerd[1455]: 2026-04-13 20:10:17.806 [INFO][3826] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" iface="eth0" netns="/var/run/netns/cni-03790329-4d2b-49f4-e54c-dd5a450daa5b" Apr 13 20:10:18.029148 containerd[1455]: 2026-04-13 20:10:17.818 [INFO][3826] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" iface="eth0" netns="/var/run/netns/cni-03790329-4d2b-49f4-e54c-dd5a450daa5b" Apr 13 20:10:18.029148 containerd[1455]: 2026-04-13 20:10:17.820 [INFO][3826] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" iface="eth0" netns="/var/run/netns/cni-03790329-4d2b-49f4-e54c-dd5a450daa5b" Apr 13 20:10:18.029148 containerd[1455]: 2026-04-13 20:10:17.820 [INFO][3826] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" Apr 13 20:10:18.029148 containerd[1455]: 2026-04-13 20:10:17.820 [INFO][3826] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" Apr 13 20:10:18.029148 containerd[1455]: 2026-04-13 20:10:17.963 [INFO][3878] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" HandleID="k8s-pod-network.beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" Workload="172--239--193--192-k8s-calico--kube--controllers--7c47c8f584--8rsdp-eth0" Apr 13 20:10:18.029148 containerd[1455]: 2026-04-13 20:10:17.964 [INFO][3878] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:18.029148 containerd[1455]: 2026-04-13 20:10:17.997 [INFO][3878] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:10:18.029148 containerd[1455]: 2026-04-13 20:10:18.011 [WARNING][3878] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" HandleID="k8s-pod-network.beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" Workload="172--239--193--192-k8s-calico--kube--controllers--7c47c8f584--8rsdp-eth0" Apr 13 20:10:18.029148 containerd[1455]: 2026-04-13 20:10:18.011 [INFO][3878] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" HandleID="k8s-pod-network.beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" Workload="172--239--193--192-k8s-calico--kube--controllers--7c47c8f584--8rsdp-eth0" Apr 13 20:10:18.029148 containerd[1455]: 2026-04-13 20:10:18.013 [INFO][3878] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:18.029148 containerd[1455]: 2026-04-13 20:10:18.019 [INFO][3826] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" Apr 13 20:10:18.030515 containerd[1455]: time="2026-04-13T20:10:18.029809816Z" level=info msg="TearDown network for sandbox \"beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4\" successfully" Apr 13 20:10:18.030515 containerd[1455]: time="2026-04-13T20:10:18.029830466Z" level=info msg="StopPodSandbox for \"beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4\" returns successfully" Apr 13 20:10:18.031332 containerd[1455]: time="2026-04-13T20:10:18.031292166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47c8f584-8rsdp,Uid:1352fc40-7380-4f40-97a5-2db21f2695cc,Namespace:calico-system,Attempt:1,}" Apr 13 20:10:18.032239 containerd[1455]: 2026-04-13 20:10:17.757 [INFO][3815] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" Apr 13 20:10:18.032239 containerd[1455]: 2026-04-13 20:10:17.757 [INFO][3815] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" iface="eth0" netns="/var/run/netns/cni-b9aba0aa-e133-ad48-9c2b-5ee99ef0e27b" Apr 13 20:10:18.032239 containerd[1455]: 2026-04-13 20:10:17.758 [INFO][3815] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" iface="eth0" netns="/var/run/netns/cni-b9aba0aa-e133-ad48-9c2b-5ee99ef0e27b" Apr 13 20:10:18.032239 containerd[1455]: 2026-04-13 20:10:17.759 [INFO][3815] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" iface="eth0" netns="/var/run/netns/cni-b9aba0aa-e133-ad48-9c2b-5ee99ef0e27b" Apr 13 20:10:18.032239 containerd[1455]: 2026-04-13 20:10:17.759 [INFO][3815] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" Apr 13 20:10:18.032239 containerd[1455]: 2026-04-13 20:10:17.759 [INFO][3815] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" Apr 13 20:10:18.032239 containerd[1455]: 2026-04-13 20:10:17.961 [INFO][3852] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" HandleID="k8s-pod-network.aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" Workload="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--2hljs-eth0" Apr 13 20:10:18.032239 containerd[1455]: 2026-04-13 20:10:17.961 [INFO][3852] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:18.032239 containerd[1455]: 2026-04-13 20:10:17.984 [INFO][3852] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:18.032239 containerd[1455]: 2026-04-13 20:10:17.994 [WARNING][3852] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" HandleID="k8s-pod-network.aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" Workload="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--2hljs-eth0" Apr 13 20:10:18.032239 containerd[1455]: 2026-04-13 20:10:17.995 [INFO][3852] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" HandleID="k8s-pod-network.aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" Workload="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--2hljs-eth0" Apr 13 20:10:18.032239 containerd[1455]: 2026-04-13 20:10:17.997 [INFO][3852] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:18.032239 containerd[1455]: 2026-04-13 20:10:18.020 [INFO][3815] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" Apr 13 20:10:18.033922 containerd[1455]: time="2026-04-13T20:10:18.033899028Z" level=info msg="TearDown network for sandbox \"aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c\" successfully" Apr 13 20:10:18.034061 containerd[1455]: time="2026-04-13T20:10:18.033999518Z" level=info msg="StopPodSandbox for \"aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c\" returns successfully" Apr 13 20:10:18.042441 containerd[1455]: time="2026-04-13T20:10:18.042406091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b9d58c8c6-2hljs,Uid:bd97d4d8-3ec3-43d7-ba64-c8ae0cc8d162,Namespace:calico-system,Attempt:1,}" Apr 13 20:10:18.055759 containerd[1455]: 2026-04-13 20:10:17.769 [INFO][3802] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" Apr 13 20:10:18.055759 containerd[1455]: 2026-04-13 20:10:17.769 [INFO][3802] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" iface="eth0" netns="/var/run/netns/cni-6c5908e8-2334-eb35-f880-0b07b6c8ec0e" Apr 13 20:10:18.055759 containerd[1455]: 2026-04-13 20:10:17.770 [INFO][3802] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" iface="eth0" netns="/var/run/netns/cni-6c5908e8-2334-eb35-f880-0b07b6c8ec0e" Apr 13 20:10:18.055759 containerd[1455]: 2026-04-13 20:10:17.773 [INFO][3802] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" iface="eth0" netns="/var/run/netns/cni-6c5908e8-2334-eb35-f880-0b07b6c8ec0e" Apr 13 20:10:18.055759 containerd[1455]: 2026-04-13 20:10:17.773 [INFO][3802] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" Apr 13 20:10:18.055759 containerd[1455]: 2026-04-13 20:10:17.773 [INFO][3802] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" Apr 13 20:10:18.055759 containerd[1455]: 2026-04-13 20:10:17.967 [INFO][3863] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" HandleID="k8s-pod-network.0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" Workload="172--239--193--192-k8s-csi--node--driver--sgfs5-eth0" Apr 13 20:10:18.055759 containerd[1455]: 2026-04-13 20:10:17.967 [INFO][3863] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:18.055759 containerd[1455]: 2026-04-13 20:10:18.013 [INFO][3863] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:18.055759 containerd[1455]: 2026-04-13 20:10:18.030 [WARNING][3863] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" HandleID="k8s-pod-network.0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" Workload="172--239--193--192-k8s-csi--node--driver--sgfs5-eth0" Apr 13 20:10:18.055759 containerd[1455]: 2026-04-13 20:10:18.030 [INFO][3863] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" HandleID="k8s-pod-network.0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" Workload="172--239--193--192-k8s-csi--node--driver--sgfs5-eth0" Apr 13 20:10:18.055759 containerd[1455]: 2026-04-13 20:10:18.038 [INFO][3863] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:18.055759 containerd[1455]: 2026-04-13 20:10:18.047 [INFO][3802] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" Apr 13 20:10:18.056577 containerd[1455]: time="2026-04-13T20:10:18.056443657Z" level=info msg="TearDown network for sandbox \"0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248\" successfully" Apr 13 20:10:18.056577 containerd[1455]: time="2026-04-13T20:10:18.056464037Z" level=info msg="StopPodSandbox for \"0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248\" returns successfully" Apr 13 20:10:18.058925 containerd[1455]: time="2026-04-13T20:10:18.058773429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sgfs5,Uid:ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9,Namespace:calico-system,Attempt:1,}" Apr 13 20:10:18.066125 containerd[1455]: 2026-04-13 20:10:17.748 [INFO][3737] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" Apr 13 20:10:18.066125 containerd[1455]: 2026-04-13 20:10:17.751 [INFO][3737] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" iface="eth0" netns="/var/run/netns/cni-ba0e348c-b3af-76f9-36e1-00c77f84a404" Apr 13 20:10:18.066125 containerd[1455]: 2026-04-13 20:10:17.752 [INFO][3737] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" iface="eth0" netns="/var/run/netns/cni-ba0e348c-b3af-76f9-36e1-00c77f84a404" Apr 13 20:10:18.066125 containerd[1455]: 2026-04-13 20:10:17.753 [INFO][3737] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" iface="eth0" netns="/var/run/netns/cni-ba0e348c-b3af-76f9-36e1-00c77f84a404" Apr 13 20:10:18.066125 containerd[1455]: 2026-04-13 20:10:17.753 [INFO][3737] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" Apr 13 20:10:18.066125 containerd[1455]: 2026-04-13 20:10:17.753 [INFO][3737] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" Apr 13 20:10:18.066125 containerd[1455]: 2026-04-13 20:10:17.969 [INFO][3848] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" HandleID="k8s-pod-network.b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" Workload="172--239--193--192-k8s-coredns--7d764666f9--rwvfl-eth0" Apr 13 20:10:18.066125 containerd[1455]: 2026-04-13 20:10:17.969 [INFO][3848] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:18.066125 containerd[1455]: 2026-04-13 20:10:18.038 [INFO][3848] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:18.066125 containerd[1455]: 2026-04-13 20:10:18.048 [WARNING][3848] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" HandleID="k8s-pod-network.b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" Workload="172--239--193--192-k8s-coredns--7d764666f9--rwvfl-eth0" Apr 13 20:10:18.066125 containerd[1455]: 2026-04-13 20:10:18.048 [INFO][3848] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" HandleID="k8s-pod-network.b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" Workload="172--239--193--192-k8s-coredns--7d764666f9--rwvfl-eth0" Apr 13 20:10:18.066125 containerd[1455]: 2026-04-13 20:10:18.049 [INFO][3848] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:18.066125 containerd[1455]: 2026-04-13 20:10:18.056 [INFO][3737] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" Apr 13 20:10:18.066721 containerd[1455]: time="2026-04-13T20:10:18.066436962Z" level=info msg="TearDown network for sandbox \"b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02\" successfully" Apr 13 20:10:18.066721 containerd[1455]: time="2026-04-13T20:10:18.066457741Z" level=info msg="StopPodSandbox for \"b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02\" returns successfully" Apr 13 20:10:18.068696 kubelet[2546]: E0413 20:10:18.068349 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 13 20:10:18.069487 containerd[1455]: time="2026-04-13T20:10:18.069315063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-rwvfl,Uid:84d67424-50c0-442a-9169-b582a1cca729,Namespace:kube-system,Attempt:1,}" Apr 13 20:10:18.085900 containerd[1455]: 2026-04-13 20:10:17.838 [INFO][3783] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" Apr 13 20:10:18.085900 containerd[1455]: 2026-04-13 20:10:17.838 [INFO][3783] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" iface="eth0" netns="/var/run/netns/cni-24de0f6c-f150-129b-e222-d7ef1e3d2267" Apr 13 20:10:18.085900 containerd[1455]: 2026-04-13 20:10:17.840 [INFO][3783] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" iface="eth0" netns="/var/run/netns/cni-24de0f6c-f150-129b-e222-d7ef1e3d2267" Apr 13 20:10:18.085900 containerd[1455]: 2026-04-13 20:10:17.844 [INFO][3783] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" iface="eth0" netns="/var/run/netns/cni-24de0f6c-f150-129b-e222-d7ef1e3d2267" Apr 13 20:10:18.085900 containerd[1455]: 2026-04-13 20:10:17.844 [INFO][3783] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" Apr 13 20:10:18.085900 containerd[1455]: 2026-04-13 20:10:17.844 [INFO][3783] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" Apr 13 20:10:18.085900 containerd[1455]: 2026-04-13 20:10:18.008 [INFO][3884] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" HandleID="k8s-pod-network.c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" Workload="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--fxrn8-eth0" Apr 13 20:10:18.085900 containerd[1455]: 2026-04-13 20:10:18.008 [INFO][3884] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 13 20:10:18.085900 containerd[1455]: 2026-04-13 20:10:18.049 [INFO][3884] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:18.085900 containerd[1455]: 2026-04-13 20:10:18.060 [WARNING][3884] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" HandleID="k8s-pod-network.c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" Workload="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--fxrn8-eth0" Apr 13 20:10:18.085900 containerd[1455]: 2026-04-13 20:10:18.060 [INFO][3884] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" HandleID="k8s-pod-network.c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" Workload="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--fxrn8-eth0" Apr 13 20:10:18.085900 containerd[1455]: 2026-04-13 20:10:18.062 [INFO][3884] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:18.085900 containerd[1455]: 2026-04-13 20:10:18.073 [INFO][3783] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" Apr 13 20:10:18.086598 containerd[1455]: time="2026-04-13T20:10:18.086459840Z" level=info msg="TearDown network for sandbox \"c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da\" successfully" Apr 13 20:10:18.086598 containerd[1455]: time="2026-04-13T20:10:18.086485810Z" level=info msg="StopPodSandbox for \"c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da\" returns successfully" Apr 13 20:10:18.088499 containerd[1455]: time="2026-04-13T20:10:18.088454621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b9d58c8c6-fxrn8,Uid:a7e119dc-3238-4cbb-af6c-ff92f19fcb51,Namespace:calico-system,Attempt:1,}" Apr 13 20:10:18.121609 kubelet[2546]: I0413 20:10:18.121086 2546 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/878df965-5680-45b6-bea3-fb9201081031-kube-api-access-4l45c\" (UniqueName: \"kubernetes.io/projected/878df965-5680-45b6-bea3-fb9201081031-kube-api-access-4l45c\") pod \"878df965-5680-45b6-bea3-fb9201081031\" (UID: \"878df965-5680-45b6-bea3-fb9201081031\") " Apr 13 20:10:18.121609 kubelet[2546]: I0413 20:10:18.121152 2546 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/878df965-5680-45b6-bea3-fb9201081031-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/878df965-5680-45b6-bea3-fb9201081031-whisker-backend-key-pair\") pod \"878df965-5680-45b6-bea3-fb9201081031\" (UID: \"878df965-5680-45b6-bea3-fb9201081031\") " Apr 13 20:10:18.121609 kubelet[2546]: I0413 20:10:18.121199 2546 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/878df965-5680-45b6-bea3-fb9201081031-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/878df965-5680-45b6-bea3-fb9201081031-whisker-ca-bundle\") pod \"878df965-5680-45b6-bea3-fb9201081031\" (UID: 
\"878df965-5680-45b6-bea3-fb9201081031\") " Apr 13 20:10:18.125662 kubelet[2546]: I0413 20:10:18.125299 2546 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/878df965-5680-45b6-bea3-fb9201081031-whisker-ca-bundle" pod "878df965-5680-45b6-bea3-fb9201081031" (UID: "878df965-5680-45b6-bea3-fb9201081031"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 20:10:18.126717 kubelet[2546]: I0413 20:10:18.126654 2546 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/878df965-5680-45b6-bea3-fb9201081031-nginx-config\" (UniqueName: \"kubernetes.io/configmap/878df965-5680-45b6-bea3-fb9201081031-nginx-config\") pod \"878df965-5680-45b6-bea3-fb9201081031\" (UID: \"878df965-5680-45b6-bea3-fb9201081031\") " Apr 13 20:10:18.126815 kubelet[2546]: I0413 20:10:18.126798 2546 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/878df965-5680-45b6-bea3-fb9201081031-whisker-ca-bundle\") on node \"172-239-193-192\" DevicePath \"\"" Apr 13 20:10:18.129864 kubelet[2546]: I0413 20:10:18.128677 2546 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/878df965-5680-45b6-bea3-fb9201081031-nginx-config" pod "878df965-5680-45b6-bea3-fb9201081031" (UID: "878df965-5680-45b6-bea3-fb9201081031"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 20:10:18.137487 kubelet[2546]: I0413 20:10:18.137448 2546 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/878df965-5680-45b6-bea3-fb9201081031-whisker-backend-key-pair" pod "878df965-5680-45b6-bea3-fb9201081031" (UID: "878df965-5680-45b6-bea3-fb9201081031"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 13 20:10:18.140958 kubelet[2546]: I0413 20:10:18.140413 2546 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/878df965-5680-45b6-bea3-fb9201081031-kube-api-access-4l45c" pod "878df965-5680-45b6-bea3-fb9201081031" (UID: "878df965-5680-45b6-bea3-fb9201081031"). InnerVolumeSpecName "kube-api-access-4l45c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 20:10:18.228971 kubelet[2546]: I0413 20:10:18.227784 2546 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/878df965-5680-45b6-bea3-fb9201081031-whisker-backend-key-pair\") on node \"172-239-193-192\" DevicePath \"\"" Apr 13 20:10:18.228971 kubelet[2546]: I0413 20:10:18.227818 2546 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/878df965-5680-45b6-bea3-fb9201081031-nginx-config\") on node \"172-239-193-192\" DevicePath \"\"" Apr 13 20:10:18.228971 kubelet[2546]: I0413 20:10:18.227829 2546 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4l45c\" (UniqueName: \"kubernetes.io/projected/878df965-5680-45b6-bea3-fb9201081031-kube-api-access-4l45c\") on node \"172-239-193-192\" DevicePath \"\"" Apr 13 20:10:18.347984 systemd-networkd[1379]: cali5393636397f: Link UP Apr 13 20:10:18.349030 systemd-networkd[1379]: cali5393636397f: Gained carrier Apr 13 20:10:18.399458 containerd[1455]: 2026-04-13 20:10:18.107 [ERROR][3924] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:10:18.399458 containerd[1455]: 2026-04-13 20:10:18.142 [INFO][3924] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--192-k8s-coredns--7d764666f9--pvwlp-eth0 coredns-7d764666f9- 
kube-system 9c60b64c-c287-49a6-9e8f-117d46909ac0 945 0 2026-04-13 20:09:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-193-192 coredns-7d764666f9-pvwlp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5393636397f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c" Namespace="kube-system" Pod="coredns-7d764666f9-pvwlp" WorkloadEndpoint="172--239--193--192-k8s-coredns--7d764666f9--pvwlp-" Apr 13 20:10:18.399458 containerd[1455]: 2026-04-13 20:10:18.143 [INFO][3924] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c" Namespace="kube-system" Pod="coredns-7d764666f9-pvwlp" WorkloadEndpoint="172--239--193--192-k8s-coredns--7d764666f9--pvwlp-eth0" Apr 13 20:10:18.399458 containerd[1455]: 2026-04-13 20:10:18.207 [INFO][3982] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c" HandleID="k8s-pod-network.5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c" Workload="172--239--193--192-k8s-coredns--7d764666f9--pvwlp-eth0" Apr 13 20:10:18.399458 containerd[1455]: 2026-04-13 20:10:18.222 [INFO][3982] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c" HandleID="k8s-pod-network.5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c" Workload="172--239--193--192-k8s-coredns--7d764666f9--pvwlp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f7c90), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-193-192", "pod":"coredns-7d764666f9-pvwlp", 
"timestamp":"2026-04-13 20:10:18.207403283 +0000 UTC"}, Hostname:"172-239-193-192", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00027d600)} Apr 13 20:10:18.399458 containerd[1455]: 2026-04-13 20:10:18.222 [INFO][3982] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:18.399458 containerd[1455]: 2026-04-13 20:10:18.222 [INFO][3982] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:18.399458 containerd[1455]: 2026-04-13 20:10:18.222 [INFO][3982] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-192' Apr 13 20:10:18.399458 containerd[1455]: 2026-04-13 20:10:18.225 [INFO][3982] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c" host="172-239-193-192" Apr 13 20:10:18.399458 containerd[1455]: 2026-04-13 20:10:18.244 [INFO][3982] ipam/ipam.go 409: Looking up existing affinities for host host="172-239-193-192" Apr 13 20:10:18.399458 containerd[1455]: 2026-04-13 20:10:18.254 [INFO][3982] ipam/ipam.go 526: Trying affinity for 192.168.122.128/26 host="172-239-193-192" Apr 13 20:10:18.399458 containerd[1455]: 2026-04-13 20:10:18.257 [INFO][3982] ipam/ipam.go 160: Attempting to load block cidr=192.168.122.128/26 host="172-239-193-192" Apr 13 20:10:18.399458 containerd[1455]: 2026-04-13 20:10:18.262 [INFO][3982] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.122.128/26 host="172-239-193-192" Apr 13 20:10:18.399458 containerd[1455]: 2026-04-13 20:10:18.262 [INFO][3982] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.122.128/26 handle="k8s-pod-network.5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c" host="172-239-193-192" Apr 13 20:10:18.399458 containerd[1455]: 
2026-04-13 20:10:18.264 [INFO][3982] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c Apr 13 20:10:18.399458 containerd[1455]: 2026-04-13 20:10:18.276 [INFO][3982] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.122.128/26 handle="k8s-pod-network.5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c" host="172-239-193-192" Apr 13 20:10:18.399458 containerd[1455]: 2026-04-13 20:10:18.283 [INFO][3982] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.122.129/26] block=192.168.122.128/26 handle="k8s-pod-network.5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c" host="172-239-193-192" Apr 13 20:10:18.399458 containerd[1455]: 2026-04-13 20:10:18.283 [INFO][3982] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.122.129/26] handle="k8s-pod-network.5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c" host="172-239-193-192" Apr 13 20:10:18.399458 containerd[1455]: 2026-04-13 20:10:18.283 [INFO][3982] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 20:10:18.399458 containerd[1455]: 2026-04-13 20:10:18.283 [INFO][3982] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.122.129/26] IPv6=[] ContainerID="5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c" HandleID="k8s-pod-network.5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c" Workload="172--239--193--192-k8s-coredns--7d764666f9--pvwlp-eth0" Apr 13 20:10:18.401006 containerd[1455]: 2026-04-13 20:10:18.304 [INFO][3924] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c" Namespace="kube-system" Pod="coredns-7d764666f9-pvwlp" WorkloadEndpoint="172--239--193--192-k8s-coredns--7d764666f9--pvwlp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-coredns--7d764666f9--pvwlp-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"9c60b64c-c287-49a6-9e8f-117d46909ac0", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"", Pod:"coredns-7d764666f9-pvwlp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5393636397f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:18.401006 containerd[1455]: 2026-04-13 20:10:18.304 [INFO][3924] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.129/32] ContainerID="5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c" Namespace="kube-system" Pod="coredns-7d764666f9-pvwlp" WorkloadEndpoint="172--239--193--192-k8s-coredns--7d764666f9--pvwlp-eth0" Apr 13 20:10:18.401006 containerd[1455]: 2026-04-13 20:10:18.304 [INFO][3924] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5393636397f ContainerID="5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c" Namespace="kube-system" Pod="coredns-7d764666f9-pvwlp" WorkloadEndpoint="172--239--193--192-k8s-coredns--7d764666f9--pvwlp-eth0" Apr 13 20:10:18.401006 containerd[1455]: 2026-04-13 20:10:18.355 [INFO][3924] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c" Namespace="kube-system" Pod="coredns-7d764666f9-pvwlp" WorkloadEndpoint="172--239--193--192-k8s-coredns--7d764666f9--pvwlp-eth0" Apr 13 20:10:18.401006 containerd[1455]: 2026-04-13 20:10:18.356 [INFO][3924] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c" Namespace="kube-system" Pod="coredns-7d764666f9-pvwlp" WorkloadEndpoint="172--239--193--192-k8s-coredns--7d764666f9--pvwlp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-coredns--7d764666f9--pvwlp-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"9c60b64c-c287-49a6-9e8f-117d46909ac0", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c", Pod:"coredns-7d764666f9-pvwlp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5393636397f", MAC:"b6:64:ff:f7:b4:d5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:18.401006 containerd[1455]: 2026-04-13 20:10:18.382 [INFO][3924] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c" Namespace="kube-system" Pod="coredns-7d764666f9-pvwlp" WorkloadEndpoint="172--239--193--192-k8s-coredns--7d764666f9--pvwlp-eth0" Apr 13 20:10:18.426446 systemd-networkd[1379]: calie1e3360de88: Link UP Apr 13 20:10:18.460778 systemd[1]: Removed slice kubepods-besteffort-pod878df965_5680_45b6_bea3_fb9201081031.slice - libcontainer container kubepods-besteffort-pod878df965_5680_45b6_bea3_fb9201081031.slice. 
Apr 13 20:10:18.470092 systemd-networkd[1379]: calie1e3360de88: Gained carrier Apr 13 20:10:18.535812 containerd[1455]: 2026-04-13 20:10:18.094 [ERROR][3911] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:10:18.535812 containerd[1455]: 2026-04-13 20:10:18.111 [INFO][3911] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--192-k8s-goldmane--9f7667bb8--wnbgx-eth0 goldmane-9f7667bb8- calico-system fae17250-1960-43f1-bcdd-744eb4b3f5bd 942 0 2026-04-13 20:10:04 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-239-193-192 goldmane-9f7667bb8-wnbgx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie1e3360de88 [] [] }} ContainerID="77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a" Namespace="calico-system" Pod="goldmane-9f7667bb8-wnbgx" WorkloadEndpoint="172--239--193--192-k8s-goldmane--9f7667bb8--wnbgx-" Apr 13 20:10:18.535812 containerd[1455]: 2026-04-13 20:10:18.111 [INFO][3911] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a" Namespace="calico-system" Pod="goldmane-9f7667bb8-wnbgx" WorkloadEndpoint="172--239--193--192-k8s-goldmane--9f7667bb8--wnbgx-eth0" Apr 13 20:10:18.535812 containerd[1455]: 2026-04-13 20:10:18.280 [INFO][3969] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a" HandleID="k8s-pod-network.77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a" Workload="172--239--193--192-k8s-goldmane--9f7667bb8--wnbgx-eth0" Apr 13 
20:10:18.535812 containerd[1455]: 2026-04-13 20:10:18.315 [INFO][3969] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a" HandleID="k8s-pod-network.77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a" Workload="172--239--193--192-k8s-goldmane--9f7667bb8--wnbgx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f8080), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-193-192", "pod":"goldmane-9f7667bb8-wnbgx", "timestamp":"2026-04-13 20:10:18.280357005 +0000 UTC"}, Hostname:"172-239-193-192", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00025f340)} Apr 13 20:10:18.535812 containerd[1455]: 2026-04-13 20:10:18.315 [INFO][3969] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:18.535812 containerd[1455]: 2026-04-13 20:10:18.322 [INFO][3969] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:10:18.535812 containerd[1455]: 2026-04-13 20:10:18.322 [INFO][3969] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-192' Apr 13 20:10:18.535812 containerd[1455]: 2026-04-13 20:10:18.343 [INFO][3969] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a" host="172-239-193-192" Apr 13 20:10:18.535812 containerd[1455]: 2026-04-13 20:10:18.364 [INFO][3969] ipam/ipam.go 409: Looking up existing affinities for host host="172-239-193-192" Apr 13 20:10:18.535812 containerd[1455]: 2026-04-13 20:10:18.373 [INFO][3969] ipam/ipam.go 526: Trying affinity for 192.168.122.128/26 host="172-239-193-192" Apr 13 20:10:18.535812 containerd[1455]: 2026-04-13 20:10:18.380 [INFO][3969] ipam/ipam.go 160: Attempting to load block cidr=192.168.122.128/26 host="172-239-193-192" Apr 13 20:10:18.535812 containerd[1455]: 2026-04-13 20:10:18.388 [INFO][3969] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.122.128/26 host="172-239-193-192" Apr 13 20:10:18.535812 containerd[1455]: 2026-04-13 20:10:18.389 [INFO][3969] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.122.128/26 handle="k8s-pod-network.77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a" host="172-239-193-192" Apr 13 20:10:18.535812 containerd[1455]: 2026-04-13 20:10:18.397 [INFO][3969] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a Apr 13 20:10:18.535812 containerd[1455]: 2026-04-13 20:10:18.404 [INFO][3969] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.122.128/26 handle="k8s-pod-network.77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a" host="172-239-193-192" Apr 13 20:10:18.535812 containerd[1455]: 2026-04-13 20:10:18.412 [INFO][3969] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.122.130/26] block=192.168.122.128/26 
handle="k8s-pod-network.77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a" host="172-239-193-192" Apr 13 20:10:18.535812 containerd[1455]: 2026-04-13 20:10:18.412 [INFO][3969] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.122.130/26] handle="k8s-pod-network.77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a" host="172-239-193-192" Apr 13 20:10:18.535812 containerd[1455]: 2026-04-13 20:10:18.413 [INFO][3969] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:18.535812 containerd[1455]: 2026-04-13 20:10:18.413 [INFO][3969] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.122.130/26] IPv6=[] ContainerID="77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a" HandleID="k8s-pod-network.77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a" Workload="172--239--193--192-k8s-goldmane--9f7667bb8--wnbgx-eth0" Apr 13 20:10:18.536326 containerd[1455]: 2026-04-13 20:10:18.422 [INFO][3911] cni-plugin/k8s.go 418: Populated endpoint ContainerID="77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a" Namespace="calico-system" Pod="goldmane-9f7667bb8-wnbgx" WorkloadEndpoint="172--239--193--192-k8s-goldmane--9f7667bb8--wnbgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-goldmane--9f7667bb8--wnbgx-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"fae17250-1960-43f1-bcdd-744eb4b3f5bd", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"", Pod:"goldmane-9f7667bb8-wnbgx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.122.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie1e3360de88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:18.536326 containerd[1455]: 2026-04-13 20:10:18.422 [INFO][3911] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.130/32] ContainerID="77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a" Namespace="calico-system" Pod="goldmane-9f7667bb8-wnbgx" WorkloadEndpoint="172--239--193--192-k8s-goldmane--9f7667bb8--wnbgx-eth0" Apr 13 20:10:18.536326 containerd[1455]: 2026-04-13 20:10:18.422 [INFO][3911] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1e3360de88 ContainerID="77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a" Namespace="calico-system" Pod="goldmane-9f7667bb8-wnbgx" WorkloadEndpoint="172--239--193--192-k8s-goldmane--9f7667bb8--wnbgx-eth0" Apr 13 20:10:18.536326 containerd[1455]: 2026-04-13 20:10:18.493 [INFO][3911] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a" Namespace="calico-system" Pod="goldmane-9f7667bb8-wnbgx" WorkloadEndpoint="172--239--193--192-k8s-goldmane--9f7667bb8--wnbgx-eth0" Apr 13 20:10:18.536326 containerd[1455]: 2026-04-13 20:10:18.497 [INFO][3911] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a" 
Namespace="calico-system" Pod="goldmane-9f7667bb8-wnbgx" WorkloadEndpoint="172--239--193--192-k8s-goldmane--9f7667bb8--wnbgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-goldmane--9f7667bb8--wnbgx-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"fae17250-1960-43f1-bcdd-744eb4b3f5bd", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a", Pod:"goldmane-9f7667bb8-wnbgx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.122.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie1e3360de88", MAC:"0e:5f:b2:0d:99:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:18.536326 containerd[1455]: 2026-04-13 20:10:18.523 [INFO][3911] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a" Namespace="calico-system" Pod="goldmane-9f7667bb8-wnbgx" WorkloadEndpoint="172--239--193--192-k8s-goldmane--9f7667bb8--wnbgx-eth0" Apr 13 20:10:18.539174 
containerd[1455]: time="2026-04-13T20:10:18.537161786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:18.539174 containerd[1455]: time="2026-04-13T20:10:18.537230576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:18.539174 containerd[1455]: time="2026-04-13T20:10:18.537244796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:18.539174 containerd[1455]: time="2026-04-13T20:10:18.537360116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:18.555666 systemd-networkd[1379]: cali9d6eec9c2ac: Link UP Apr 13 20:10:18.556943 systemd-networkd[1379]: cali9d6eec9c2ac: Gained carrier Apr 13 20:10:18.599901 containerd[1455]: 2026-04-13 20:10:18.157 [ERROR][3935] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:10:18.599901 containerd[1455]: 2026-04-13 20:10:18.175 [INFO][3935] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--192-k8s-calico--kube--controllers--7c47c8f584--8rsdp-eth0 calico-kube-controllers-7c47c8f584- calico-system 1352fc40-7380-4f40-97a5-2db21f2695cc 943 0 2026-04-13 20:10:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c47c8f584 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-239-193-192 calico-kube-controllers-7c47c8f584-8rsdp eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] cali9d6eec9c2ac [] [] }} ContainerID="562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75" Namespace="calico-system" Pod="calico-kube-controllers-7c47c8f584-8rsdp" WorkloadEndpoint="172--239--193--192-k8s-calico--kube--controllers--7c47c8f584--8rsdp-" Apr 13 20:10:18.599901 containerd[1455]: 2026-04-13 20:10:18.175 [INFO][3935] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75" Namespace="calico-system" Pod="calico-kube-controllers-7c47c8f584-8rsdp" WorkloadEndpoint="172--239--193--192-k8s-calico--kube--controllers--7c47c8f584--8rsdp-eth0" Apr 13 20:10:18.599901 containerd[1455]: 2026-04-13 20:10:18.303 [INFO][4014] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75" HandleID="k8s-pod-network.562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75" Workload="172--239--193--192-k8s-calico--kube--controllers--7c47c8f584--8rsdp-eth0" Apr 13 20:10:18.599901 containerd[1455]: 2026-04-13 20:10:18.366 [INFO][4014] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75" HandleID="k8s-pod-network.562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75" Workload="172--239--193--192-k8s-calico--kube--controllers--7c47c8f584--8rsdp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e690), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-193-192", "pod":"calico-kube-controllers-7c47c8f584-8rsdp", "timestamp":"2026-04-13 20:10:18.303860435 +0000 UTC"}, Hostname:"172-239-193-192", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", 
Namespace:(*v1.Namespace)(0xc000439b80)} Apr 13 20:10:18.599901 containerd[1455]: 2026-04-13 20:10:18.366 [INFO][4014] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:18.599901 containerd[1455]: 2026-04-13 20:10:18.415 [INFO][4014] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:18.599901 containerd[1455]: 2026-04-13 20:10:18.415 [INFO][4014] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-192' Apr 13 20:10:18.599901 containerd[1455]: 2026-04-13 20:10:18.432 [INFO][4014] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75" host="172-239-193-192" Apr 13 20:10:18.599901 containerd[1455]: 2026-04-13 20:10:18.478 [INFO][4014] ipam/ipam.go 409: Looking up existing affinities for host host="172-239-193-192" Apr 13 20:10:18.599901 containerd[1455]: 2026-04-13 20:10:18.492 [INFO][4014] ipam/ipam.go 526: Trying affinity for 192.168.122.128/26 host="172-239-193-192" Apr 13 20:10:18.599901 containerd[1455]: 2026-04-13 20:10:18.495 [INFO][4014] ipam/ipam.go 160: Attempting to load block cidr=192.168.122.128/26 host="172-239-193-192" Apr 13 20:10:18.599901 containerd[1455]: 2026-04-13 20:10:18.503 [INFO][4014] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.122.128/26 host="172-239-193-192" Apr 13 20:10:18.599901 containerd[1455]: 2026-04-13 20:10:18.504 [INFO][4014] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.122.128/26 handle="k8s-pod-network.562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75" host="172-239-193-192" Apr 13 20:10:18.599901 containerd[1455]: 2026-04-13 20:10:18.511 [INFO][4014] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75 Apr 13 20:10:18.599901 containerd[1455]: 2026-04-13 20:10:18.530 [INFO][4014] ipam/ipam.go 1272: Writing block in order to 
claim IPs block=192.168.122.128/26 handle="k8s-pod-network.562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75" host="172-239-193-192" Apr 13 20:10:18.599901 containerd[1455]: 2026-04-13 20:10:18.537 [INFO][4014] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.122.131/26] block=192.168.122.128/26 handle="k8s-pod-network.562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75" host="172-239-193-192" Apr 13 20:10:18.599901 containerd[1455]: 2026-04-13 20:10:18.537 [INFO][4014] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.122.131/26] handle="k8s-pod-network.562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75" host="172-239-193-192" Apr 13 20:10:18.599901 containerd[1455]: 2026-04-13 20:10:18.537 [INFO][4014] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:18.599901 containerd[1455]: 2026-04-13 20:10:18.537 [INFO][4014] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.122.131/26] IPv6=[] ContainerID="562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75" HandleID="k8s-pod-network.562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75" Workload="172--239--193--192-k8s-calico--kube--controllers--7c47c8f584--8rsdp-eth0" Apr 13 20:10:18.601599 containerd[1455]: 2026-04-13 20:10:18.548 [INFO][3935] cni-plugin/k8s.go 418: Populated endpoint ContainerID="562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75" Namespace="calico-system" Pod="calico-kube-controllers-7c47c8f584-8rsdp" WorkloadEndpoint="172--239--193--192-k8s-calico--kube--controllers--7c47c8f584--8rsdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-calico--kube--controllers--7c47c8f584--8rsdp-eth0", GenerateName:"calico-kube-controllers-7c47c8f584-", Namespace:"calico-system", SelfLink:"", UID:"1352fc40-7380-4f40-97a5-2db21f2695cc", ResourceVersion:"943", 
Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c47c8f584", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"", Pod:"calico-kube-controllers-7c47c8f584-8rsdp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9d6eec9c2ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:18.601599 containerd[1455]: 2026-04-13 20:10:18.548 [INFO][3935] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.131/32] ContainerID="562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75" Namespace="calico-system" Pod="calico-kube-controllers-7c47c8f584-8rsdp" WorkloadEndpoint="172--239--193--192-k8s-calico--kube--controllers--7c47c8f584--8rsdp-eth0" Apr 13 20:10:18.601599 containerd[1455]: 2026-04-13 20:10:18.548 [INFO][3935] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d6eec9c2ac ContainerID="562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75" Namespace="calico-system" Pod="calico-kube-controllers-7c47c8f584-8rsdp" WorkloadEndpoint="172--239--193--192-k8s-calico--kube--controllers--7c47c8f584--8rsdp-eth0" Apr 13 20:10:18.601599 containerd[1455]: 
2026-04-13 20:10:18.557 [INFO][3935] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75" Namespace="calico-system" Pod="calico-kube-controllers-7c47c8f584-8rsdp" WorkloadEndpoint="172--239--193--192-k8s-calico--kube--controllers--7c47c8f584--8rsdp-eth0" Apr 13 20:10:18.601599 containerd[1455]: 2026-04-13 20:10:18.564 [INFO][3935] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75" Namespace="calico-system" Pod="calico-kube-controllers-7c47c8f584-8rsdp" WorkloadEndpoint="172--239--193--192-k8s-calico--kube--controllers--7c47c8f584--8rsdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-calico--kube--controllers--7c47c8f584--8rsdp-eth0", GenerateName:"calico-kube-controllers-7c47c8f584-", Namespace:"calico-system", SelfLink:"", UID:"1352fc40-7380-4f40-97a5-2db21f2695cc", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c47c8f584", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75", Pod:"calico-kube-controllers-7c47c8f584-8rsdp", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9d6eec9c2ac", MAC:"36:1f:53:7f:38:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:18.601599 containerd[1455]: 2026-04-13 20:10:18.589 [INFO][3935] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75" Namespace="calico-system" Pod="calico-kube-controllers-7c47c8f584-8rsdp" WorkloadEndpoint="172--239--193--192-k8s-calico--kube--controllers--7c47c8f584--8rsdp-eth0" Apr 13 20:10:18.616676 systemd[1]: run-netns-cni\x2db9aba0aa\x2de133\x2dad48\x2d9c2b\x2d5ee99ef0e27b.mount: Deactivated successfully. Apr 13 20:10:18.617148 systemd[1]: run-netns-cni\x2d03790329\x2d4d2b\x2d49f4\x2de54c\x2ddd5a450daa5b.mount: Deactivated successfully. Apr 13 20:10:18.617251 systemd[1]: run-netns-cni\x2d24de0f6c\x2df150\x2d129b\x2de222\x2dd7ef1e3d2267.mount: Deactivated successfully. Apr 13 20:10:18.617342 systemd[1]: run-netns-cni\x2dba0e348c\x2db3af\x2d76f9\x2d36e1\x2d00c77f84a404.mount: Deactivated successfully. Apr 13 20:10:18.617434 systemd[1]: run-netns-cni\x2d024ba707\x2d3e6e\x2d21b7\x2d98a9\x2d8f7252158d97.mount: Deactivated successfully. Apr 13 20:10:18.617530 systemd[1]: run-netns-cni\x2d6c5908e8\x2d2334\x2deb35\x2df880\x2d0b07b6c8ec0e.mount: Deactivated successfully. Apr 13 20:10:18.617631 systemd[1]: var-lib-kubelet-pods-878df965\x2d5680\x2d45b6\x2dbea3\x2dfb9201081031-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4l45c.mount: Deactivated successfully. Apr 13 20:10:18.617729 systemd[1]: var-lib-kubelet-pods-878df965\x2d5680\x2d45b6\x2dbea3\x2dfb9201081031-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 13 20:10:18.635060 systemd[1]: Started cri-containerd-5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c.scope - libcontainer container 5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c. Apr 13 20:10:18.690398 systemd-networkd[1379]: calif47d518ecdd: Link UP Apr 13 20:10:18.692668 systemd-networkd[1379]: calif47d518ecdd: Gained carrier Apr 13 20:10:18.712832 containerd[1455]: time="2026-04-13T20:10:18.712524922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:18.712832 containerd[1455]: time="2026-04-13T20:10:18.712594242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:18.712832 containerd[1455]: time="2026-04-13T20:10:18.712607322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:18.712832 containerd[1455]: time="2026-04-13T20:10:18.712691152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:18.751212 containerd[1455]: 2026-04-13 20:10:18.241 [ERROR][3945] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:10:18.751212 containerd[1455]: 2026-04-13 20:10:18.295 [INFO][3945] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--2hljs-eth0 calico-apiserver-7b9d58c8c6- calico-system bd97d4d8-3ec3-43d7-ba64-c8ae0cc8d162 940 0 2026-04-13 20:10:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b9d58c8c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-193-192 calico-apiserver-7b9d58c8c6-2hljs eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calif47d518ecdd [] [] }} ContainerID="ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92" Namespace="calico-system" Pod="calico-apiserver-7b9d58c8c6-2hljs" WorkloadEndpoint="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--2hljs-" Apr 13 20:10:18.751212 containerd[1455]: 2026-04-13 20:10:18.295 [INFO][3945] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92" Namespace="calico-system" Pod="calico-apiserver-7b9d58c8c6-2hljs" WorkloadEndpoint="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--2hljs-eth0" Apr 13 20:10:18.751212 containerd[1455]: 2026-04-13 20:10:18.473 [INFO][4047] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92" 
HandleID="k8s-pod-network.ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92" Workload="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--2hljs-eth0" Apr 13 20:10:18.751212 containerd[1455]: 2026-04-13 20:10:18.501 [INFO][4047] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92" HandleID="k8s-pod-network.ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92" Workload="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--2hljs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000f9280), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-193-192", "pod":"calico-apiserver-7b9d58c8c6-2hljs", "timestamp":"2026-04-13 20:10:18.473163379 +0000 UTC"}, Hostname:"172-239-193-192", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004de160)} Apr 13 20:10:18.751212 containerd[1455]: 2026-04-13 20:10:18.502 [INFO][4047] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:18.751212 containerd[1455]: 2026-04-13 20:10:18.538 [INFO][4047] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:10:18.751212 containerd[1455]: 2026-04-13 20:10:18.538 [INFO][4047] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-192' Apr 13 20:10:18.751212 containerd[1455]: 2026-04-13 20:10:18.544 [INFO][4047] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92" host="172-239-193-192" Apr 13 20:10:18.751212 containerd[1455]: 2026-04-13 20:10:18.567 [INFO][4047] ipam/ipam.go 409: Looking up existing affinities for host host="172-239-193-192" Apr 13 20:10:18.751212 containerd[1455]: 2026-04-13 20:10:18.595 [INFO][4047] ipam/ipam.go 526: Trying affinity for 192.168.122.128/26 host="172-239-193-192" Apr 13 20:10:18.751212 containerd[1455]: 2026-04-13 20:10:18.601 [INFO][4047] ipam/ipam.go 160: Attempting to load block cidr=192.168.122.128/26 host="172-239-193-192" Apr 13 20:10:18.751212 containerd[1455]: 2026-04-13 20:10:18.610 [INFO][4047] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.122.128/26 host="172-239-193-192" Apr 13 20:10:18.751212 containerd[1455]: 2026-04-13 20:10:18.610 [INFO][4047] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.122.128/26 handle="k8s-pod-network.ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92" host="172-239-193-192" Apr 13 20:10:18.751212 containerd[1455]: 2026-04-13 20:10:18.630 [INFO][4047] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92 Apr 13 20:10:18.751212 containerd[1455]: 2026-04-13 20:10:18.636 [INFO][4047] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.122.128/26 handle="k8s-pod-network.ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92" host="172-239-193-192" Apr 13 20:10:18.751212 containerd[1455]: 2026-04-13 20:10:18.666 [INFO][4047] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.122.132/26] block=192.168.122.128/26 
handle="k8s-pod-network.ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92" host="172-239-193-192" Apr 13 20:10:18.751212 containerd[1455]: 2026-04-13 20:10:18.668 [INFO][4047] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.122.132/26] handle="k8s-pod-network.ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92" host="172-239-193-192" Apr 13 20:10:18.751212 containerd[1455]: 2026-04-13 20:10:18.669 [INFO][4047] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:18.751212 containerd[1455]: 2026-04-13 20:10:18.669 [INFO][4047] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.122.132/26] IPv6=[] ContainerID="ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92" HandleID="k8s-pod-network.ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92" Workload="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--2hljs-eth0" Apr 13 20:10:18.751714 containerd[1455]: 2026-04-13 20:10:18.682 [INFO][3945] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92" Namespace="calico-system" Pod="calico-apiserver-7b9d58c8c6-2hljs" WorkloadEndpoint="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--2hljs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--2hljs-eth0", GenerateName:"calico-apiserver-7b9d58c8c6-", Namespace:"calico-system", SelfLink:"", UID:"bd97d4d8-3ec3-43d7-ba64-c8ae0cc8d162", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b9d58c8c6", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"", Pod:"calico-apiserver-7b9d58c8c6-2hljs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif47d518ecdd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:18.751714 containerd[1455]: 2026-04-13 20:10:18.682 [INFO][3945] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.132/32] ContainerID="ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92" Namespace="calico-system" Pod="calico-apiserver-7b9d58c8c6-2hljs" WorkloadEndpoint="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--2hljs-eth0" Apr 13 20:10:18.751714 containerd[1455]: 2026-04-13 20:10:18.682 [INFO][3945] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif47d518ecdd ContainerID="ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92" Namespace="calico-system" Pod="calico-apiserver-7b9d58c8c6-2hljs" WorkloadEndpoint="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--2hljs-eth0" Apr 13 20:10:18.751714 containerd[1455]: 2026-04-13 20:10:18.695 [INFO][3945] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92" Namespace="calico-system" Pod="calico-apiserver-7b9d58c8c6-2hljs" WorkloadEndpoint="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--2hljs-eth0" Apr 13 20:10:18.751714 containerd[1455]: 2026-04-13 20:10:18.698 [INFO][3945] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92" Namespace="calico-system" Pod="calico-apiserver-7b9d58c8c6-2hljs" WorkloadEndpoint="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--2hljs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--2hljs-eth0", GenerateName:"calico-apiserver-7b9d58c8c6-", Namespace:"calico-system", SelfLink:"", UID:"bd97d4d8-3ec3-43d7-ba64-c8ae0cc8d162", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b9d58c8c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92", Pod:"calico-apiserver-7b9d58c8c6-2hljs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif47d518ecdd", MAC:"de:de:12:dd:36:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:18.751714 containerd[1455]: 2026-04-13 20:10:18.742 [INFO][3945] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92" Namespace="calico-system" Pod="calico-apiserver-7b9d58c8c6-2hljs" WorkloadEndpoint="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--2hljs-eth0" Apr 13 20:10:18.767012 containerd[1455]: time="2026-04-13T20:10:18.763498525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:18.767012 containerd[1455]: time="2026-04-13T20:10:18.763676575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:18.767012 containerd[1455]: time="2026-04-13T20:10:18.763687885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:18.767585 containerd[1455]: time="2026-04-13T20:10:18.767331226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:18.771010 systemd[1]: Started cri-containerd-77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a.scope - libcontainer container 77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a. Apr 13 20:10:18.794377 systemd[1]: Created slice kubepods-besteffort-podb3bdefa9_c8ec_44ce_adcf_e2699994c10f.slice - libcontainer container kubepods-besteffort-podb3bdefa9_c8ec_44ce_adcf_e2699994c10f.slice. Apr 13 20:10:18.810929 containerd[1455]: time="2026-04-13T20:10:18.810031465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:18.811052 containerd[1455]: time="2026-04-13T20:10:18.810090825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:18.811052 containerd[1455]: time="2026-04-13T20:10:18.810102055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:18.811052 containerd[1455]: time="2026-04-13T20:10:18.810181005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:18.837349 kubelet[2546]: I0413 20:10:18.837300 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b3bdefa9-c8ec-44ce-adcf-e2699994c10f-whisker-backend-key-pair\") pod \"whisker-6b94c657f6-hprcx\" (UID: \"b3bdefa9-c8ec-44ce-adcf-e2699994c10f\") " pod="calico-system/whisker-6b94c657f6-hprcx" Apr 13 20:10:18.837725 kubelet[2546]: I0413 20:10:18.837386 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mkb8\" (UniqueName: \"kubernetes.io/projected/b3bdefa9-c8ec-44ce-adcf-e2699994c10f-kube-api-access-2mkb8\") pod \"whisker-6b94c657f6-hprcx\" (UID: \"b3bdefa9-c8ec-44ce-adcf-e2699994c10f\") " pod="calico-system/whisker-6b94c657f6-hprcx" Apr 13 20:10:18.837725 kubelet[2546]: I0413 20:10:18.837437 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3bdefa9-c8ec-44ce-adcf-e2699994c10f-whisker-ca-bundle\") pod \"whisker-6b94c657f6-hprcx\" (UID: \"b3bdefa9-c8ec-44ce-adcf-e2699994c10f\") " pod="calico-system/whisker-6b94c657f6-hprcx" Apr 13 20:10:18.837725 kubelet[2546]: I0413 20:10:18.837455 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/b3bdefa9-c8ec-44ce-adcf-e2699994c10f-nginx-config\") pod 
\"whisker-6b94c657f6-hprcx\" (UID: \"b3bdefa9-c8ec-44ce-adcf-e2699994c10f\") " pod="calico-system/whisker-6b94c657f6-hprcx" Apr 13 20:10:18.850225 systemd[1]: Started cri-containerd-ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92.scope - libcontainer container ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92. Apr 13 20:10:18.886406 systemd[1]: Started cri-containerd-562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75.scope - libcontainer container 562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75. Apr 13 20:10:18.897942 containerd[1455]: time="2026-04-13T20:10:18.897648643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-pvwlp,Uid:9c60b64c-c287-49a6-9e8f-117d46909ac0,Namespace:kube-system,Attempt:1,} returns sandbox id \"5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c\"" Apr 13 20:10:18.905926 kubelet[2546]: E0413 20:10:18.902526 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 13 20:10:18.916918 containerd[1455]: time="2026-04-13T20:10:18.916197761Z" level=info msg="CreateContainer within sandbox \"5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 20:10:18.917221 systemd-networkd[1379]: cali337f2feebcb: Link UP Apr 13 20:10:18.919081 systemd-networkd[1379]: cali337f2feebcb: Gained carrier Apr 13 20:10:18.942037 containerd[1455]: time="2026-04-13T20:10:18.942001303Z" level=info msg="CreateContainer within sandbox \"5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8bfae4e09f49d59f651fbd5330d1d1a7061dc8297cc336e4980b23a910cab60e\"" Apr 13 20:10:18.943434 containerd[1455]: time="2026-04-13T20:10:18.943397853Z" level=info msg="StartContainer for 
\"8bfae4e09f49d59f651fbd5330d1d1a7061dc8297cc336e4980b23a910cab60e\"" Apr 13 20:10:18.970295 containerd[1455]: 2026-04-13 20:10:18.302 [ERROR][3976] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:10:18.970295 containerd[1455]: 2026-04-13 20:10:18.345 [INFO][3976] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--fxrn8-eth0 calico-apiserver-7b9d58c8c6- calico-system a7e119dc-3238-4cbb-af6c-ff92f19fcb51 944 0 2026-04-13 20:10:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b9d58c8c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-193-192 calico-apiserver-7b9d58c8c6-fxrn8 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali337f2feebcb [] [] }} ContainerID="4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d" Namespace="calico-system" Pod="calico-apiserver-7b9d58c8c6-fxrn8" WorkloadEndpoint="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--fxrn8-" Apr 13 20:10:18.970295 containerd[1455]: 2026-04-13 20:10:18.345 [INFO][3976] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d" Namespace="calico-system" Pod="calico-apiserver-7b9d58c8c6-fxrn8" WorkloadEndpoint="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--fxrn8-eth0" Apr 13 20:10:18.970295 containerd[1455]: 2026-04-13 20:10:18.505 [INFO][4072] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d" 
HandleID="k8s-pod-network.4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d" Workload="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--fxrn8-eth0" Apr 13 20:10:18.970295 containerd[1455]: 2026-04-13 20:10:18.526 [INFO][4072] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d" HandleID="k8s-pod-network.4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d" Workload="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--fxrn8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123e90), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-193-192", "pod":"calico-apiserver-7b9d58c8c6-fxrn8", "timestamp":"2026-04-13 20:10:18.505673683 +0000 UTC"}, Hostname:"172-239-193-192", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001898c0)} Apr 13 20:10:18.970295 containerd[1455]: 2026-04-13 20:10:18.526 [INFO][4072] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:18.970295 containerd[1455]: 2026-04-13 20:10:18.668 [INFO][4072] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:10:18.970295 containerd[1455]: 2026-04-13 20:10:18.668 [INFO][4072] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-192' Apr 13 20:10:18.970295 containerd[1455]: 2026-04-13 20:10:18.695 [INFO][4072] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d" host="172-239-193-192" Apr 13 20:10:18.970295 containerd[1455]: 2026-04-13 20:10:18.717 [INFO][4072] ipam/ipam.go 409: Looking up existing affinities for host host="172-239-193-192" Apr 13 20:10:18.970295 containerd[1455]: 2026-04-13 20:10:18.750 [INFO][4072] ipam/ipam.go 526: Trying affinity for 192.168.122.128/26 host="172-239-193-192" Apr 13 20:10:18.970295 containerd[1455]: 2026-04-13 20:10:18.770 [INFO][4072] ipam/ipam.go 160: Attempting to load block cidr=192.168.122.128/26 host="172-239-193-192" Apr 13 20:10:18.970295 containerd[1455]: 2026-04-13 20:10:18.804 [INFO][4072] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.122.128/26 host="172-239-193-192" Apr 13 20:10:18.970295 containerd[1455]: 2026-04-13 20:10:18.804 [INFO][4072] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.122.128/26 handle="k8s-pod-network.4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d" host="172-239-193-192" Apr 13 20:10:18.970295 containerd[1455]: 2026-04-13 20:10:18.821 [INFO][4072] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d Apr 13 20:10:18.970295 containerd[1455]: 2026-04-13 20:10:18.835 [INFO][4072] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.122.128/26 handle="k8s-pod-network.4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d" host="172-239-193-192" Apr 13 20:10:18.970295 containerd[1455]: 2026-04-13 20:10:18.868 [INFO][4072] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.122.133/26] block=192.168.122.128/26 
handle="k8s-pod-network.4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d" host="172-239-193-192" Apr 13 20:10:18.970295 containerd[1455]: 2026-04-13 20:10:18.868 [INFO][4072] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.122.133/26] handle="k8s-pod-network.4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d" host="172-239-193-192" Apr 13 20:10:18.970295 containerd[1455]: 2026-04-13 20:10:18.875 [INFO][4072] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:18.970295 containerd[1455]: 2026-04-13 20:10:18.879 [INFO][4072] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.122.133/26] IPv6=[] ContainerID="4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d" HandleID="k8s-pod-network.4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d" Workload="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--fxrn8-eth0" Apr 13 20:10:18.970830 containerd[1455]: 2026-04-13 20:10:18.907 [INFO][3976] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d" Namespace="calico-system" Pod="calico-apiserver-7b9d58c8c6-fxrn8" WorkloadEndpoint="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--fxrn8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--fxrn8-eth0", GenerateName:"calico-apiserver-7b9d58c8c6-", Namespace:"calico-system", SelfLink:"", UID:"a7e119dc-3238-4cbb-af6c-ff92f19fcb51", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b9d58c8c6", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"", Pod:"calico-apiserver-7b9d58c8c6-fxrn8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali337f2feebcb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:18.970830 containerd[1455]: 2026-04-13 20:10:18.907 [INFO][3976] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.133/32] ContainerID="4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d" Namespace="calico-system" Pod="calico-apiserver-7b9d58c8c6-fxrn8" WorkloadEndpoint="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--fxrn8-eth0" Apr 13 20:10:18.970830 containerd[1455]: 2026-04-13 20:10:18.907 [INFO][3976] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali337f2feebcb ContainerID="4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d" Namespace="calico-system" Pod="calico-apiserver-7b9d58c8c6-fxrn8" WorkloadEndpoint="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--fxrn8-eth0" Apr 13 20:10:18.970830 containerd[1455]: 2026-04-13 20:10:18.926 [INFO][3976] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d" Namespace="calico-system" Pod="calico-apiserver-7b9d58c8c6-fxrn8" WorkloadEndpoint="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--fxrn8-eth0" Apr 13 20:10:18.970830 containerd[1455]: 2026-04-13 20:10:18.928 [INFO][3976] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d" Namespace="calico-system" Pod="calico-apiserver-7b9d58c8c6-fxrn8" WorkloadEndpoint="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--fxrn8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--fxrn8-eth0", GenerateName:"calico-apiserver-7b9d58c8c6-", Namespace:"calico-system", SelfLink:"", UID:"a7e119dc-3238-4cbb-af6c-ff92f19fcb51", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b9d58c8c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d", Pod:"calico-apiserver-7b9d58c8c6-fxrn8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali337f2feebcb", MAC:"36:c3:63:58:d1:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:18.970830 containerd[1455]: 2026-04-13 20:10:18.964 [INFO][3976] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d" Namespace="calico-system" Pod="calico-apiserver-7b9d58c8c6-fxrn8" WorkloadEndpoint="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--fxrn8-eth0" Apr 13 20:10:18.991801 systemd-networkd[1379]: cali63b9667d366: Link UP Apr 13 20:10:18.995307 systemd-networkd[1379]: cali63b9667d366: Gained carrier Apr 13 20:10:19.029572 containerd[1455]: 2026-04-13 20:10:18.265 [ERROR][3957] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:10:19.029572 containerd[1455]: 2026-04-13 20:10:18.325 [INFO][3957] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--192-k8s-csi--node--driver--sgfs5-eth0 csi-node-driver- calico-system ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9 941 0 2026-04-13 20:10:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-239-193-192 csi-node-driver-sgfs5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali63b9667d366 [] [] }} ContainerID="817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380" Namespace="calico-system" Pod="csi-node-driver-sgfs5" WorkloadEndpoint="172--239--193--192-k8s-csi--node--driver--sgfs5-" Apr 13 20:10:19.029572 containerd[1455]: 2026-04-13 20:10:18.325 [INFO][3957] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380" Namespace="calico-system" Pod="csi-node-driver-sgfs5" WorkloadEndpoint="172--239--193--192-k8s-csi--node--driver--sgfs5-eth0" 
Apr 13 20:10:19.029572 containerd[1455]: 2026-04-13 20:10:18.567 [INFO][4061] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380" HandleID="k8s-pod-network.817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380" Workload="172--239--193--192-k8s-csi--node--driver--sgfs5-eth0" Apr 13 20:10:19.029572 containerd[1455]: 2026-04-13 20:10:18.604 [INFO][4061] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380" HandleID="k8s-pod-network.817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380" Workload="172--239--193--192-k8s-csi--node--driver--sgfs5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123770), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-193-192", "pod":"csi-node-driver-sgfs5", "timestamp":"2026-04-13 20:10:18.567301479 +0000 UTC"}, Hostname:"172-239-193-192", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004662c0)} Apr 13 20:10:19.029572 containerd[1455]: 2026-04-13 20:10:18.604 [INFO][4061] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:19.029572 containerd[1455]: 2026-04-13 20:10:18.879 [INFO][4061] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:10:19.029572 containerd[1455]: 2026-04-13 20:10:18.879 [INFO][4061] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-192' Apr 13 20:10:19.029572 containerd[1455]: 2026-04-13 20:10:18.886 [INFO][4061] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380" host="172-239-193-192" Apr 13 20:10:19.029572 containerd[1455]: 2026-04-13 20:10:18.897 [INFO][4061] ipam/ipam.go 409: Looking up existing affinities for host host="172-239-193-192" Apr 13 20:10:19.029572 containerd[1455]: 2026-04-13 20:10:18.915 [INFO][4061] ipam/ipam.go 526: Trying affinity for 192.168.122.128/26 host="172-239-193-192" Apr 13 20:10:19.029572 containerd[1455]: 2026-04-13 20:10:18.925 [INFO][4061] ipam/ipam.go 160: Attempting to load block cidr=192.168.122.128/26 host="172-239-193-192" Apr 13 20:10:19.029572 containerd[1455]: 2026-04-13 20:10:18.935 [INFO][4061] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.122.128/26 host="172-239-193-192" Apr 13 20:10:19.029572 containerd[1455]: 2026-04-13 20:10:18.935 [INFO][4061] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.122.128/26 handle="k8s-pod-network.817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380" host="172-239-193-192" Apr 13 20:10:19.029572 containerd[1455]: 2026-04-13 20:10:18.945 [INFO][4061] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380 Apr 13 20:10:19.029572 containerd[1455]: 2026-04-13 20:10:18.956 [INFO][4061] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.122.128/26 handle="k8s-pod-network.817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380" host="172-239-193-192" Apr 13 20:10:19.029572 containerd[1455]: 2026-04-13 20:10:18.975 [INFO][4061] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.122.134/26] block=192.168.122.128/26 
handle="k8s-pod-network.817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380" host="172-239-193-192" Apr 13 20:10:19.029572 containerd[1455]: 2026-04-13 20:10:18.975 [INFO][4061] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.122.134/26] handle="k8s-pod-network.817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380" host="172-239-193-192" Apr 13 20:10:19.029572 containerd[1455]: 2026-04-13 20:10:18.976 [INFO][4061] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:19.029572 containerd[1455]: 2026-04-13 20:10:18.976 [INFO][4061] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.122.134/26] IPv6=[] ContainerID="817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380" HandleID="k8s-pod-network.817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380" Workload="172--239--193--192-k8s-csi--node--driver--sgfs5-eth0" Apr 13 20:10:19.030134 containerd[1455]: 2026-04-13 20:10:18.984 [INFO][3957] cni-plugin/k8s.go 418: Populated endpoint ContainerID="817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380" Namespace="calico-system" Pod="csi-node-driver-sgfs5" WorkloadEndpoint="172--239--193--192-k8s-csi--node--driver--sgfs5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-csi--node--driver--sgfs5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"", Pod:"csi-node-driver-sgfs5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali63b9667d366", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:19.030134 containerd[1455]: 2026-04-13 20:10:18.985 [INFO][3957] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.134/32] ContainerID="817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380" Namespace="calico-system" Pod="csi-node-driver-sgfs5" WorkloadEndpoint="172--239--193--192-k8s-csi--node--driver--sgfs5-eth0" Apr 13 20:10:19.030134 containerd[1455]: 2026-04-13 20:10:18.985 [INFO][3957] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63b9667d366 ContainerID="817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380" Namespace="calico-system" Pod="csi-node-driver-sgfs5" WorkloadEndpoint="172--239--193--192-k8s-csi--node--driver--sgfs5-eth0" Apr 13 20:10:19.030134 containerd[1455]: 2026-04-13 20:10:19.002 [INFO][3957] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380" Namespace="calico-system" Pod="csi-node-driver-sgfs5" WorkloadEndpoint="172--239--193--192-k8s-csi--node--driver--sgfs5-eth0" Apr 13 20:10:19.030134 containerd[1455]: 2026-04-13 20:10:19.006 [INFO][3957] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380" Namespace="calico-system" Pod="csi-node-driver-sgfs5" WorkloadEndpoint="172--239--193--192-k8s-csi--node--driver--sgfs5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-csi--node--driver--sgfs5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380", Pod:"csi-node-driver-sgfs5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali63b9667d366", MAC:"2e:7b:34:8a:73:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:19.030134 containerd[1455]: 2026-04-13 20:10:19.022 [INFO][3957] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380" 
Namespace="calico-system" Pod="csi-node-driver-sgfs5" WorkloadEndpoint="172--239--193--192-k8s-csi--node--driver--sgfs5-eth0" Apr 13 20:10:19.037060 systemd[1]: Started cri-containerd-8bfae4e09f49d59f651fbd5330d1d1a7061dc8297cc336e4980b23a910cab60e.scope - libcontainer container 8bfae4e09f49d59f651fbd5330d1d1a7061dc8297cc336e4980b23a910cab60e. Apr 13 20:10:19.053192 containerd[1455]: time="2026-04-13T20:10:19.052836663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:19.054426 containerd[1455]: time="2026-04-13T20:10:19.052925163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:19.054426 containerd[1455]: time="2026-04-13T20:10:19.054404233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:19.054630 containerd[1455]: time="2026-04-13T20:10:19.054487653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:19.079802 systemd-networkd[1379]: cali450bce1eaec: Link UP Apr 13 20:10:19.081280 systemd-networkd[1379]: cali450bce1eaec: Gained carrier Apr 13 20:10:19.100690 containerd[1455]: time="2026-04-13T20:10:19.100548505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b94c657f6-hprcx,Uid:b3bdefa9-c8ec-44ce-adcf-e2699994c10f,Namespace:calico-system,Attempt:0,}" Apr 13 20:10:19.109265 systemd[1]: Started cri-containerd-4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d.scope - libcontainer container 4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d. Apr 13 20:10:19.130056 containerd[1455]: time="2026-04-13T20:10:19.128659088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:19.130056 containerd[1455]: time="2026-04-13T20:10:19.128704558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:19.130056 containerd[1455]: time="2026-04-13T20:10:19.128728578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:19.130056 containerd[1455]: time="2026-04-13T20:10:19.128819289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:19.133184 containerd[1455]: 2026-04-13 20:10:18.274 [ERROR][3980] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:10:19.133184 containerd[1455]: 2026-04-13 20:10:18.328 [INFO][3980] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--193--192-k8s-coredns--7d764666f9--rwvfl-eth0 coredns-7d764666f9- kube-system 84d67424-50c0-442a-9169-b582a1cca729 939 0 2026-04-13 20:09:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-193-192 coredns-7d764666f9-rwvfl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali450bce1eaec [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1" Namespace="kube-system" Pod="coredns-7d764666f9-rwvfl" WorkloadEndpoint="172--239--193--192-k8s-coredns--7d764666f9--rwvfl-" Apr 13 20:10:19.133184 containerd[1455]: 2026-04-13 20:10:18.328 [INFO][3980] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1" Namespace="kube-system" Pod="coredns-7d764666f9-rwvfl" WorkloadEndpoint="172--239--193--192-k8s-coredns--7d764666f9--rwvfl-eth0" Apr 13 20:10:19.133184 containerd[1455]: 2026-04-13 20:10:18.571 [INFO][4060] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1" HandleID="k8s-pod-network.f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1" Workload="172--239--193--192-k8s-coredns--7d764666f9--rwvfl-eth0" Apr 13 20:10:19.133184 containerd[1455]: 2026-04-13 20:10:18.605 [INFO][4060] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1" HandleID="k8s-pod-network.f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1" Workload="172--239--193--192-k8s-coredns--7d764666f9--rwvfl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f7e80), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-193-192", "pod":"coredns-7d764666f9-rwvfl", "timestamp":"2026-04-13 20:10:18.571134631 +0000 UTC"}, Hostname:"172-239-193-192", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000195600)} Apr 13 20:10:19.133184 containerd[1455]: 2026-04-13 20:10:18.605 [INFO][4060] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:19.133184 containerd[1455]: 2026-04-13 20:10:18.975 [INFO][4060] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:10:19.133184 containerd[1455]: 2026-04-13 20:10:18.975 [INFO][4060] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-192' Apr 13 20:10:19.133184 containerd[1455]: 2026-04-13 20:10:18.991 [INFO][4060] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1" host="172-239-193-192" Apr 13 20:10:19.133184 containerd[1455]: 2026-04-13 20:10:19.009 [INFO][4060] ipam/ipam.go 409: Looking up existing affinities for host host="172-239-193-192" Apr 13 20:10:19.133184 containerd[1455]: 2026-04-13 20:10:19.019 [INFO][4060] ipam/ipam.go 526: Trying affinity for 192.168.122.128/26 host="172-239-193-192" Apr 13 20:10:19.133184 containerd[1455]: 2026-04-13 20:10:19.024 [INFO][4060] ipam/ipam.go 160: Attempting to load block cidr=192.168.122.128/26 host="172-239-193-192" Apr 13 20:10:19.133184 containerd[1455]: 2026-04-13 20:10:19.032 [INFO][4060] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.122.128/26 host="172-239-193-192" Apr 13 20:10:19.133184 containerd[1455]: 2026-04-13 20:10:19.032 [INFO][4060] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.122.128/26 handle="k8s-pod-network.f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1" host="172-239-193-192" Apr 13 20:10:19.133184 containerd[1455]: 2026-04-13 20:10:19.035 [INFO][4060] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1 Apr 13 20:10:19.133184 containerd[1455]: 2026-04-13 20:10:19.044 [INFO][4060] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.122.128/26 handle="k8s-pod-network.f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1" host="172-239-193-192" Apr 13 20:10:19.133184 containerd[1455]: 2026-04-13 20:10:19.057 [INFO][4060] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.122.135/26] block=192.168.122.128/26 
handle="k8s-pod-network.f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1" host="172-239-193-192" Apr 13 20:10:19.133184 containerd[1455]: 2026-04-13 20:10:19.057 [INFO][4060] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.122.135/26] handle="k8s-pod-network.f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1" host="172-239-193-192" Apr 13 20:10:19.133184 containerd[1455]: 2026-04-13 20:10:19.057 [INFO][4060] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:19.133184 containerd[1455]: 2026-04-13 20:10:19.057 [INFO][4060] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.122.135/26] IPv6=[] ContainerID="f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1" HandleID="k8s-pod-network.f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1" Workload="172--239--193--192-k8s-coredns--7d764666f9--rwvfl-eth0" Apr 13 20:10:19.133701 containerd[1455]: 2026-04-13 20:10:19.062 [INFO][3980] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1" Namespace="kube-system" Pod="coredns-7d764666f9-rwvfl" WorkloadEndpoint="172--239--193--192-k8s-coredns--7d764666f9--rwvfl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-coredns--7d764666f9--rwvfl-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"84d67424-50c0-442a-9169-b582a1cca729", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"", Pod:"coredns-7d764666f9-rwvfl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali450bce1eaec", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:19.133701 containerd[1455]: 2026-04-13 20:10:19.062 [INFO][3980] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.135/32] ContainerID="f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1" Namespace="kube-system" Pod="coredns-7d764666f9-rwvfl" WorkloadEndpoint="172--239--193--192-k8s-coredns--7d764666f9--rwvfl-eth0" Apr 13 20:10:19.133701 containerd[1455]: 2026-04-13 20:10:19.062 [INFO][3980] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali450bce1eaec ContainerID="f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1" Namespace="kube-system" Pod="coredns-7d764666f9-rwvfl" 
WorkloadEndpoint="172--239--193--192-k8s-coredns--7d764666f9--rwvfl-eth0" Apr 13 20:10:19.133701 containerd[1455]: 2026-04-13 20:10:19.083 [INFO][3980] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1" Namespace="kube-system" Pod="coredns-7d764666f9-rwvfl" WorkloadEndpoint="172--239--193--192-k8s-coredns--7d764666f9--rwvfl-eth0" Apr 13 20:10:19.133701 containerd[1455]: 2026-04-13 20:10:19.089 [INFO][3980] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1" Namespace="kube-system" Pod="coredns-7d764666f9-rwvfl" WorkloadEndpoint="172--239--193--192-k8s-coredns--7d764666f9--rwvfl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-coredns--7d764666f9--rwvfl-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"84d67424-50c0-442a-9169-b582a1cca729", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1", Pod:"coredns-7d764666f9-rwvfl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali450bce1eaec", MAC:"42:5e:71:3d:cf:7f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:19.133701 containerd[1455]: 2026-04-13 20:10:19.123 [INFO][3980] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1" Namespace="kube-system" Pod="coredns-7d764666f9-rwvfl" WorkloadEndpoint="172--239--193--192-k8s-coredns--7d764666f9--rwvfl-eth0" Apr 13 20:10:19.183259 containerd[1455]: time="2026-04-13T20:10:19.183018384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-wnbgx,Uid:fae17250-1960-43f1-bcdd-744eb4b3f5bd,Namespace:calico-system,Attempt:1,} returns sandbox id \"77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a\"" Apr 13 20:10:19.194891 containerd[1455]: time="2026-04-13T20:10:19.194694629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 13 20:10:19.197020 systemd[1]: Started cri-containerd-817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380.scope - libcontainer container 
817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380. Apr 13 20:10:19.200174 containerd[1455]: time="2026-04-13T20:10:19.200149662Z" level=info msg="StartContainer for \"8bfae4e09f49d59f651fbd5330d1d1a7061dc8297cc336e4980b23a910cab60e\" returns successfully" Apr 13 20:10:19.248641 containerd[1455]: time="2026-04-13T20:10:19.248603894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b9d58c8c6-2hljs,Uid:bd97d4d8-3ec3-43d7-ba64-c8ae0cc8d162,Namespace:calico-system,Attempt:1,} returns sandbox id \"ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92\"" Apr 13 20:10:19.275840 containerd[1455]: time="2026-04-13T20:10:19.275451657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:19.275840 containerd[1455]: time="2026-04-13T20:10:19.275519447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:19.275840 containerd[1455]: time="2026-04-13T20:10:19.275532737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:19.275840 containerd[1455]: time="2026-04-13T20:10:19.275629517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:19.297484 systemd[1]: Started cri-containerd-f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1.scope - libcontainer container f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1. 
Apr 13 20:10:19.450490 containerd[1455]: time="2026-04-13T20:10:19.450420880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-rwvfl,Uid:84d67424-50c0-442a-9169-b582a1cca729,Namespace:kube-system,Attempt:1,} returns sandbox id \"f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1\"" Apr 13 20:10:19.457115 kubelet[2546]: E0413 20:10:19.456748 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 13 20:10:19.463116 containerd[1455]: time="2026-04-13T20:10:19.463089696Z" level=info msg="CreateContainer within sandbox \"f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 20:10:19.491908 containerd[1455]: time="2026-04-13T20:10:19.491172879Z" level=info msg="CreateContainer within sandbox \"f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d81f919566a4b744d8eedb281c02ad10ce37bbb179508e8064239813fcc67781\"" Apr 13 20:10:19.495473 containerd[1455]: time="2026-04-13T20:10:19.492834159Z" level=info msg="StartContainer for \"d81f919566a4b744d8eedb281c02ad10ce37bbb179508e8064239813fcc67781\"" Apr 13 20:10:19.495274 systemd-networkd[1379]: cali2ee45c32c7b: Link UP Apr 13 20:10:19.502944 systemd-networkd[1379]: cali2ee45c32c7b: Gained carrier Apr 13 20:10:19.544585 containerd[1455]: 2026-04-13 20:10:19.308 [ERROR][4442] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:10:19.544585 containerd[1455]: 2026-04-13 20:10:19.335 [INFO][4442] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{172--239--193--192-k8s-whisker--6b94c657f6--hprcx-eth0 whisker-6b94c657f6- calico-system b3bdefa9-c8ec-44ce-adcf-e2699994c10f 977 0 2026-04-13 20:10:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6b94c657f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-239-193-192 whisker-6b94c657f6-hprcx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2ee45c32c7b [] [] }} ContainerID="4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b" Namespace="calico-system" Pod="whisker-6b94c657f6-hprcx" WorkloadEndpoint="172--239--193--192-k8s-whisker--6b94c657f6--hprcx-" Apr 13 20:10:19.544585 containerd[1455]: 2026-04-13 20:10:19.335 [INFO][4442] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b" Namespace="calico-system" Pod="whisker-6b94c657f6-hprcx" WorkloadEndpoint="172--239--193--192-k8s-whisker--6b94c657f6--hprcx-eth0" Apr 13 20:10:19.544585 containerd[1455]: 2026-04-13 20:10:19.413 [INFO][4499] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b" HandleID="k8s-pod-network.4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b" Workload="172--239--193--192-k8s-whisker--6b94c657f6--hprcx-eth0" Apr 13 20:10:19.544585 containerd[1455]: 2026-04-13 20:10:19.429 [INFO][4499] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b" HandleID="k8s-pod-network.4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b" Workload="172--239--193--192-k8s-whisker--6b94c657f6--hprcx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fd1a0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-193-192", 
"pod":"whisker-6b94c657f6-hprcx", "timestamp":"2026-04-13 20:10:19.413659902 +0000 UTC"}, Hostname:"172-239-193-192", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003f91e0)} Apr 13 20:10:19.544585 containerd[1455]: 2026-04-13 20:10:19.429 [INFO][4499] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:19.544585 containerd[1455]: 2026-04-13 20:10:19.429 [INFO][4499] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:19.544585 containerd[1455]: 2026-04-13 20:10:19.429 [INFO][4499] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-193-192' Apr 13 20:10:19.544585 containerd[1455]: 2026-04-13 20:10:19.432 [INFO][4499] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b" host="172-239-193-192" Apr 13 20:10:19.544585 containerd[1455]: 2026-04-13 20:10:19.446 [INFO][4499] ipam/ipam.go 409: Looking up existing affinities for host host="172-239-193-192" Apr 13 20:10:19.544585 containerd[1455]: 2026-04-13 20:10:19.450 [INFO][4499] ipam/ipam.go 526: Trying affinity for 192.168.122.128/26 host="172-239-193-192" Apr 13 20:10:19.544585 containerd[1455]: 2026-04-13 20:10:19.452 [INFO][4499] ipam/ipam.go 160: Attempting to load block cidr=192.168.122.128/26 host="172-239-193-192" Apr 13 20:10:19.544585 containerd[1455]: 2026-04-13 20:10:19.455 [INFO][4499] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.122.128/26 host="172-239-193-192" Apr 13 20:10:19.544585 containerd[1455]: 2026-04-13 20:10:19.455 [INFO][4499] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.122.128/26 handle="k8s-pod-network.4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b" host="172-239-193-192" Apr 13 
20:10:19.544585 containerd[1455]: 2026-04-13 20:10:19.457 [INFO][4499] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b Apr 13 20:10:19.544585 containerd[1455]: 2026-04-13 20:10:19.462 [INFO][4499] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.122.128/26 handle="k8s-pod-network.4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b" host="172-239-193-192" Apr 13 20:10:19.544585 containerd[1455]: 2026-04-13 20:10:19.473 [INFO][4499] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.122.136/26] block=192.168.122.128/26 handle="k8s-pod-network.4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b" host="172-239-193-192" Apr 13 20:10:19.544585 containerd[1455]: 2026-04-13 20:10:19.473 [INFO][4499] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.122.136/26] handle="k8s-pod-network.4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b" host="172-239-193-192" Apr 13 20:10:19.544585 containerd[1455]: 2026-04-13 20:10:19.473 [INFO][4499] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 20:10:19.544585 containerd[1455]: 2026-04-13 20:10:19.473 [INFO][4499] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.122.136/26] IPv6=[] ContainerID="4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b" HandleID="k8s-pod-network.4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b" Workload="172--239--193--192-k8s-whisker--6b94c657f6--hprcx-eth0" Apr 13 20:10:19.545519 containerd[1455]: 2026-04-13 20:10:19.487 [INFO][4442] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b" Namespace="calico-system" Pod="whisker-6b94c657f6-hprcx" WorkloadEndpoint="172--239--193--192-k8s-whisker--6b94c657f6--hprcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-whisker--6b94c657f6--hprcx-eth0", GenerateName:"whisker-6b94c657f6-", Namespace:"calico-system", SelfLink:"", UID:"b3bdefa9-c8ec-44ce-adcf-e2699994c10f", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b94c657f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"", Pod:"whisker-6b94c657f6-hprcx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.122.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, 
InterfaceName:"cali2ee45c32c7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:19.545519 containerd[1455]: 2026-04-13 20:10:19.487 [INFO][4442] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.136/32] ContainerID="4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b" Namespace="calico-system" Pod="whisker-6b94c657f6-hprcx" WorkloadEndpoint="172--239--193--192-k8s-whisker--6b94c657f6--hprcx-eth0" Apr 13 20:10:19.545519 containerd[1455]: 2026-04-13 20:10:19.487 [INFO][4442] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ee45c32c7b ContainerID="4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b" Namespace="calico-system" Pod="whisker-6b94c657f6-hprcx" WorkloadEndpoint="172--239--193--192-k8s-whisker--6b94c657f6--hprcx-eth0" Apr 13 20:10:19.545519 containerd[1455]: 2026-04-13 20:10:19.504 [INFO][4442] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b" Namespace="calico-system" Pod="whisker-6b94c657f6-hprcx" WorkloadEndpoint="172--239--193--192-k8s-whisker--6b94c657f6--hprcx-eth0" Apr 13 20:10:19.545519 containerd[1455]: 2026-04-13 20:10:19.509 [INFO][4442] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b" Namespace="calico-system" Pod="whisker-6b94c657f6-hprcx" WorkloadEndpoint="172--239--193--192-k8s-whisker--6b94c657f6--hprcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-whisker--6b94c657f6--hprcx-eth0", GenerateName:"whisker-6b94c657f6-", Namespace:"calico-system", SelfLink:"", UID:"b3bdefa9-c8ec-44ce-adcf-e2699994c10f", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, 
time.April, 13, 20, 10, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b94c657f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b", Pod:"whisker-6b94c657f6-hprcx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.122.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2ee45c32c7b", MAC:"82:5c:dc:02:4f:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:19.545519 containerd[1455]: 2026-04-13 20:10:19.538 [INFO][4442] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b" Namespace="calico-system" Pod="whisker-6b94c657f6-hprcx" WorkloadEndpoint="172--239--193--192-k8s-whisker--6b94c657f6--hprcx-eth0" Apr 13 20:10:19.570834 containerd[1455]: time="2026-04-13T20:10:19.569626765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47c8f584-8rsdp,Uid:1352fc40-7380-4f40-97a5-2db21f2695cc,Namespace:calico-system,Attempt:1,} returns sandbox id \"562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75\"" Apr 13 20:10:19.580033 systemd[1]: Started cri-containerd-d81f919566a4b744d8eedb281c02ad10ce37bbb179508e8064239813fcc67781.scope - libcontainer container 
d81f919566a4b744d8eedb281c02ad10ce37bbb179508e8064239813fcc67781. Apr 13 20:10:19.596154 containerd[1455]: time="2026-04-13T20:10:19.595067228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:19.596154 containerd[1455]: time="2026-04-13T20:10:19.595239938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:19.596154 containerd[1455]: time="2026-04-13T20:10:19.595270148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:19.596478 containerd[1455]: time="2026-04-13T20:10:19.596337508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:19.654019 systemd[1]: Started cri-containerd-4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b.scope - libcontainer container 4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b. 
Apr 13 20:10:19.669264 containerd[1455]: time="2026-04-13T20:10:19.669227583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sgfs5,Uid:ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9,Namespace:calico-system,Attempt:1,} returns sandbox id \"817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380\""
Apr 13 20:10:19.672894 kubelet[2546]: E0413 20:10:19.672335 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:10:19.679730 containerd[1455]: time="2026-04-13T20:10:19.679593517Z" level=info msg="StartContainer for \"d81f919566a4b744d8eedb281c02ad10ce37bbb179508e8064239813fcc67781\" returns successfully"
Apr 13 20:10:19.724970 kubelet[2546]: I0413 20:10:19.723400 2546 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-pvwlp" podStartSLOduration=25.723384388 podStartE2EDuration="25.723384388s" podCreationTimestamp="2026-04-13 20:09:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:10:19.693136004 +0000 UTC m=+31.373818054" watchObservedRunningTime="2026-04-13 20:10:19.723384388 +0000 UTC m=+31.404066438"
Apr 13 20:10:19.790076 containerd[1455]: time="2026-04-13T20:10:19.790038460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b9d58c8c6-fxrn8,Uid:a7e119dc-3238-4cbb-af6c-ff92f19fcb51,Namespace:calico-system,Attempt:1,} returns sandbox id \"4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d\""
Apr 13 20:10:19.837440 containerd[1455]: time="2026-04-13T20:10:19.837389912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b94c657f6-hprcx,Uid:b3bdefa9-c8ec-44ce-adcf-e2699994c10f,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b\""
Apr 13 20:10:20.115099 kernel: calico-node[4467]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Apr 13 20:10:20.135115 systemd-networkd[1379]: cali337f2feebcb: Gained IPv6LL
Apr 13 20:10:20.262988 systemd-networkd[1379]: cali9d6eec9c2ac: Gained IPv6LL
Apr 13 20:10:20.265096 systemd-networkd[1379]: calif47d518ecdd: Gained IPv6LL
Apr 13 20:10:20.327485 systemd-networkd[1379]: cali450bce1eaec: Gained IPv6LL
Apr 13 20:10:20.392051 systemd-networkd[1379]: cali5393636397f: Gained IPv6LL
Apr 13 20:10:20.394155 systemd-networkd[1379]: calie1e3360de88: Gained IPv6LL
Apr 13 20:10:20.436531 kubelet[2546]: I0413 20:10:20.436111 2546 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="878df965-5680-45b6-bea3-fb9201081031" path="/var/lib/kubelet/pods/878df965-5680-45b6-bea3-fb9201081031/volumes"
Apr 13 20:10:20.687271 kubelet[2546]: E0413 20:10:20.686546 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:10:20.687271 kubelet[2546]: E0413 20:10:20.686777 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:10:20.716623 kubelet[2546]: I0413 20:10:20.716374 2546 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-rwvfl" podStartSLOduration=26.716360269 podStartE2EDuration="26.716360269s" podCreationTimestamp="2026-04-13 20:09:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:10:20.715560029 +0000 UTC m=+32.396242109" watchObservedRunningTime="2026-04-13 20:10:20.716360269 +0000 UTC m=+32.397042349"
Apr 13 20:10:20.788421 systemd-networkd[1379]: vxlan.calico: Link UP
Apr 13 20:10:20.788431 systemd-networkd[1379]: vxlan.calico: Gained carrier
Apr 13 20:10:20.903044 systemd-networkd[1379]: cali63b9667d366: Gained IPv6LL
Apr 13 20:10:21.403469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3866820822.mount: Deactivated successfully.
Apr 13 20:10:21.543765 systemd-networkd[1379]: cali2ee45c32c7b: Gained IPv6LL
Apr 13 20:10:21.690567 kubelet[2546]: E0413 20:10:21.689715 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:10:21.692154 kubelet[2546]: E0413 20:10:21.691611 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:10:21.787914 containerd[1455]: time="2026-04-13T20:10:21.787698473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:10:21.789047 containerd[1455]: time="2026-04-13T20:10:21.788473213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386"
Apr 13 20:10:21.789238 containerd[1455]: time="2026-04-13T20:10:21.789170693Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:10:21.791316 containerd[1455]: time="2026-04-13T20:10:21.791293534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:10:21.792216 containerd[1455]: time="2026-04-13T20:10:21.792080165Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.595118524s"
Apr 13 20:10:21.792216 containerd[1455]: time="2026-04-13T20:10:21.792137985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Apr 13 20:10:21.793913 containerd[1455]: time="2026-04-13T20:10:21.793886296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Apr 13 20:10:21.796971 containerd[1455]: time="2026-04-13T20:10:21.796942727Z" level=info msg="CreateContainer within sandbox \"77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Apr 13 20:10:21.819946 containerd[1455]: time="2026-04-13T20:10:21.819424240Z" level=info msg="CreateContainer within sandbox \"77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"add094a7b15b0624f8aaef11a1ca7d2555856ea7aa9a2751164475a5413e52ec\""
Apr 13 20:10:21.820729 containerd[1455]: time="2026-04-13T20:10:21.820681540Z" level=info msg="StartContainer for \"add094a7b15b0624f8aaef11a1ca7d2555856ea7aa9a2751164475a5413e52ec\""
Apr 13 20:10:21.871196 systemd[1]: Started cri-containerd-add094a7b15b0624f8aaef11a1ca7d2555856ea7aa9a2751164475a5413e52ec.scope - libcontainer container add094a7b15b0624f8aaef11a1ca7d2555856ea7aa9a2751164475a5413e52ec.
Apr 13 20:10:21.925405 containerd[1455]: time="2026-04-13T20:10:21.925372556Z" level=info msg="StartContainer for \"add094a7b15b0624f8aaef11a1ca7d2555856ea7aa9a2751164475a5413e52ec\" returns successfully"
Apr 13 20:10:21.947862 systemd[1]: run-containerd-runc-k8s.io-add094a7b15b0624f8aaef11a1ca7d2555856ea7aa9a2751164475a5413e52ec-runc.uuGsBQ.mount: Deactivated successfully.
Apr 13 20:10:22.697712 kubelet[2546]: E0413 20:10:22.697667 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:10:22.761783 systemd-networkd[1379]: vxlan.calico: Gained IPv6LL
Apr 13 20:10:23.502965 containerd[1455]: time="2026-04-13T20:10:23.502748046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:10:23.504405 containerd[1455]: time="2026-04-13T20:10:23.503966448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780"
Apr 13 20:10:23.506031 containerd[1455]: time="2026-04-13T20:10:23.505085988Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:10:23.507818 containerd[1455]: time="2026-04-13T20:10:23.507791080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:10:23.508650 containerd[1455]: time="2026-04-13T20:10:23.508625360Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.714672914s"
Apr 13 20:10:23.508818 containerd[1455]: time="2026-04-13T20:10:23.508766120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Apr 13 20:10:23.511152 containerd[1455]: time="2026-04-13T20:10:23.511135652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\""
Apr 13 20:10:23.512582 containerd[1455]: time="2026-04-13T20:10:23.512561012Z" level=info msg="CreateContainer within sandbox \"ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 13 20:10:23.524920 containerd[1455]: time="2026-04-13T20:10:23.524895800Z" level=info msg="CreateContainer within sandbox \"ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9f8633f1d3a8e366e993899f4fc7bbfa01d6f5947f99835f7f2bbc95b8eca059\""
Apr 13 20:10:23.526730 containerd[1455]: time="2026-04-13T20:10:23.526710190Z" level=info msg="StartContainer for \"9f8633f1d3a8e366e993899f4fc7bbfa01d6f5947f99835f7f2bbc95b8eca059\""
Apr 13 20:10:23.579032 systemd[1]: Started cri-containerd-9f8633f1d3a8e366e993899f4fc7bbfa01d6f5947f99835f7f2bbc95b8eca059.scope - libcontainer container 9f8633f1d3a8e366e993899f4fc7bbfa01d6f5947f99835f7f2bbc95b8eca059.
Apr 13 20:10:23.626453 containerd[1455]: time="2026-04-13T20:10:23.626393049Z" level=info msg="StartContainer for \"9f8633f1d3a8e366e993899f4fc7bbfa01d6f5947f99835f7f2bbc95b8eca059\" returns successfully"
Apr 13 20:10:23.703897 kubelet[2546]: E0413 20:10:23.702960 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:10:23.706389 kubelet[2546]: I0413 20:10:23.706095 2546 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 13 20:10:23.719952 kubelet[2546]: I0413 20:10:23.719564 2546 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-wnbgx" podStartSLOduration=17.119687996 podStartE2EDuration="19.719553334s" podCreationTimestamp="2026-04-13 20:10:04 +0000 UTC" firstStartedPulling="2026-04-13 20:10:19.193775858 +0000 UTC m=+30.874457908" lastFinishedPulling="2026-04-13 20:10:21.793641196 +0000 UTC m=+33.474323246" observedRunningTime="2026-04-13 20:10:22.710253797 +0000 UTC m=+34.390935847" watchObservedRunningTime="2026-04-13 20:10:23.719553334 +0000 UTC m=+35.400235384"
Apr 13 20:10:24.704532 kubelet[2546]: I0413 20:10:24.704504 2546 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 13 20:10:25.726897 containerd[1455]: time="2026-04-13T20:10:25.726827962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:10:25.727823 containerd[1455]: time="2026-04-13T20:10:25.727708502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Apr 13 20:10:25.728471 containerd[1455]: time="2026-04-13T20:10:25.728433303Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:10:25.730778 containerd[1455]: time="2026-04-13T20:10:25.730756804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:10:25.731562 containerd[1455]: time="2026-04-13T20:10:25.731312905Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.219869753s"
Apr 13 20:10:25.731562 containerd[1455]: time="2026-04-13T20:10:25.731339255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Apr 13 20:10:25.732297 containerd[1455]: time="2026-04-13T20:10:25.732271305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\""
Apr 13 20:10:25.747738 containerd[1455]: time="2026-04-13T20:10:25.747697415Z" level=info msg="CreateContainer within sandbox \"562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Apr 13 20:10:25.755816 containerd[1455]: time="2026-04-13T20:10:25.755771670Z" level=info msg="CreateContainer within sandbox \"562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"51cfe7a4faeda1cc6c5aaa1e3935edfee1bf659be946c742f29197f8d90c5a66\""
Apr 13 20:10:25.758834 containerd[1455]: time="2026-04-13T20:10:25.757541181Z" level=info msg="StartContainer for \"51cfe7a4faeda1cc6c5aaa1e3935edfee1bf659be946c742f29197f8d90c5a66\""
Apr 13 20:10:25.797002 systemd[1]: Started cri-containerd-51cfe7a4faeda1cc6c5aaa1e3935edfee1bf659be946c742f29197f8d90c5a66.scope - libcontainer container 51cfe7a4faeda1cc6c5aaa1e3935edfee1bf659be946c742f29197f8d90c5a66.
Apr 13 20:10:25.835046 containerd[1455]: time="2026-04-13T20:10:25.834336761Z" level=info msg="StartContainer for \"51cfe7a4faeda1cc6c5aaa1e3935edfee1bf659be946c742f29197f8d90c5a66\" returns successfully"
Apr 13 20:10:26.483280 containerd[1455]: time="2026-04-13T20:10:26.482439626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:10:26.483280 containerd[1455]: time="2026-04-13T20:10:26.483237257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502"
Apr 13 20:10:26.483902 containerd[1455]: time="2026-04-13T20:10:26.483851408Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:10:26.485896 containerd[1455]: time="2026-04-13T20:10:26.485603449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:10:26.486830 containerd[1455]: time="2026-04-13T20:10:26.486677380Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 754.377434ms"
Apr 13 20:10:26.486830 containerd[1455]: time="2026-04-13T20:10:26.486707530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\""
Apr 13 20:10:26.494598 containerd[1455]: time="2026-04-13T20:10:26.494569165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Apr 13 20:10:26.496497 containerd[1455]: time="2026-04-13T20:10:26.496473717Z" level=info msg="CreateContainer within sandbox \"817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Apr 13 20:10:26.514808 containerd[1455]: time="2026-04-13T20:10:26.514772788Z" level=info msg="CreateContainer within sandbox \"817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1ab7fda5273ca8cdb61e053fe4ceb81d1f45ed134a63b99efa8b8ea215d11ae8\""
Apr 13 20:10:26.515559 containerd[1455]: time="2026-04-13T20:10:26.515479099Z" level=info msg="StartContainer for \"1ab7fda5273ca8cdb61e053fe4ceb81d1f45ed134a63b99efa8b8ea215d11ae8\""
Apr 13 20:10:26.545066 systemd[1]: Started cri-containerd-1ab7fda5273ca8cdb61e053fe4ceb81d1f45ed134a63b99efa8b8ea215d11ae8.scope - libcontainer container 1ab7fda5273ca8cdb61e053fe4ceb81d1f45ed134a63b99efa8b8ea215d11ae8.
Apr 13 20:10:26.576378 containerd[1455]: time="2026-04-13T20:10:26.576337989Z" level=info msg="StartContainer for \"1ab7fda5273ca8cdb61e053fe4ceb81d1f45ed134a63b99efa8b8ea215d11ae8\" returns successfully"
Apr 13 20:10:26.703695 containerd[1455]: time="2026-04-13T20:10:26.703648764Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:10:26.704366 containerd[1455]: time="2026-04-13T20:10:26.704319344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Apr 13 20:10:26.706233 containerd[1455]: time="2026-04-13T20:10:26.706206985Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 211.6066ms"
Apr 13 20:10:26.706373 containerd[1455]: time="2026-04-13T20:10:26.706235365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Apr 13 20:10:26.708914 containerd[1455]: time="2026-04-13T20:10:26.707659097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\""
Apr 13 20:10:26.712051 containerd[1455]: time="2026-04-13T20:10:26.711608269Z" level=info msg="CreateContainer within sandbox \"4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 13 20:10:26.729426 kubelet[2546]: I0413 20:10:26.728429 2546 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-7b9d58c8c6-2hljs" podStartSLOduration=19.473969937 podStartE2EDuration="23.72841306s" podCreationTimestamp="2026-04-13 20:10:03 +0000 UTC" firstStartedPulling="2026-04-13 20:10:19.255528388 +0000 UTC m=+30.936210438" lastFinishedPulling="2026-04-13 20:10:23.509971511 +0000 UTC m=+35.190653561" observedRunningTime="2026-04-13 20:10:23.720500185 +0000 UTC m=+35.401182235" watchObservedRunningTime="2026-04-13 20:10:26.72841306 +0000 UTC m=+38.409095140"
Apr 13 20:10:26.730907 kubelet[2546]: I0413 20:10:26.730384 2546 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7c47c8f584-8rsdp" podStartSLOduration=16.569995762 podStartE2EDuration="22.730348901s" podCreationTimestamp="2026-04-13 20:10:04 +0000 UTC" firstStartedPulling="2026-04-13 20:10:19.571812646 +0000 UTC m=+31.252494706" lastFinishedPulling="2026-04-13 20:10:25.732165785 +0000 UTC m=+37.412847845" observedRunningTime="2026-04-13 20:10:26.7282982 +0000 UTC m=+38.408980250" watchObservedRunningTime="2026-04-13 20:10:26.730348901 +0000 UTC m=+38.411030951"
Apr 13 20:10:26.738597 containerd[1455]: time="2026-04-13T20:10:26.738477636Z" level=info msg="CreateContainer within sandbox \"4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a505e2213067aa0b551f66d350cd205461a87fdce35f0868dce02bd41ae0f369\""
Apr 13 20:10:26.744898 containerd[1455]: time="2026-04-13T20:10:26.742124119Z" level=info msg="StartContainer for \"a505e2213067aa0b551f66d350cd205461a87fdce35f0868dce02bd41ae0f369\""
Apr 13 20:10:26.787004 systemd[1]: Started cri-containerd-a505e2213067aa0b551f66d350cd205461a87fdce35f0868dce02bd41ae0f369.scope - libcontainer container a505e2213067aa0b551f66d350cd205461a87fdce35f0868dce02bd41ae0f369.
Apr 13 20:10:26.831065 containerd[1455]: time="2026-04-13T20:10:26.830973248Z" level=info msg="StartContainer for \"a505e2213067aa0b551f66d350cd205461a87fdce35f0868dce02bd41ae0f369\" returns successfully"
Apr 13 20:10:27.615688 containerd[1455]: time="2026-04-13T20:10:27.615603191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:10:27.616582 containerd[1455]: time="2026-04-13T20:10:27.616507961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889"
Apr 13 20:10:27.617441 containerd[1455]: time="2026-04-13T20:10:27.617071882Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:10:27.619368 containerd[1455]: time="2026-04-13T20:10:27.619329064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:10:27.620284 containerd[1455]: time="2026-04-13T20:10:27.620242304Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 912.553867ms"
Apr 13 20:10:27.620344 containerd[1455]: time="2026-04-13T20:10:27.620287864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\""
Apr 13 20:10:27.622161 containerd[1455]: time="2026-04-13T20:10:27.621671945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Apr 13 20:10:27.626572 containerd[1455]: time="2026-04-13T20:10:27.626539978Z" level=info msg="CreateContainer within sandbox \"4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Apr 13 20:10:27.646905 containerd[1455]: time="2026-04-13T20:10:27.646753143Z" level=info msg="CreateContainer within sandbox \"4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"0d1b3f122fa1bbdb59af42bd86d696e9e4723a47b6856302e715b9aa00a569de\""
Apr 13 20:10:27.647772 containerd[1455]: time="2026-04-13T20:10:27.647723874Z" level=info msg="StartContainer for \"0d1b3f122fa1bbdb59af42bd86d696e9e4723a47b6856302e715b9aa00a569de\""
Apr 13 20:10:27.685053 systemd[1]: Started cri-containerd-0d1b3f122fa1bbdb59af42bd86d696e9e4723a47b6856302e715b9aa00a569de.scope - libcontainer container 0d1b3f122fa1bbdb59af42bd86d696e9e4723a47b6856302e715b9aa00a569de.
Apr 13 20:10:27.734339 containerd[1455]: time="2026-04-13T20:10:27.734293452Z" level=info msg="StartContainer for \"0d1b3f122fa1bbdb59af42bd86d696e9e4723a47b6856302e715b9aa00a569de\" returns successfully"
Apr 13 20:10:27.753649 kubelet[2546]: I0413 20:10:27.753581 2546 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 13 20:10:28.487811 containerd[1455]: time="2026-04-13T20:10:28.487743047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:10:28.488845 containerd[1455]: time="2026-04-13T20:10:28.488662118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Apr 13 20:10:28.489460 containerd[1455]: time="2026-04-13T20:10:28.489407648Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:10:28.492680 containerd[1455]: time="2026-04-13T20:10:28.491491830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:10:28.492680 containerd[1455]: time="2026-04-13T20:10:28.492510201Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 870.802116ms"
Apr 13 20:10:28.492680 containerd[1455]: time="2026-04-13T20:10:28.492542760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Apr 13 20:10:28.494772 containerd[1455]: time="2026-04-13T20:10:28.494747072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Apr 13 20:10:28.497157 containerd[1455]: time="2026-04-13T20:10:28.497132784Z" level=info msg="CreateContainer within sandbox \"817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Apr 13 20:10:28.512094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3967592105.mount: Deactivated successfully.
Apr 13 20:10:28.515269 containerd[1455]: time="2026-04-13T20:10:28.514763196Z" level=info msg="CreateContainer within sandbox \"817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a39a0827e9aa4a64a5d6e2c91de5d7ab070c5dbb8dfe5b4ce7f939a3cec800da\""
Apr 13 20:10:28.515435 containerd[1455]: time="2026-04-13T20:10:28.515413837Z" level=info msg="StartContainer for \"a39a0827e9aa4a64a5d6e2c91de5d7ab070c5dbb8dfe5b4ce7f939a3cec800da\""
Apr 13 20:10:28.560014 systemd[1]: Started cri-containerd-a39a0827e9aa4a64a5d6e2c91de5d7ab070c5dbb8dfe5b4ce7f939a3cec800da.scope - libcontainer container a39a0827e9aa4a64a5d6e2c91de5d7ab070c5dbb8dfe5b4ce7f939a3cec800da.
Apr 13 20:10:28.588706 containerd[1455]: time="2026-04-13T20:10:28.588665208Z" level=info msg="StartContainer for \"a39a0827e9aa4a64a5d6e2c91de5d7ab070c5dbb8dfe5b4ce7f939a3cec800da\" returns successfully"
Apr 13 20:10:28.740705 systemd[1]: run-containerd-runc-k8s.io-a39a0827e9aa4a64a5d6e2c91de5d7ab070c5dbb8dfe5b4ce7f939a3cec800da-runc.avQVOt.mount: Deactivated successfully.
Apr 13 20:10:28.757261 kubelet[2546]: I0413 20:10:28.757228 2546 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 13 20:10:28.768726 kubelet[2546]: I0413 20:10:28.768078 2546 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-7b9d58c8c6-fxrn8" podStartSLOduration=18.855950020999998 podStartE2EDuration="25.768066165s" podCreationTimestamp="2026-04-13 20:10:03 +0000 UTC" firstStartedPulling="2026-04-13 20:10:19.795319632 +0000 UTC m=+31.476001682" lastFinishedPulling="2026-04-13 20:10:26.707435776 +0000 UTC m=+38.388117826" observedRunningTime="2026-04-13 20:10:27.765843474 +0000 UTC m=+39.446525524" watchObservedRunningTime="2026-04-13 20:10:28.768066165 +0000 UTC m=+40.448748215"
Apr 13 20:10:29.459763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1481253768.mount: Deactivated successfully.
Apr 13 20:10:29.472716 containerd[1455]: time="2026-04-13T20:10:29.472658478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:10:29.473695 containerd[1455]: time="2026-04-13T20:10:29.473655179Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475"
Apr 13 20:10:29.474128 containerd[1455]: time="2026-04-13T20:10:29.474089189Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:10:29.475904 containerd[1455]: time="2026-04-13T20:10:29.475694160Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:10:29.476541 containerd[1455]: time="2026-04-13T20:10:29.476477921Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 981.575059ms"
Apr 13 20:10:29.476541 containerd[1455]: time="2026-04-13T20:10:29.476503911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\""
Apr 13 20:10:29.481201 containerd[1455]: time="2026-04-13T20:10:29.481109964Z" level=info msg="CreateContainer within sandbox \"4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Apr 13 20:10:29.496778 containerd[1455]: time="2026-04-13T20:10:29.496750035Z" level=info msg="CreateContainer within sandbox \"4ff1191fe6aef5da4502e245bcd1f09e327aa8d2ce7cdbd64f77a0f0331b4d2b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e0a6295bb232fdce119c950b4bfe16362f8aaa9b2abfe485a60798503a982c1a\""
Apr 13 20:10:29.498442 containerd[1455]: time="2026-04-13T20:10:29.498049236Z" level=info msg="StartContainer for \"e0a6295bb232fdce119c950b4bfe16362f8aaa9b2abfe485a60798503a982c1a\""
Apr 13 20:10:29.534250 kubelet[2546]: I0413 20:10:29.534225 2546 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Apr 13 20:10:29.534250 kubelet[2546]: I0413 20:10:29.534252 2546 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Apr 13 20:10:29.536187 systemd[1]: Started cri-containerd-e0a6295bb232fdce119c950b4bfe16362f8aaa9b2abfe485a60798503a982c1a.scope - libcontainer container e0a6295bb232fdce119c950b4bfe16362f8aaa9b2abfe485a60798503a982c1a.
Apr 13 20:10:29.585253 containerd[1455]: time="2026-04-13T20:10:29.585220700Z" level=info msg="StartContainer for \"e0a6295bb232fdce119c950b4bfe16362f8aaa9b2abfe485a60798503a982c1a\" returns successfully"
Apr 13 20:10:29.770920 kubelet[2546]: I0413 20:10:29.770387 2546 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-sgfs5" podStartSLOduration=16.950167437 podStartE2EDuration="25.770376653s" podCreationTimestamp="2026-04-13 20:10:04 +0000 UTC" firstStartedPulling="2026-04-13 20:10:19.673249475 +0000 UTC m=+31.353931525" lastFinishedPulling="2026-04-13 20:10:28.493458691 +0000 UTC m=+40.174140741" observedRunningTime="2026-04-13 20:10:28.769058315 +0000 UTC m=+40.449740365" watchObservedRunningTime="2026-04-13 20:10:29.770376653 +0000 UTC m=+41.451058703"
Apr 13 20:10:44.531387 kubelet[2546]: I0413 20:10:44.531328 2546 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 13 20:10:44.605016 kubelet[2546]: I0413 20:10:44.603246 2546 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-6b94c657f6-hprcx" podStartSLOduration=16.965769795 podStartE2EDuration="26.603234204s" podCreationTimestamp="2026-04-13 20:10:18 +0000 UTC" firstStartedPulling="2026-04-13 20:10:19.839847433 +0000 UTC m=+31.520529493" lastFinishedPulling="2026-04-13 20:10:29.477311852 +0000 UTC m=+41.157993902" observedRunningTime="2026-04-13 20:10:29.770751773 +0000 UTC m=+41.451433823" watchObservedRunningTime="2026-04-13 20:10:44.603234204 +0000 UTC m=+56.283916254"
Apr 13 20:10:46.240483 kubelet[2546]: I0413 20:10:46.239907 2546 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 13 20:10:48.438182 containerd[1455]: time="2026-04-13T20:10:48.438057503Z" level=info msg="StopPodSandbox for \"36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb\""
Apr 13 20:10:48.544571 containerd[1455]: 2026-04-13 20:10:48.495 [WARNING][5333] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-coredns--7d764666f9--pvwlp-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"9c60b64c-c287-49a6-9e8f-117d46909ac0", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c", Pod:"coredns-7d764666f9-pvwlp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5393636397f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 13 20:10:48.544571 containerd[1455]: 2026-04-13 20:10:48.495 [INFO][5333] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb"
Apr 13 20:10:48.544571 containerd[1455]: 2026-04-13 20:10:48.495 [INFO][5333] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" iface="eth0" netns=""
Apr 13 20:10:48.544571 containerd[1455]: 2026-04-13 20:10:48.495 [INFO][5333] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb"
Apr 13 20:10:48.544571 containerd[1455]: 2026-04-13 20:10:48.495 [INFO][5333] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb"
Apr 13 20:10:48.544571 containerd[1455]: 2026-04-13 20:10:48.530 [INFO][5340] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" HandleID="k8s-pod-network.36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" Workload="172--239--193--192-k8s-coredns--7d764666f9--pvwlp-eth0"
Apr 13 20:10:48.544571 containerd[1455]: 2026-04-13 20:10:48.530 [INFO][5340] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:10:48.544571 containerd[1455]: 2026-04-13 20:10:48.530 [INFO][5340] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:10:48.544571 containerd[1455]: 2026-04-13 20:10:48.536 [WARNING][5340] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" HandleID="k8s-pod-network.36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" Workload="172--239--193--192-k8s-coredns--7d764666f9--pvwlp-eth0" Apr 13 20:10:48.544571 containerd[1455]: 2026-04-13 20:10:48.536 [INFO][5340] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" HandleID="k8s-pod-network.36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" Workload="172--239--193--192-k8s-coredns--7d764666f9--pvwlp-eth0" Apr 13 20:10:48.544571 containerd[1455]: 2026-04-13 20:10:48.538 [INFO][5340] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:48.544571 containerd[1455]: 2026-04-13 20:10:48.541 [INFO][5333] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" Apr 13 20:10:48.545525 containerd[1455]: time="2026-04-13T20:10:48.544695771Z" level=info msg="TearDown network for sandbox \"36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb\" successfully" Apr 13 20:10:48.545525 containerd[1455]: time="2026-04-13T20:10:48.544720251Z" level=info msg="StopPodSandbox for \"36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb\" returns successfully" Apr 13 20:10:48.545525 containerd[1455]: time="2026-04-13T20:10:48.545336131Z" level=info msg="RemovePodSandbox for \"36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb\"" Apr 13 20:10:48.545525 containerd[1455]: time="2026-04-13T20:10:48.545371521Z" level=info msg="Forcibly stopping sandbox \"36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb\"" Apr 13 20:10:48.618417 containerd[1455]: 2026-04-13 20:10:48.582 [WARNING][5354] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-coredns--7d764666f9--pvwlp-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"9c60b64c-c287-49a6-9e8f-117d46909ac0", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"5bc3f721ae8a242e8173f4d47bb10d28154d35baa5198629a586d64e2fc27a2c", Pod:"coredns-7d764666f9-pvwlp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5393636397f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:48.618417 containerd[1455]: 2026-04-13 20:10:48.582 [INFO][5354] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" Apr 13 20:10:48.618417 containerd[1455]: 2026-04-13 20:10:48.582 [INFO][5354] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" iface="eth0" netns="" Apr 13 20:10:48.618417 containerd[1455]: 2026-04-13 20:10:48.582 [INFO][5354] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" Apr 13 20:10:48.618417 containerd[1455]: 2026-04-13 20:10:48.582 [INFO][5354] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" Apr 13 20:10:48.618417 containerd[1455]: 2026-04-13 20:10:48.604 [INFO][5362] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" HandleID="k8s-pod-network.36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" Workload="172--239--193--192-k8s-coredns--7d764666f9--pvwlp-eth0" Apr 13 20:10:48.618417 containerd[1455]: 2026-04-13 20:10:48.604 [INFO][5362] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:48.618417 containerd[1455]: 2026-04-13 20:10:48.604 [INFO][5362] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:48.618417 containerd[1455]: 2026-04-13 20:10:48.611 [WARNING][5362] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" HandleID="k8s-pod-network.36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" Workload="172--239--193--192-k8s-coredns--7d764666f9--pvwlp-eth0" Apr 13 20:10:48.618417 containerd[1455]: 2026-04-13 20:10:48.611 [INFO][5362] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" HandleID="k8s-pod-network.36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" Workload="172--239--193--192-k8s-coredns--7d764666f9--pvwlp-eth0" Apr 13 20:10:48.618417 containerd[1455]: 2026-04-13 20:10:48.613 [INFO][5362] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:48.618417 containerd[1455]: 2026-04-13 20:10:48.615 [INFO][5354] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb" Apr 13 20:10:48.619185 containerd[1455]: time="2026-04-13T20:10:48.618486530Z" level=info msg="TearDown network for sandbox \"36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb\" successfully" Apr 13 20:10:48.624027 containerd[1455]: time="2026-04-13T20:10:48.623993704Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:10:48.624119 containerd[1455]: time="2026-04-13T20:10:48.624090254Z" level=info msg="RemovePodSandbox \"36b8f9be09e42d39d061b886f1c59cb05117b52439a84e830ce8306c5e3e7edb\" returns successfully" Apr 13 20:10:48.624667 containerd[1455]: time="2026-04-13T20:10:48.624647055Z" level=info msg="StopPodSandbox for \"aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c\"" Apr 13 20:10:48.663089 systemd[1]: run-containerd-runc-k8s.io-6fa8ecd0c3cf6f7ba76097a63435a4338972c3a5085549e318e9064e34d1347a-runc.F95XfF.mount: Deactivated successfully. Apr 13 20:10:48.739096 containerd[1455]: 2026-04-13 20:10:48.697 [WARNING][5377] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--2hljs-eth0", GenerateName:"calico-apiserver-7b9d58c8c6-", Namespace:"calico-system", SelfLink:"", UID:"bd97d4d8-3ec3-43d7-ba64-c8ae0cc8d162", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b9d58c8c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92", Pod:"calico-apiserver-7b9d58c8c6-2hljs", 
Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif47d518ecdd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:48.739096 containerd[1455]: 2026-04-13 20:10:48.697 [INFO][5377] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" Apr 13 20:10:48.739096 containerd[1455]: 2026-04-13 20:10:48.697 [INFO][5377] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" iface="eth0" netns="" Apr 13 20:10:48.739096 containerd[1455]: 2026-04-13 20:10:48.697 [INFO][5377] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" Apr 13 20:10:48.739096 containerd[1455]: 2026-04-13 20:10:48.697 [INFO][5377] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" Apr 13 20:10:48.739096 containerd[1455]: 2026-04-13 20:10:48.719 [INFO][5406] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" HandleID="k8s-pod-network.aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" Workload="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--2hljs-eth0" Apr 13 20:10:48.739096 containerd[1455]: 2026-04-13 20:10:48.719 [INFO][5406] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:48.739096 containerd[1455]: 2026-04-13 20:10:48.720 [INFO][5406] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:10:48.739096 containerd[1455]: 2026-04-13 20:10:48.726 [WARNING][5406] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" HandleID="k8s-pod-network.aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" Workload="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--2hljs-eth0" Apr 13 20:10:48.739096 containerd[1455]: 2026-04-13 20:10:48.726 [INFO][5406] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" HandleID="k8s-pod-network.aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" Workload="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--2hljs-eth0" Apr 13 20:10:48.739096 containerd[1455]: 2026-04-13 20:10:48.730 [INFO][5406] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:48.739096 containerd[1455]: 2026-04-13 20:10:48.732 [INFO][5377] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" Apr 13 20:10:48.739096 containerd[1455]: time="2026-04-13T20:10:48.738725680Z" level=info msg="TearDown network for sandbox \"aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c\" successfully" Apr 13 20:10:48.739096 containerd[1455]: time="2026-04-13T20:10:48.738785620Z" level=info msg="StopPodSandbox for \"aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c\" returns successfully" Apr 13 20:10:48.740548 containerd[1455]: time="2026-04-13T20:10:48.740509672Z" level=info msg="RemovePodSandbox for \"aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c\"" Apr 13 20:10:48.740548 containerd[1455]: time="2026-04-13T20:10:48.740535402Z" level=info msg="Forcibly stopping sandbox \"aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c\"" Apr 13 20:10:48.848504 containerd[1455]: 2026-04-13 20:10:48.808 [WARNING][5422] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--2hljs-eth0", GenerateName:"calico-apiserver-7b9d58c8c6-", Namespace:"calico-system", SelfLink:"", UID:"bd97d4d8-3ec3-43d7-ba64-c8ae0cc8d162", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b9d58c8c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"ecf1babaf3ba8d07a92e61e294166fc6c039b7262460a1a09092682ccb024f92", Pod:"calico-apiserver-7b9d58c8c6-2hljs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif47d518ecdd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:48.848504 containerd[1455]: 2026-04-13 20:10:48.808 [INFO][5422] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" Apr 13 20:10:48.848504 containerd[1455]: 2026-04-13 20:10:48.808 [INFO][5422] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" iface="eth0" netns="" Apr 13 20:10:48.848504 containerd[1455]: 2026-04-13 20:10:48.808 [INFO][5422] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" Apr 13 20:10:48.848504 containerd[1455]: 2026-04-13 20:10:48.808 [INFO][5422] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" Apr 13 20:10:48.848504 containerd[1455]: 2026-04-13 20:10:48.835 [INFO][5429] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" HandleID="k8s-pod-network.aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" Workload="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--2hljs-eth0" Apr 13 20:10:48.848504 containerd[1455]: 2026-04-13 20:10:48.835 [INFO][5429] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:48.848504 containerd[1455]: 2026-04-13 20:10:48.835 [INFO][5429] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:48.848504 containerd[1455]: 2026-04-13 20:10:48.841 [WARNING][5429] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" HandleID="k8s-pod-network.aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" Workload="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--2hljs-eth0" Apr 13 20:10:48.848504 containerd[1455]: 2026-04-13 20:10:48.842 [INFO][5429] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" HandleID="k8s-pod-network.aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" Workload="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--2hljs-eth0" Apr 13 20:10:48.848504 containerd[1455]: 2026-04-13 20:10:48.844 [INFO][5429] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:48.848504 containerd[1455]: 2026-04-13 20:10:48.846 [INFO][5422] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c" Apr 13 20:10:48.849073 containerd[1455]: time="2026-04-13T20:10:48.848537871Z" level=info msg="TearDown network for sandbox \"aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c\" successfully" Apr 13 20:10:48.851667 containerd[1455]: time="2026-04-13T20:10:48.851630773Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:10:48.851805 containerd[1455]: time="2026-04-13T20:10:48.851780533Z" level=info msg="RemovePodSandbox \"aad2247653d5c7416a60d9505ea9b34e4d3e4ca7cb8c379a5484a642f4c2671c\" returns successfully" Apr 13 20:10:48.852600 containerd[1455]: time="2026-04-13T20:10:48.852292363Z" level=info msg="StopPodSandbox for \"31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0\"" Apr 13 20:10:48.917136 containerd[1455]: 2026-04-13 20:10:48.884 [WARNING][5443] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-goldmane--9f7667bb8--wnbgx-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"fae17250-1960-43f1-bcdd-744eb4b3f5bd", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a", Pod:"goldmane-9f7667bb8-wnbgx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.122.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calie1e3360de88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:48.917136 containerd[1455]: 2026-04-13 20:10:48.885 [INFO][5443] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" Apr 13 20:10:48.917136 containerd[1455]: 2026-04-13 20:10:48.885 [INFO][5443] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" iface="eth0" netns="" Apr 13 20:10:48.917136 containerd[1455]: 2026-04-13 20:10:48.885 [INFO][5443] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" Apr 13 20:10:48.917136 containerd[1455]: 2026-04-13 20:10:48.885 [INFO][5443] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" Apr 13 20:10:48.917136 containerd[1455]: 2026-04-13 20:10:48.904 [INFO][5450] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" HandleID="k8s-pod-network.31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" Workload="172--239--193--192-k8s-goldmane--9f7667bb8--wnbgx-eth0" Apr 13 20:10:48.917136 containerd[1455]: 2026-04-13 20:10:48.904 [INFO][5450] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:48.917136 containerd[1455]: 2026-04-13 20:10:48.904 [INFO][5450] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:48.917136 containerd[1455]: 2026-04-13 20:10:48.910 [WARNING][5450] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" HandleID="k8s-pod-network.31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" Workload="172--239--193--192-k8s-goldmane--9f7667bb8--wnbgx-eth0" Apr 13 20:10:48.917136 containerd[1455]: 2026-04-13 20:10:48.910 [INFO][5450] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" HandleID="k8s-pod-network.31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" Workload="172--239--193--192-k8s-goldmane--9f7667bb8--wnbgx-eth0" Apr 13 20:10:48.917136 containerd[1455]: 2026-04-13 20:10:48.912 [INFO][5450] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:48.917136 containerd[1455]: 2026-04-13 20:10:48.914 [INFO][5443] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" Apr 13 20:10:48.917531 containerd[1455]: time="2026-04-13T20:10:48.917178393Z" level=info msg="TearDown network for sandbox \"31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0\" successfully" Apr 13 20:10:48.917531 containerd[1455]: time="2026-04-13T20:10:48.917205763Z" level=info msg="StopPodSandbox for \"31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0\" returns successfully" Apr 13 20:10:48.918385 containerd[1455]: time="2026-04-13T20:10:48.918051924Z" level=info msg="RemovePodSandbox for \"31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0\"" Apr 13 20:10:48.918385 containerd[1455]: time="2026-04-13T20:10:48.918090544Z" level=info msg="Forcibly stopping sandbox \"31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0\"" Apr 13 20:10:48.984715 containerd[1455]: 2026-04-13 20:10:48.953 [WARNING][5464] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-goldmane--9f7667bb8--wnbgx-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"fae17250-1960-43f1-bcdd-744eb4b3f5bd", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"77292ffb062f130abfe03935ebd5fc6f44a18c471824679103d0304ae439ef5a", Pod:"goldmane-9f7667bb8-wnbgx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.122.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie1e3360de88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:48.984715 containerd[1455]: 2026-04-13 20:10:48.953 [INFO][5464] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" Apr 13 20:10:48.984715 containerd[1455]: 2026-04-13 20:10:48.953 [INFO][5464] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" iface="eth0" netns="" Apr 13 20:10:48.984715 containerd[1455]: 2026-04-13 20:10:48.953 [INFO][5464] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" Apr 13 20:10:48.984715 containerd[1455]: 2026-04-13 20:10:48.953 [INFO][5464] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" Apr 13 20:10:48.984715 containerd[1455]: 2026-04-13 20:10:48.973 [INFO][5472] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" HandleID="k8s-pod-network.31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" Workload="172--239--193--192-k8s-goldmane--9f7667bb8--wnbgx-eth0" Apr 13 20:10:48.984715 containerd[1455]: 2026-04-13 20:10:48.973 [INFO][5472] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:48.984715 containerd[1455]: 2026-04-13 20:10:48.973 [INFO][5472] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:48.984715 containerd[1455]: 2026-04-13 20:10:48.978 [WARNING][5472] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" HandleID="k8s-pod-network.31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" Workload="172--239--193--192-k8s-goldmane--9f7667bb8--wnbgx-eth0" Apr 13 20:10:48.984715 containerd[1455]: 2026-04-13 20:10:48.978 [INFO][5472] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" HandleID="k8s-pod-network.31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" Workload="172--239--193--192-k8s-goldmane--9f7667bb8--wnbgx-eth0" Apr 13 20:10:48.984715 containerd[1455]: 2026-04-13 20:10:48.979 [INFO][5472] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:48.984715 containerd[1455]: 2026-04-13 20:10:48.981 [INFO][5464] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0" Apr 13 20:10:48.984715 containerd[1455]: time="2026-04-13T20:10:48.984132775Z" level=info msg="TearDown network for sandbox \"31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0\" successfully" Apr 13 20:10:48.988165 containerd[1455]: time="2026-04-13T20:10:48.988137238Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:10:48.988243 containerd[1455]: time="2026-04-13T20:10:48.988198308Z" level=info msg="RemovePodSandbox \"31c7480711beab14f13181e6ca38e241cba0f257abacdff2f8a0f21a5378aaa0\" returns successfully" Apr 13 20:10:48.988666 containerd[1455]: time="2026-04-13T20:10:48.988626279Z" level=info msg="StopPodSandbox for \"c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da\"" Apr 13 20:10:49.051851 containerd[1455]: 2026-04-13 20:10:49.019 [WARNING][5486] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--fxrn8-eth0", GenerateName:"calico-apiserver-7b9d58c8c6-", Namespace:"calico-system", SelfLink:"", UID:"a7e119dc-3238-4cbb-af6c-ff92f19fcb51", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b9d58c8c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d", Pod:"calico-apiserver-7b9d58c8c6-fxrn8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali337f2feebcb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:49.051851 containerd[1455]: 2026-04-13 20:10:49.019 [INFO][5486] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" Apr 13 20:10:49.051851 containerd[1455]: 2026-04-13 20:10:49.019 [INFO][5486] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" iface="eth0" netns="" Apr 13 20:10:49.051851 containerd[1455]: 2026-04-13 20:10:49.019 [INFO][5486] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" Apr 13 20:10:49.051851 containerd[1455]: 2026-04-13 20:10:49.019 [INFO][5486] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" Apr 13 20:10:49.051851 containerd[1455]: 2026-04-13 20:10:49.037 [INFO][5493] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" HandleID="k8s-pod-network.c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" Workload="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--fxrn8-eth0" Apr 13 20:10:49.051851 containerd[1455]: 2026-04-13 20:10:49.038 [INFO][5493] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:49.051851 containerd[1455]: 2026-04-13 20:10:49.038 [INFO][5493] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:49.051851 containerd[1455]: 2026-04-13 20:10:49.044 [WARNING][5493] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" HandleID="k8s-pod-network.c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" Workload="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--fxrn8-eth0" Apr 13 20:10:49.051851 containerd[1455]: 2026-04-13 20:10:49.044 [INFO][5493] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" HandleID="k8s-pod-network.c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" Workload="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--fxrn8-eth0" Apr 13 20:10:49.051851 containerd[1455]: 2026-04-13 20:10:49.046 [INFO][5493] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:49.051851 containerd[1455]: 2026-04-13 20:10:49.049 [INFO][5486] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" Apr 13 20:10:49.051851 containerd[1455]: time="2026-04-13T20:10:49.051732677Z" level=info msg="TearDown network for sandbox \"c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da\" successfully" Apr 13 20:10:49.051851 containerd[1455]: time="2026-04-13T20:10:49.051758457Z" level=info msg="StopPodSandbox for \"c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da\" returns successfully" Apr 13 20:10:49.052585 containerd[1455]: time="2026-04-13T20:10:49.052561848Z" level=info msg="RemovePodSandbox for \"c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da\"" Apr 13 20:10:49.052642 containerd[1455]: time="2026-04-13T20:10:49.052591988Z" level=info msg="Forcibly stopping sandbox \"c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da\"" Apr 13 20:10:49.129258 containerd[1455]: 2026-04-13 20:10:49.093 [WARNING][5507] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--fxrn8-eth0", GenerateName:"calico-apiserver-7b9d58c8c6-", Namespace:"calico-system", SelfLink:"", UID:"a7e119dc-3238-4cbb-af6c-ff92f19fcb51", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b9d58c8c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"4b38553f11710f964e25d14ccdc8a53c84d98ac38e4cdc2cc006237f81dc233d", Pod:"calico-apiserver-7b9d58c8c6-fxrn8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali337f2feebcb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:49.129258 containerd[1455]: 2026-04-13 20:10:49.094 [INFO][5507] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" Apr 13 20:10:49.129258 containerd[1455]: 2026-04-13 20:10:49.094 [INFO][5507] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" iface="eth0" netns="" Apr 13 20:10:49.129258 containerd[1455]: 2026-04-13 20:10:49.094 [INFO][5507] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" Apr 13 20:10:49.129258 containerd[1455]: 2026-04-13 20:10:49.094 [INFO][5507] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" Apr 13 20:10:49.129258 containerd[1455]: 2026-04-13 20:10:49.114 [INFO][5515] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" HandleID="k8s-pod-network.c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" Workload="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--fxrn8-eth0" Apr 13 20:10:49.129258 containerd[1455]: 2026-04-13 20:10:49.114 [INFO][5515] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:49.129258 containerd[1455]: 2026-04-13 20:10:49.114 [INFO][5515] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:49.129258 containerd[1455]: 2026-04-13 20:10:49.120 [WARNING][5515] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" HandleID="k8s-pod-network.c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" Workload="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--fxrn8-eth0" Apr 13 20:10:49.129258 containerd[1455]: 2026-04-13 20:10:49.120 [INFO][5515] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" HandleID="k8s-pod-network.c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" Workload="172--239--193--192-k8s-calico--apiserver--7b9d58c8c6--fxrn8-eth0" Apr 13 20:10:49.129258 containerd[1455]: 2026-04-13 20:10:49.122 [INFO][5515] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:49.129258 containerd[1455]: 2026-04-13 20:10:49.125 [INFO][5507] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da" Apr 13 20:10:49.129646 containerd[1455]: time="2026-04-13T20:10:49.129303108Z" level=info msg="TearDown network for sandbox \"c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da\" successfully" Apr 13 20:10:49.132735 containerd[1455]: time="2026-04-13T20:10:49.132708822Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:10:49.132838 containerd[1455]: time="2026-04-13T20:10:49.132760802Z" level=info msg="RemovePodSandbox \"c42302d23319f9118f4d2fe4dae384d9c5c683c6fe1b92ac0adbaec5a2dc77da\" returns successfully" Apr 13 20:10:49.133933 containerd[1455]: time="2026-04-13T20:10:49.133610953Z" level=info msg="StopPodSandbox for \"b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02\"" Apr 13 20:10:49.202341 containerd[1455]: 2026-04-13 20:10:49.166 [WARNING][5531] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-coredns--7d764666f9--rwvfl-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"84d67424-50c0-442a-9169-b582a1cca729", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1", Pod:"coredns-7d764666f9-rwvfl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali450bce1eaec", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:49.202341 containerd[1455]: 2026-04-13 20:10:49.167 [INFO][5531] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" Apr 13 20:10:49.202341 containerd[1455]: 2026-04-13 20:10:49.167 [INFO][5531] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" iface="eth0" netns="" Apr 13 20:10:49.202341 containerd[1455]: 2026-04-13 20:10:49.167 [INFO][5531] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" Apr 13 20:10:49.202341 containerd[1455]: 2026-04-13 20:10:49.167 [INFO][5531] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" Apr 13 20:10:49.202341 containerd[1455]: 2026-04-13 20:10:49.189 [INFO][5538] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" HandleID="k8s-pod-network.b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" Workload="172--239--193--192-k8s-coredns--7d764666f9--rwvfl-eth0" Apr 13 20:10:49.202341 containerd[1455]: 2026-04-13 20:10:49.189 [INFO][5538] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:49.202341 containerd[1455]: 2026-04-13 20:10:49.189 [INFO][5538] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:49.202341 containerd[1455]: 2026-04-13 20:10:49.195 [WARNING][5538] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" HandleID="k8s-pod-network.b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" Workload="172--239--193--192-k8s-coredns--7d764666f9--rwvfl-eth0" Apr 13 20:10:49.202341 containerd[1455]: 2026-04-13 20:10:49.195 [INFO][5538] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" HandleID="k8s-pod-network.b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" Workload="172--239--193--192-k8s-coredns--7d764666f9--rwvfl-eth0" Apr 13 20:10:49.202341 containerd[1455]: 2026-04-13 20:10:49.197 [INFO][5538] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:49.202341 containerd[1455]: 2026-04-13 20:10:49.200 [INFO][5531] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" Apr 13 20:10:49.202908 containerd[1455]: time="2026-04-13T20:10:49.202362656Z" level=info msg="TearDown network for sandbox \"b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02\" successfully" Apr 13 20:10:49.202908 containerd[1455]: time="2026-04-13T20:10:49.202384296Z" level=info msg="StopPodSandbox for \"b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02\" returns successfully" Apr 13 20:10:49.202908 containerd[1455]: time="2026-04-13T20:10:49.202843777Z" level=info msg="RemovePodSandbox for \"b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02\"" Apr 13 20:10:49.202908 containerd[1455]: time="2026-04-13T20:10:49.202889637Z" level=info msg="Forcibly stopping sandbox \"b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02\"" Apr 13 20:10:49.288082 containerd[1455]: 2026-04-13 20:10:49.236 [WARNING][5552] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-coredns--7d764666f9--rwvfl-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"84d67424-50c0-442a-9169-b582a1cca729", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"f31a00bbdc371bc22e3fdf2fe22c4645b24badf1c885869b2ac7121f38c6bfa1", Pod:"coredns-7d764666f9-rwvfl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali450bce1eaec", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:49.288082 containerd[1455]: 2026-04-13 20:10:49.236 [INFO][5552] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" Apr 13 20:10:49.288082 containerd[1455]: 2026-04-13 20:10:49.236 [INFO][5552] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" iface="eth0" netns="" Apr 13 20:10:49.288082 containerd[1455]: 2026-04-13 20:10:49.236 [INFO][5552] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" Apr 13 20:10:49.288082 containerd[1455]: 2026-04-13 20:10:49.236 [INFO][5552] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" Apr 13 20:10:49.288082 containerd[1455]: 2026-04-13 20:10:49.269 [INFO][5559] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" HandleID="k8s-pod-network.b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" Workload="172--239--193--192-k8s-coredns--7d764666f9--rwvfl-eth0" Apr 13 20:10:49.288082 containerd[1455]: 2026-04-13 20:10:49.269 [INFO][5559] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:49.288082 containerd[1455]: 2026-04-13 20:10:49.269 [INFO][5559] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:49.288082 containerd[1455]: 2026-04-13 20:10:49.279 [WARNING][5559] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" HandleID="k8s-pod-network.b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" Workload="172--239--193--192-k8s-coredns--7d764666f9--rwvfl-eth0" Apr 13 20:10:49.288082 containerd[1455]: 2026-04-13 20:10:49.280 [INFO][5559] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" HandleID="k8s-pod-network.b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" Workload="172--239--193--192-k8s-coredns--7d764666f9--rwvfl-eth0" Apr 13 20:10:49.288082 containerd[1455]: 2026-04-13 20:10:49.281 [INFO][5559] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:49.288082 containerd[1455]: 2026-04-13 20:10:49.285 [INFO][5552] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02" Apr 13 20:10:49.289790 containerd[1455]: time="2026-04-13T20:10:49.288170515Z" level=info msg="TearDown network for sandbox \"b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02\" successfully" Apr 13 20:10:49.291670 containerd[1455]: time="2026-04-13T20:10:49.291630819Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:10:49.291775 containerd[1455]: time="2026-04-13T20:10:49.291706449Z" level=info msg="RemovePodSandbox \"b28c07e658c8ab482eb1bd318647303a1c472beaa49307594050fce1fc896f02\" returns successfully" Apr 13 20:10:49.292121 containerd[1455]: time="2026-04-13T20:10:49.292101979Z" level=info msg="StopPodSandbox for \"0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248\"" Apr 13 20:10:49.358466 containerd[1455]: 2026-04-13 20:10:49.322 [WARNING][5573] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-csi--node--driver--sgfs5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380", Pod:"csi-node-driver-sgfs5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali63b9667d366", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:49.358466 containerd[1455]: 2026-04-13 20:10:49.323 [INFO][5573] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" Apr 13 20:10:49.358466 containerd[1455]: 2026-04-13 20:10:49.323 [INFO][5573] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" iface="eth0" netns="" Apr 13 20:10:49.358466 containerd[1455]: 2026-04-13 20:10:49.323 [INFO][5573] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" Apr 13 20:10:49.358466 containerd[1455]: 2026-04-13 20:10:49.323 [INFO][5573] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" Apr 13 20:10:49.358466 containerd[1455]: 2026-04-13 20:10:49.344 [INFO][5580] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" HandleID="k8s-pod-network.0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" Workload="172--239--193--192-k8s-csi--node--driver--sgfs5-eth0" Apr 13 20:10:49.358466 containerd[1455]: 2026-04-13 20:10:49.344 [INFO][5580] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:49.358466 containerd[1455]: 2026-04-13 20:10:49.344 [INFO][5580] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:49.358466 containerd[1455]: 2026-04-13 20:10:49.350 [WARNING][5580] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" HandleID="k8s-pod-network.0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" Workload="172--239--193--192-k8s-csi--node--driver--sgfs5-eth0" Apr 13 20:10:49.358466 containerd[1455]: 2026-04-13 20:10:49.350 [INFO][5580] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" HandleID="k8s-pod-network.0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" Workload="172--239--193--192-k8s-csi--node--driver--sgfs5-eth0" Apr 13 20:10:49.358466 containerd[1455]: 2026-04-13 20:10:49.351 [INFO][5580] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:49.358466 containerd[1455]: 2026-04-13 20:10:49.355 [INFO][5573] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" Apr 13 20:10:49.359112 containerd[1455]: time="2026-04-13T20:10:49.358484360Z" level=info msg="TearDown network for sandbox \"0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248\" successfully" Apr 13 20:10:49.359112 containerd[1455]: time="2026-04-13T20:10:49.358505210Z" level=info msg="StopPodSandbox for \"0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248\" returns successfully" Apr 13 20:10:49.359112 containerd[1455]: time="2026-04-13T20:10:49.358944141Z" level=info msg="RemovePodSandbox for \"0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248\"" Apr 13 20:10:49.359112 containerd[1455]: time="2026-04-13T20:10:49.358967471Z" level=info msg="Forcibly stopping sandbox \"0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248\"" Apr 13 20:10:49.426909 containerd[1455]: 2026-04-13 20:10:49.394 [WARNING][5594] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-csi--node--driver--sgfs5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ce35ba45-78f2-4c5a-8951-d0c6d05d9ea9", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"817a8b5ea3a0c6be7bbaec718187245944d68ccf358a6c977035d93cf19a3380", Pod:"csi-node-driver-sgfs5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali63b9667d366", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:49.426909 containerd[1455]: 2026-04-13 20:10:49.394 [INFO][5594] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" Apr 13 20:10:49.426909 containerd[1455]: 2026-04-13 20:10:49.394 [INFO][5594] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" iface="eth0" netns="" Apr 13 20:10:49.426909 containerd[1455]: 2026-04-13 20:10:49.394 [INFO][5594] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" Apr 13 20:10:49.426909 containerd[1455]: 2026-04-13 20:10:49.394 [INFO][5594] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" Apr 13 20:10:49.426909 containerd[1455]: 2026-04-13 20:10:49.414 [INFO][5601] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" HandleID="k8s-pod-network.0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" Workload="172--239--193--192-k8s-csi--node--driver--sgfs5-eth0" Apr 13 20:10:49.426909 containerd[1455]: 2026-04-13 20:10:49.414 [INFO][5601] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:49.426909 containerd[1455]: 2026-04-13 20:10:49.414 [INFO][5601] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:49.426909 containerd[1455]: 2026-04-13 20:10:49.420 [WARNING][5601] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" HandleID="k8s-pod-network.0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" Workload="172--239--193--192-k8s-csi--node--driver--sgfs5-eth0" Apr 13 20:10:49.426909 containerd[1455]: 2026-04-13 20:10:49.420 [INFO][5601] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" HandleID="k8s-pod-network.0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" Workload="172--239--193--192-k8s-csi--node--driver--sgfs5-eth0" Apr 13 20:10:49.426909 containerd[1455]: 2026-04-13 20:10:49.421 [INFO][5601] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:49.426909 containerd[1455]: 2026-04-13 20:10:49.424 [INFO][5594] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248" Apr 13 20:10:49.426909 containerd[1455]: time="2026-04-13T20:10:49.426237513Z" level=info msg="TearDown network for sandbox \"0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248\" successfully" Apr 13 20:10:49.430088 containerd[1455]: time="2026-04-13T20:10:49.429912766Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:10:49.430088 containerd[1455]: time="2026-04-13T20:10:49.429982286Z" level=info msg="RemovePodSandbox \"0956f9429c96ae0754ef4063b00fbc46ff422d59902fa42cd4e878ea2a5ad248\" returns successfully" Apr 13 20:10:49.430456 containerd[1455]: time="2026-04-13T20:10:49.430406746Z" level=info msg="StopPodSandbox for \"5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555\"" Apr 13 20:10:49.491361 containerd[1455]: 2026-04-13 20:10:49.459 [WARNING][5615] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" WorkloadEndpoint="172--239--193--192-k8s-whisker--785cc5bd95--dmhqs-eth0" Apr 13 20:10:49.491361 containerd[1455]: 2026-04-13 20:10:49.459 [INFO][5615] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" Apr 13 20:10:49.491361 containerd[1455]: 2026-04-13 20:10:49.459 [INFO][5615] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" iface="eth0" netns="" Apr 13 20:10:49.491361 containerd[1455]: 2026-04-13 20:10:49.459 [INFO][5615] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" Apr 13 20:10:49.491361 containerd[1455]: 2026-04-13 20:10:49.459 [INFO][5615] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" Apr 13 20:10:49.491361 containerd[1455]: 2026-04-13 20:10:49.480 [INFO][5622] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" HandleID="k8s-pod-network.5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" Workload="172--239--193--192-k8s-whisker--785cc5bd95--dmhqs-eth0" Apr 13 20:10:49.491361 containerd[1455]: 2026-04-13 20:10:49.480 [INFO][5622] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:49.491361 containerd[1455]: 2026-04-13 20:10:49.480 [INFO][5622] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:49.491361 containerd[1455]: 2026-04-13 20:10:49.485 [WARNING][5622] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" HandleID="k8s-pod-network.5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" Workload="172--239--193--192-k8s-whisker--785cc5bd95--dmhqs-eth0" Apr 13 20:10:49.491361 containerd[1455]: 2026-04-13 20:10:49.486 [INFO][5622] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" HandleID="k8s-pod-network.5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" Workload="172--239--193--192-k8s-whisker--785cc5bd95--dmhqs-eth0" Apr 13 20:10:49.491361 containerd[1455]: 2026-04-13 20:10:49.487 [INFO][5622] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:49.491361 containerd[1455]: 2026-04-13 20:10:49.489 [INFO][5615] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" Apr 13 20:10:49.492004 containerd[1455]: time="2026-04-13T20:10:49.491395323Z" level=info msg="TearDown network for sandbox \"5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555\" successfully" Apr 13 20:10:49.492004 containerd[1455]: time="2026-04-13T20:10:49.491419703Z" level=info msg="StopPodSandbox for \"5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555\" returns successfully" Apr 13 20:10:49.492004 containerd[1455]: time="2026-04-13T20:10:49.491820694Z" level=info msg="RemovePodSandbox for \"5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555\"" Apr 13 20:10:49.492004 containerd[1455]: time="2026-04-13T20:10:49.491843574Z" level=info msg="Forcibly stopping sandbox \"5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555\"" Apr 13 20:10:49.555016 containerd[1455]: 2026-04-13 20:10:49.521 [WARNING][5636] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" WorkloadEndpoint="172--239--193--192-k8s-whisker--785cc5bd95--dmhqs-eth0" Apr 13 20:10:49.555016 containerd[1455]: 2026-04-13 20:10:49.521 [INFO][5636] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" Apr 13 20:10:49.555016 containerd[1455]: 2026-04-13 20:10:49.521 [INFO][5636] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" iface="eth0" netns="" Apr 13 20:10:49.555016 containerd[1455]: 2026-04-13 20:10:49.521 [INFO][5636] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" Apr 13 20:10:49.555016 containerd[1455]: 2026-04-13 20:10:49.521 [INFO][5636] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" Apr 13 20:10:49.555016 containerd[1455]: 2026-04-13 20:10:49.541 [INFO][5643] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" HandleID="k8s-pod-network.5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" Workload="172--239--193--192-k8s-whisker--785cc5bd95--dmhqs-eth0" Apr 13 20:10:49.555016 containerd[1455]: 2026-04-13 20:10:49.542 [INFO][5643] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:49.555016 containerd[1455]: 2026-04-13 20:10:49.542 [INFO][5643] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:49.555016 containerd[1455]: 2026-04-13 20:10:49.547 [WARNING][5643] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" HandleID="k8s-pod-network.5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" Workload="172--239--193--192-k8s-whisker--785cc5bd95--dmhqs-eth0" Apr 13 20:10:49.555016 containerd[1455]: 2026-04-13 20:10:49.547 [INFO][5643] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" HandleID="k8s-pod-network.5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" Workload="172--239--193--192-k8s-whisker--785cc5bd95--dmhqs-eth0" Apr 13 20:10:49.555016 containerd[1455]: 2026-04-13 20:10:49.548 [INFO][5643] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:49.555016 containerd[1455]: 2026-04-13 20:10:49.551 [INFO][5636] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555" Apr 13 20:10:49.555016 containerd[1455]: time="2026-04-13T20:10:49.553645410Z" level=info msg="TearDown network for sandbox \"5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555\" successfully" Apr 13 20:10:49.557801 containerd[1455]: time="2026-04-13T20:10:49.557770695Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:10:49.557946 containerd[1455]: time="2026-04-13T20:10:49.557836795Z" level=info msg="RemovePodSandbox \"5eeddd0186c3a537577b0766f5b63e93f47c070c5a30973be4bf9829ed438555\" returns successfully" Apr 13 20:10:49.558481 containerd[1455]: time="2026-04-13T20:10:49.558444145Z" level=info msg="StopPodSandbox for \"beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4\"" Apr 13 20:10:49.660882 containerd[1455]: 2026-04-13 20:10:49.594 [WARNING][5657] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-calico--kube--controllers--7c47c8f584--8rsdp-eth0", GenerateName:"calico-kube-controllers-7c47c8f584-", Namespace:"calico-system", SelfLink:"", UID:"1352fc40-7380-4f40-97a5-2db21f2695cc", ResourceVersion:"1129", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c47c8f584", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75", Pod:"calico-kube-controllers-7c47c8f584-8rsdp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9d6eec9c2ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:49.660882 containerd[1455]: 2026-04-13 20:10:49.596 [INFO][5657] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" Apr 13 20:10:49.660882 containerd[1455]: 2026-04-13 20:10:49.596 [INFO][5657] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" iface="eth0" netns="" Apr 13 20:10:49.660882 containerd[1455]: 2026-04-13 20:10:49.596 [INFO][5657] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" Apr 13 20:10:49.660882 containerd[1455]: 2026-04-13 20:10:49.596 [INFO][5657] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" Apr 13 20:10:49.660882 containerd[1455]: 2026-04-13 20:10:49.645 [INFO][5667] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" HandleID="k8s-pod-network.beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" Workload="172--239--193--192-k8s-calico--kube--controllers--7c47c8f584--8rsdp-eth0" Apr 13 20:10:49.660882 containerd[1455]: 2026-04-13 20:10:49.645 [INFO][5667] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:49.660882 containerd[1455]: 2026-04-13 20:10:49.646 [INFO][5667] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:49.660882 containerd[1455]: 2026-04-13 20:10:49.651 [WARNING][5667] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" HandleID="k8s-pod-network.beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" Workload="172--239--193--192-k8s-calico--kube--controllers--7c47c8f584--8rsdp-eth0" Apr 13 20:10:49.660882 containerd[1455]: 2026-04-13 20:10:49.651 [INFO][5667] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" HandleID="k8s-pod-network.beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" Workload="172--239--193--192-k8s-calico--kube--controllers--7c47c8f584--8rsdp-eth0" Apr 13 20:10:49.660882 containerd[1455]: 2026-04-13 20:10:49.653 [INFO][5667] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:49.660882 containerd[1455]: 2026-04-13 20:10:49.657 [INFO][5657] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" Apr 13 20:10:49.660882 containerd[1455]: time="2026-04-13T20:10:49.660843050Z" level=info msg="TearDown network for sandbox \"beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4\" successfully" Apr 13 20:10:49.662729 containerd[1455]: time="2026-04-13T20:10:49.660865800Z" level=info msg="StopPodSandbox for \"beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4\" returns successfully" Apr 13 20:10:49.662729 containerd[1455]: time="2026-04-13T20:10:49.661369780Z" level=info msg="RemovePodSandbox for \"beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4\"" Apr 13 20:10:49.662729 containerd[1455]: time="2026-04-13T20:10:49.661396220Z" level=info msg="Forcibly stopping sandbox \"beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4\"" Apr 13 20:10:49.745802 containerd[1455]: 2026-04-13 20:10:49.703 [WARNING][5681] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--193--192-k8s-calico--kube--controllers--7c47c8f584--8rsdp-eth0", GenerateName:"calico-kube-controllers-7c47c8f584-", Namespace:"calico-system", SelfLink:"", UID:"1352fc40-7380-4f40-97a5-2db21f2695cc", ResourceVersion:"1129", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 10, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c47c8f584", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-193-192", ContainerID:"562ab881efc6895af0f7b99d93ff6afa843b028ad4df90b81bd26eafa0248e75", Pod:"calico-kube-controllers-7c47c8f584-8rsdp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9d6eec9c2ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:49.745802 containerd[1455]: 2026-04-13 20:10:49.703 [INFO][5681] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" Apr 13 20:10:49.745802 containerd[1455]: 2026-04-13 20:10:49.703 [INFO][5681] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" iface="eth0" netns="" Apr 13 20:10:49.745802 containerd[1455]: 2026-04-13 20:10:49.703 [INFO][5681] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" Apr 13 20:10:49.745802 containerd[1455]: 2026-04-13 20:10:49.703 [INFO][5681] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" Apr 13 20:10:49.745802 containerd[1455]: 2026-04-13 20:10:49.731 [INFO][5688] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" HandleID="k8s-pod-network.beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" Workload="172--239--193--192-k8s-calico--kube--controllers--7c47c8f584--8rsdp-eth0" Apr 13 20:10:49.745802 containerd[1455]: 2026-04-13 20:10:49.731 [INFO][5688] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:49.745802 containerd[1455]: 2026-04-13 20:10:49.731 [INFO][5688] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:49.745802 containerd[1455]: 2026-04-13 20:10:49.738 [WARNING][5688] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" HandleID="k8s-pod-network.beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" Workload="172--239--193--192-k8s-calico--kube--controllers--7c47c8f584--8rsdp-eth0" Apr 13 20:10:49.745802 containerd[1455]: 2026-04-13 20:10:49.738 [INFO][5688] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" HandleID="k8s-pod-network.beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" Workload="172--239--193--192-k8s-calico--kube--controllers--7c47c8f584--8rsdp-eth0" Apr 13 20:10:49.745802 containerd[1455]: 2026-04-13 20:10:49.739 [INFO][5688] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:49.745802 containerd[1455]: 2026-04-13 20:10:49.743 [INFO][5681] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4" Apr 13 20:10:49.746675 containerd[1455]: time="2026-04-13T20:10:49.745855178Z" level=info msg="TearDown network for sandbox \"beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4\" successfully" Apr 13 20:10:49.751504 containerd[1455]: time="2026-04-13T20:10:49.751107933Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:10:49.751504 containerd[1455]: time="2026-04-13T20:10:49.751214093Z" level=info msg="RemovePodSandbox \"beb1f44d8f3b94b9cd171bfa2c881419d7a865a446eb77ab544005acc49e63b4\" returns successfully" Apr 13 20:10:53.453723 kubelet[2546]: I0413 20:10:53.453677 2546 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:10:59.427749 kubelet[2546]: E0413 20:10:59.427663 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 13 20:11:09.427213 kubelet[2546]: E0413 20:11:09.427075 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 13 20:11:09.455430 kubelet[2546]: I0413 20:11:09.455086 2546 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:11:15.427713 kubelet[2546]: E0413 20:11:15.427675 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 13 20:11:17.427950 kubelet[2546]: E0413 20:11:17.427909 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 13 20:11:34.132292 systemd[1]: run-containerd-runc-k8s.io-add094a7b15b0624f8aaef11a1ca7d2555856ea7aa9a2751164475a5413e52ec-runc.bYsrth.mount: Deactivated successfully. Apr 13 20:11:36.627203 systemd[1]: run-containerd-runc-k8s.io-51cfe7a4faeda1cc6c5aaa1e3935edfee1bf659be946c742f29197f8d90c5a66-runc.wCVSs2.mount: Deactivated successfully. 
Apr 13 20:11:38.429683 kubelet[2546]: E0413 20:11:38.429589 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 13 20:11:43.427973 kubelet[2546]: E0413 20:11:43.427937 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 13 20:11:44.431685 kubelet[2546]: E0413 20:11:44.430960 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 13 20:11:44.610582 systemd[1]: run-containerd-runc-k8s.io-51cfe7a4faeda1cc6c5aaa1e3935edfee1bf659be946c742f29197f8d90c5a66-runc.NoKOvt.mount: Deactivated successfully. Apr 13 20:11:46.323163 systemd[1]: run-containerd-runc-k8s.io-add094a7b15b0624f8aaef11a1ca7d2555856ea7aa9a2751164475a5413e52ec-runc.raGhJS.mount: Deactivated successfully. Apr 13 20:11:59.134181 systemd[1]: Started sshd@7-172.239.193.192:22-50.85.169.122:36840.service - OpenSSH per-connection server daemon (50.85.169.122:36840). Apr 13 20:11:59.856914 sshd[5943]: Accepted publickey for core from 50.85.169.122 port 36840 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:11:59.859110 sshd[5943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:11:59.865992 systemd-logind[1439]: New session 8 of user core. Apr 13 20:11:59.872385 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 13 20:12:00.441319 sshd[5943]: pam_unix(sshd:session): session closed for user core Apr 13 20:12:00.446605 systemd[1]: sshd@7-172.239.193.192:22-50.85.169.122:36840.service: Deactivated successfully. Apr 13 20:12:00.450786 systemd[1]: session-8.scope: Deactivated successfully. 
Apr 13 20:12:00.452120 systemd-logind[1439]: Session 8 logged out. Waiting for processes to exit. Apr 13 20:12:00.453381 systemd-logind[1439]: Removed session 8. Apr 13 20:12:01.427700 kubelet[2546]: E0413 20:12:01.427567 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 13 20:12:05.567131 systemd[1]: Started sshd@8-172.239.193.192:22-50.85.169.122:40730.service - OpenSSH per-connection server daemon (50.85.169.122:40730). Apr 13 20:12:06.290432 sshd[5956]: Accepted publickey for core from 50.85.169.122 port 40730 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:12:06.292179 sshd[5956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:12:06.296756 systemd-logind[1439]: New session 9 of user core. Apr 13 20:12:06.302014 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 13 20:12:06.858270 sshd[5956]: pam_unix(sshd:session): session closed for user core Apr 13 20:12:06.862333 systemd-logind[1439]: Session 9 logged out. Waiting for processes to exit. Apr 13 20:12:06.863506 systemd[1]: sshd@8-172.239.193.192:22-50.85.169.122:40730.service: Deactivated successfully. Apr 13 20:12:06.865582 systemd[1]: session-9.scope: Deactivated successfully. Apr 13 20:12:06.866444 systemd-logind[1439]: Removed session 9. Apr 13 20:12:11.990258 systemd[1]: Started sshd@9-172.239.193.192:22-50.85.169.122:60452.service - OpenSSH per-connection server daemon (50.85.169.122:60452). Apr 13 20:12:12.709924 sshd[5990]: Accepted publickey for core from 50.85.169.122 port 60452 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:12:12.711294 sshd[5990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:12:12.717026 systemd-logind[1439]: New session 10 of user core. 
Apr 13 20:12:12.725054 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 13 20:12:13.278336 sshd[5990]: pam_unix(sshd:session): session closed for user core Apr 13 20:12:13.283360 systemd[1]: sshd@9-172.239.193.192:22-50.85.169.122:60452.service: Deactivated successfully. Apr 13 20:12:13.285810 systemd[1]: session-10.scope: Deactivated successfully. Apr 13 20:12:13.286646 systemd-logind[1439]: Session 10 logged out. Waiting for processes to exit. Apr 13 20:12:13.288032 systemd-logind[1439]: Removed session 10. Apr 13 20:12:13.400016 systemd[1]: Started sshd@10-172.239.193.192:22-50.85.169.122:60458.service - OpenSSH per-connection server daemon (50.85.169.122:60458). Apr 13 20:12:14.114908 sshd[6004]: Accepted publickey for core from 50.85.169.122 port 60458 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:12:14.116341 sshd[6004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:12:14.122793 systemd-logind[1439]: New session 11 of user core. Apr 13 20:12:14.132036 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 13 20:12:14.705766 sshd[6004]: pam_unix(sshd:session): session closed for user core Apr 13 20:12:14.711756 systemd[1]: sshd@10-172.239.193.192:22-50.85.169.122:60458.service: Deactivated successfully. Apr 13 20:12:14.714430 systemd[1]: session-11.scope: Deactivated successfully. Apr 13 20:12:14.715605 systemd-logind[1439]: Session 11 logged out. Waiting for processes to exit. Apr 13 20:12:14.717037 systemd-logind[1439]: Removed session 11. Apr 13 20:12:14.827044 systemd[1]: Started sshd@11-172.239.193.192:22-50.85.169.122:60468.service - OpenSSH per-connection server daemon (50.85.169.122:60468). 
Apr 13 20:12:15.544471 sshd[6034]: Accepted publickey for core from 50.85.169.122 port 60468 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:12:15.546396 sshd[6034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:12:15.552033 systemd-logind[1439]: New session 12 of user core. Apr 13 20:12:15.558031 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 13 20:12:16.109317 sshd[6034]: pam_unix(sshd:session): session closed for user core Apr 13 20:12:16.114810 systemd-logind[1439]: Session 12 logged out. Waiting for processes to exit. Apr 13 20:12:16.115155 systemd[1]: sshd@11-172.239.193.192:22-50.85.169.122:60468.service: Deactivated successfully. Apr 13 20:12:16.118486 systemd[1]: session-12.scope: Deactivated successfully. Apr 13 20:12:16.121191 systemd-logind[1439]: Removed session 12. Apr 13 20:12:18.428269 kubelet[2546]: E0413 20:12:18.427800 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 13 20:12:18.646647 systemd[1]: run-containerd-runc-k8s.io-6fa8ecd0c3cf6f7ba76097a63435a4338972c3a5085549e318e9064e34d1347a-runc.VUDWVV.mount: Deactivated successfully. Apr 13 20:12:21.232780 systemd[1]: Started sshd@12-172.239.193.192:22-50.85.169.122:60954.service - OpenSSH per-connection server daemon (50.85.169.122:60954). Apr 13 20:12:21.959920 sshd[6088]: Accepted publickey for core from 50.85.169.122 port 60954 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:12:21.961035 sshd[6088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:12:21.965724 systemd-logind[1439]: New session 13 of user core. Apr 13 20:12:21.971063 systemd[1]: Started session-13.scope - Session 13 of User core. 
Apr 13 20:12:22.540013 sshd[6088]: pam_unix(sshd:session): session closed for user core Apr 13 20:12:22.545072 systemd[1]: sshd@12-172.239.193.192:22-50.85.169.122:60954.service: Deactivated successfully. Apr 13 20:12:22.547530 systemd[1]: session-13.scope: Deactivated successfully. Apr 13 20:12:22.548582 systemd-logind[1439]: Session 13 logged out. Waiting for processes to exit. Apr 13 20:12:22.549639 systemd-logind[1439]: Removed session 13. Apr 13 20:12:22.664470 systemd[1]: Started sshd@13-172.239.193.192:22-50.85.169.122:60970.service - OpenSSH per-connection server daemon (50.85.169.122:60970). Apr 13 20:12:23.386693 sshd[6101]: Accepted publickey for core from 50.85.169.122 port 60970 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:12:23.387489 sshd[6101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:12:23.393758 systemd-logind[1439]: New session 14 of user core. Apr 13 20:12:23.401032 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 13 20:12:24.114145 sshd[6101]: pam_unix(sshd:session): session closed for user core Apr 13 20:12:24.118178 systemd-logind[1439]: Session 14 logged out. Waiting for processes to exit. Apr 13 20:12:24.118995 systemd[1]: sshd@13-172.239.193.192:22-50.85.169.122:60970.service: Deactivated successfully. Apr 13 20:12:24.121891 systemd[1]: session-14.scope: Deactivated successfully. Apr 13 20:12:24.123202 systemd-logind[1439]: Removed session 14. Apr 13 20:12:24.243170 systemd[1]: Started sshd@14-172.239.193.192:22-50.85.169.122:60982.service - OpenSSH per-connection server daemon (50.85.169.122:60982). Apr 13 20:12:24.972904 sshd[6111]: Accepted publickey for core from 50.85.169.122 port 60982 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:12:24.974111 sshd[6111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:12:24.979940 systemd-logind[1439]: New session 15 of user core. 
Apr 13 20:12:24.983047 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 13 20:12:25.958134 sshd[6111]: pam_unix(sshd:session): session closed for user core Apr 13 20:12:25.962314 systemd[1]: sshd@14-172.239.193.192:22-50.85.169.122:60982.service: Deactivated successfully. Apr 13 20:12:25.965333 systemd[1]: session-15.scope: Deactivated successfully. Apr 13 20:12:25.966636 systemd-logind[1439]: Session 15 logged out. Waiting for processes to exit. Apr 13 20:12:25.967826 systemd-logind[1439]: Removed session 15. Apr 13 20:12:26.093379 systemd[1]: Started sshd@15-172.239.193.192:22-50.85.169.122:60994.service - OpenSSH per-connection server daemon (50.85.169.122:60994). Apr 13 20:12:26.803276 sshd[6136]: Accepted publickey for core from 50.85.169.122 port 60994 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:12:26.804914 sshd[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:12:26.808987 systemd-logind[1439]: New session 16 of user core. Apr 13 20:12:26.814004 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 13 20:12:27.470560 sshd[6136]: pam_unix(sshd:session): session closed for user core Apr 13 20:12:27.476021 systemd-logind[1439]: Session 16 logged out. Waiting for processes to exit. Apr 13 20:12:27.477014 systemd[1]: sshd@15-172.239.193.192:22-50.85.169.122:60994.service: Deactivated successfully. Apr 13 20:12:27.479488 systemd[1]: session-16.scope: Deactivated successfully. Apr 13 20:12:27.481130 systemd-logind[1439]: Removed session 16. Apr 13 20:12:27.604317 systemd[1]: Started sshd@16-172.239.193.192:22-50.85.169.122:32768.service - OpenSSH per-connection server daemon (50.85.169.122:32768). 
Apr 13 20:12:28.315770 sshd[6149]: Accepted publickey for core from 50.85.169.122 port 32768 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:12:28.317410 sshd[6149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:12:28.321714 systemd-logind[1439]: New session 17 of user core. Apr 13 20:12:28.325017 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 13 20:12:28.874204 sshd[6149]: pam_unix(sshd:session): session closed for user core Apr 13 20:12:28.878422 systemd[1]: sshd@16-172.239.193.192:22-50.85.169.122:32768.service: Deactivated successfully. Apr 13 20:12:28.881371 systemd[1]: session-17.scope: Deactivated successfully. Apr 13 20:12:28.882740 systemd-logind[1439]: Session 17 logged out. Waiting for processes to exit. Apr 13 20:12:28.884103 systemd-logind[1439]: Removed session 17. Apr 13 20:12:32.428949 kubelet[2546]: E0413 20:12:32.427820 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Apr 13 20:12:34.002109 systemd[1]: Started sshd@17-172.239.193.192:22-50.85.169.122:38002.service - OpenSSH per-connection server daemon (50.85.169.122:38002). Apr 13 20:12:34.137587 systemd[1]: run-containerd-runc-k8s.io-add094a7b15b0624f8aaef11a1ca7d2555856ea7aa9a2751164475a5413e52ec-runc.15XgjM.mount: Deactivated successfully. Apr 13 20:12:34.724901 sshd[6164]: Accepted publickey for core from 50.85.169.122 port 38002 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw Apr 13 20:12:34.726770 sshd[6164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:12:34.731956 systemd-logind[1439]: New session 18 of user core. Apr 13 20:12:34.737067 systemd[1]: Started session-18.scope - Session 18 of User core. 
Apr 13 20:12:35.301346 sshd[6164]: pam_unix(sshd:session): session closed for user core
Apr 13 20:12:35.305149 systemd[1]: sshd@17-172.239.193.192:22-50.85.169.122:38002.service: Deactivated successfully.
Apr 13 20:12:35.307857 systemd[1]: session-18.scope: Deactivated successfully.
Apr 13 20:12:35.309939 systemd-logind[1439]: Session 18 logged out. Waiting for processes to exit.
Apr 13 20:12:35.311321 systemd-logind[1439]: Removed session 18.
Apr 13 20:12:37.427224 kubelet[2546]: E0413 20:12:37.427187 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:12:40.430510 systemd[1]: Started sshd@18-172.239.193.192:22-50.85.169.122:33518.service - OpenSSH per-connection server daemon (50.85.169.122:33518).
Apr 13 20:12:41.139157 sshd[6214]: Accepted publickey for core from 50.85.169.122 port 33518 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:12:41.141298 sshd[6214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:12:41.147748 systemd-logind[1439]: New session 19 of user core.
Apr 13 20:12:41.155097 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 13 20:12:41.713371 sshd[6214]: pam_unix(sshd:session): session closed for user core
Apr 13 20:12:41.717127 systemd-logind[1439]: Session 19 logged out. Waiting for processes to exit.
Apr 13 20:12:41.718432 systemd[1]: sshd@18-172.239.193.192:22-50.85.169.122:33518.service: Deactivated successfully.
Apr 13 20:12:41.720967 systemd[1]: session-19.scope: Deactivated successfully.
Apr 13 20:12:41.721917 systemd-logind[1439]: Removed session 19.
Apr 13 20:12:46.328071 systemd[1]: run-containerd-runc-k8s.io-add094a7b15b0624f8aaef11a1ca7d2555856ea7aa9a2751164475a5413e52ec-runc.hcO0Ap.mount: Deactivated successfully.
Apr 13 20:12:46.428053 kubelet[2546]: E0413 20:12:46.427997 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Apr 13 20:12:46.849325 systemd[1]: Started sshd@19-172.239.193.192:22-50.85.169.122:33530.service - OpenSSH per-connection server daemon (50.85.169.122:33530).
Apr 13 20:12:47.588776 sshd[6264]: Accepted publickey for core from 50.85.169.122 port 33530 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:12:47.591120 sshd[6264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:12:47.596716 systemd-logind[1439]: New session 20 of user core.
Apr 13 20:12:47.602014 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 13 20:12:48.156735 sshd[6264]: pam_unix(sshd:session): session closed for user core
Apr 13 20:12:48.162770 systemd[1]: sshd@19-172.239.193.192:22-50.85.169.122:33530.service: Deactivated successfully.
Apr 13 20:12:48.166272 systemd[1]: session-20.scope: Deactivated successfully.
Apr 13 20:12:48.167632 systemd-logind[1439]: Session 20 logged out. Waiting for processes to exit.
Apr 13 20:12:48.169971 systemd-logind[1439]: Removed session 20.
Apr 13 20:12:53.283490 systemd[1]: Started sshd@20-172.239.193.192:22-50.85.169.122:47922.service - OpenSSH per-connection server daemon (50.85.169.122:47922).
Apr 13 20:12:54.005022 sshd[6301]: Accepted publickey for core from 50.85.169.122 port 47922 ssh2: RSA SHA256:hiF6wWKr5iOn7uiUEHMt8X8qG6ChlO8ybswaMaPOcRw
Apr 13 20:12:54.006606 sshd[6301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:12:54.011269 systemd-logind[1439]: New session 21 of user core.
Apr 13 20:12:54.020008 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 13 20:12:54.588721 sshd[6301]: pam_unix(sshd:session): session closed for user core
Apr 13 20:12:54.593422 systemd-logind[1439]: Session 21 logged out. Waiting for processes to exit.
Apr 13 20:12:54.594402 systemd[1]: sshd@20-172.239.193.192:22-50.85.169.122:47922.service: Deactivated successfully.
Apr 13 20:12:54.597239 systemd[1]: session-21.scope: Deactivated successfully.
Apr 13 20:12:54.598282 systemd-logind[1439]: Removed session 21.
Apr 13 20:12:57.427827 kubelet[2546]: E0413 20:12:57.427340 2546 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"