Jul 6 23:55:05.907279 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025
Jul 6 23:55:05.907308 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:55:05.907324 kernel: BIOS-provided physical RAM map:
Jul 6 23:55:05.907333 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 6 23:55:05.907341 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 6 23:55:05.907349 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 6 23:55:05.907359 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 6 23:55:05.907368 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 6 23:55:05.907377 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 6 23:55:05.907390 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 6 23:55:05.907398 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 6 23:55:05.907407 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 6 23:55:05.907421 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 6 23:55:05.907430 kernel: NX (Execute Disable) protection: active
Jul 6 23:55:05.907441 kernel: APIC: Static calls initialized
Jul 6 23:55:05.907457 kernel: SMBIOS 2.8 present.
Jul 6 23:55:05.907467 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 6 23:55:05.907476 kernel: Hypervisor detected: KVM
Jul 6 23:55:05.907486 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 6 23:55:05.907495 kernel: kvm-clock: using sched offset of 2929016592 cycles
Jul 6 23:55:05.907504 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 6 23:55:05.907514 kernel: tsc: Detected 2794.748 MHz processor
Jul 6 23:55:05.907525 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 6 23:55:05.907534 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 6 23:55:05.907544 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 6 23:55:05.907558 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 6 23:55:05.907579 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 6 23:55:05.907589 kernel: Using GB pages for direct mapping
Jul 6 23:55:05.907599 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:55:05.907637 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 6 23:55:05.907648 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:55:05.907657 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:55:05.907667 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:55:05.907681 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 6 23:55:05.907690 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:55:05.907706 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:55:05.907731 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:55:05.907740 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:55:05.907750 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 6 23:55:05.907760 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 6 23:55:05.907776 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 6 23:55:05.907790 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 6 23:55:05.907800 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 6 23:55:05.907810 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 6 23:55:05.907820 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 6 23:55:05.907830 kernel: No NUMA configuration found
Jul 6 23:55:05.907841 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 6 23:55:05.907851 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jul 6 23:55:05.907865 kernel: Zone ranges:
Jul 6 23:55:05.907875 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 6 23:55:05.907886 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 6 23:55:05.907896 kernel: Normal empty
Jul 6 23:55:05.907905 kernel: Movable zone start for each node
Jul 6 23:55:05.907915 kernel: Early memory node ranges
Jul 6 23:55:05.907925 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 6 23:55:05.907936 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 6 23:55:05.907946 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 6 23:55:05.907960 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 6 23:55:05.907974 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 6 23:55:05.907984 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 6 23:55:05.907995 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 6 23:55:05.908005 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 6 23:55:05.908015 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 6 23:55:05.908025 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 6 23:55:05.908035 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 6 23:55:05.908045 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 6 23:55:05.908060 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 6 23:55:05.908070 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 6 23:55:05.908080 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 6 23:55:05.908090 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 6 23:55:05.908099 kernel: TSC deadline timer available
Jul 6 23:55:05.908109 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 6 23:55:05.908119 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 6 23:55:05.908129 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 6 23:55:05.908142 kernel: kvm-guest: setup PV sched yield
Jul 6 23:55:05.908156 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 6 23:55:05.908167 kernel: Booting paravirtualized kernel on KVM
Jul 6 23:55:05.908177 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 6 23:55:05.908187 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 6 23:55:05.908197 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Jul 6 23:55:05.908207 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Jul 6 23:55:05.908217 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 6 23:55:05.908227 kernel: kvm-guest: PV spinlocks enabled
Jul 6 23:55:05.908236 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 6 23:55:05.908252 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:55:05.908262 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:55:05.908272 kernel: random: crng init done
Jul 6 23:55:05.908282 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:55:05.908292 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:55:05.908301 kernel: Fallback order for Node 0: 0
Jul 6 23:55:05.908311 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jul 6 23:55:05.908320 kernel: Policy zone: DMA32
Jul 6 23:55:05.908334 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:55:05.908345 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 136900K reserved, 0K cma-reserved)
Jul 6 23:55:05.908355 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 6 23:55:05.908365 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 6 23:55:05.908375 kernel: ftrace: allocated 149 pages with 4 groups
Jul 6 23:55:05.908385 kernel: Dynamic Preempt: voluntary
Jul 6 23:55:05.908395 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:55:05.908411 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:55:05.908422 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 6 23:55:05.908436 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:55:05.908446 kernel: Rude variant of Tasks RCU enabled.
Jul 6 23:55:05.908456 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:55:05.908466 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:55:05.908480 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 6 23:55:05.908490 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 6 23:55:05.908500 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:55:05.908510 kernel: Console: colour VGA+ 80x25
Jul 6 23:55:05.908520 kernel: printk: console [ttyS0] enabled
Jul 6 23:55:05.908530 kernel: ACPI: Core revision 20230628
Jul 6 23:55:05.908544 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 6 23:55:05.908555 kernel: APIC: Switch to symmetric I/O mode setup
Jul 6 23:55:05.908574 kernel: x2apic enabled
Jul 6 23:55:05.908585 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 6 23:55:05.908595 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 6 23:55:05.908606 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 6 23:55:05.908616 kernel: kvm-guest: setup PV IPIs
Jul 6 23:55:05.908641 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 6 23:55:05.908652 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 6 23:55:05.908663 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 6 23:55:05.908674 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 6 23:55:05.908688 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 6 23:55:05.908699 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 6 23:55:05.908745 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 6 23:55:05.908757 kernel: Spectre V2 : Mitigation: Retpolines
Jul 6 23:55:05.908769 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 6 23:55:05.908784 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 6 23:55:05.908795 kernel: RETBleed: Mitigation: untrained return thunk
Jul 6 23:55:05.908809 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 6 23:55:05.908820 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 6 23:55:05.908831 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 6 23:55:05.908843 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 6 23:55:05.908854 kernel: x86/bugs: return thunk changed
Jul 6 23:55:05.908865 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 6 23:55:05.908879 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 6 23:55:05.908890 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 6 23:55:05.908901 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 6 23:55:05.908912 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 6 23:55:05.908923 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 6 23:55:05.908934 kernel: Freeing SMP alternatives memory: 32K
Jul 6 23:55:05.908945 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:55:05.908956 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:55:05.908967 kernel: landlock: Up and running.
Jul 6 23:55:05.908981 kernel: SELinux: Initializing.
Jul 6 23:55:05.908992 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:55:05.909003 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:55:05.909014 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 6 23:55:05.909025 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:55:05.909037 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:55:05.909048 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:55:05.909059 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 6 23:55:05.909073 kernel: ... version: 0
Jul 6 23:55:05.909087 kernel: ... bit width: 48
Jul 6 23:55:05.909098 kernel: ... generic registers: 6
Jul 6 23:55:05.909109 kernel: ... value mask: 0000ffffffffffff
Jul 6 23:55:05.909121 kernel: ... max period: 00007fffffffffff
Jul 6 23:55:05.909131 kernel: ... fixed-purpose events: 0
Jul 6 23:55:05.909141 kernel: ... event mask: 000000000000003f
Jul 6 23:55:05.909152 kernel: signal: max sigframe size: 1776
Jul 6 23:55:05.909162 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:55:05.909173 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:55:05.909187 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:55:05.909197 kernel: smpboot: x86: Booting SMP configuration:
Jul 6 23:55:05.909208 kernel: .... node #0, CPUs: #1 #2 #3
Jul 6 23:55:05.909218 kernel: smp: Brought up 1 node, 4 CPUs
Jul 6 23:55:05.909228 kernel: smpboot: Max logical packages: 1
Jul 6 23:55:05.909239 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 6 23:55:05.909249 kernel: devtmpfs: initialized
Jul 6 23:55:05.909260 kernel: x86/mm: Memory block size: 128MB
Jul 6 23:55:05.909270 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:55:05.909281 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 6 23:55:05.909295 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:55:05.909306 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:55:05.909316 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:55:05.909327 kernel: audit: type=2000 audit(1751846104.319:1): state=initialized audit_enabled=0 res=1
Jul 6 23:55:05.909337 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:55:05.909348 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 6 23:55:05.909358 kernel: cpuidle: using governor menu
Jul 6 23:55:05.909369 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:55:05.909379 kernel: dca service started, version 1.12.1
Jul 6 23:55:05.909395 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 6 23:55:05.909408 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 6 23:55:05.909419 kernel: PCI: Using configuration type 1 for base access
Jul 6 23:55:05.909432 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 6 23:55:05.909442 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:55:05.909453 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:55:05.909467 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:55:05.909478 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:55:05.909492 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:55:05.909502 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:55:05.909513 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:55:05.909523 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:55:05.909534 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 6 23:55:05.909544 kernel: ACPI: Interpreter enabled
Jul 6 23:55:05.909554 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 6 23:55:05.909573 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 6 23:55:05.909584 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 6 23:55:05.909595 kernel: PCI: Using E820 reservations for host bridge windows
Jul 6 23:55:05.909610 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 6 23:55:05.909621 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 6 23:55:05.909907 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:55:05.910093 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 6 23:55:05.910261 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 6 23:55:05.910278 kernel: PCI host bridge to bus 0000:00
Jul 6 23:55:05.910461 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 6 23:55:05.910640 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 6 23:55:05.910822 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 6 23:55:05.910976 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 6 23:55:05.911121 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 6 23:55:05.911261 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 6 23:55:05.911403 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 6 23:55:05.911625 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 6 23:55:05.911878 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 6 23:55:05.912040 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jul 6 23:55:05.912178 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jul 6 23:55:05.912303 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jul 6 23:55:05.912434 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 6 23:55:05.912589 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 6 23:55:05.912755 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jul 6 23:55:05.912908 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jul 6 23:55:05.913035 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 6 23:55:05.913192 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 6 23:55:05.913320 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jul 6 23:55:05.913445 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jul 6 23:55:05.913588 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 6 23:55:05.913769 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 6 23:55:05.913930 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jul 6 23:55:05.914060 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jul 6 23:55:05.914185 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 6 23:55:05.914311 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jul 6 23:55:05.914457 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 6 23:55:05.914592 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 6 23:55:05.914764 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 6 23:55:05.914899 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jul 6 23:55:05.915024 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jul 6 23:55:05.915170 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 6 23:55:05.915296 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jul 6 23:55:05.915306 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 6 23:55:05.915318 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 6 23:55:05.915326 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 6 23:55:05.915334 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 6 23:55:05.915341 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 6 23:55:05.915349 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 6 23:55:05.915357 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 6 23:55:05.915364 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 6 23:55:05.915372 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 6 23:55:05.915379 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 6 23:55:05.915389 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 6 23:55:05.915397 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 6 23:55:05.915406 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 6 23:55:05.915416 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 6 23:55:05.915425 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 6 23:55:05.915435 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 6 23:55:05.915444 kernel: iommu: Default domain type: Translated
Jul 6 23:55:05.915454 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 6 23:55:05.915463 kernel: PCI: Using ACPI for IRQ routing
Jul 6 23:55:05.915476 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 6 23:55:05.915485 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 6 23:55:05.915494 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 6 23:55:05.915658 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 6 23:55:05.915810 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 6 23:55:05.915939 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 6 23:55:05.915949 kernel: vgaarb: loaded
Jul 6 23:55:05.915957 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 6 23:55:05.915970 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 6 23:55:05.915978 kernel: clocksource: Switched to clocksource kvm-clock
Jul 6 23:55:05.915985 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:55:05.915993 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:55:05.916001 kernel: pnp: PnP ACPI init
Jul 6 23:55:05.916148 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 6 23:55:05.916160 kernel: pnp: PnP ACPI: found 6 devices
Jul 6 23:55:05.916168 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 6 23:55:05.916176 kernel: NET: Registered PF_INET protocol family
Jul 6 23:55:05.916188 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:55:05.916196 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 6 23:55:05.916204 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:55:05.916212 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:55:05.916220 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 6 23:55:05.916228 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 6 23:55:05.916236 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:55:05.916244 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:55:05.916255 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:55:05.916263 kernel: NET: Registered PF_XDP protocol family
Jul 6 23:55:05.916381 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 6 23:55:05.916496 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 6 23:55:05.916622 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 6 23:55:05.916768 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 6 23:55:05.916889 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 6 23:55:05.917004 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 6 23:55:05.917014 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:55:05.917027 kernel: Initialise system trusted keyrings
Jul 6 23:55:05.917035 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 6 23:55:05.917043 kernel: Key type asymmetric registered
Jul 6 23:55:05.917050 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:55:05.917058 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 6 23:55:05.917066 kernel: io scheduler mq-deadline registered
Jul 6 23:55:05.917074 kernel: io scheduler kyber registered
Jul 6 23:55:05.917081 kernel: io scheduler bfq registered
Jul 6 23:55:05.917089 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 6 23:55:05.917100 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 6 23:55:05.917108 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 6 23:55:05.917115 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 6 23:55:05.917123 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:55:05.917131 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 6 23:55:05.917138 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 6 23:55:05.917146 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 6 23:55:05.917154 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 6 23:55:05.917309 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 6 23:55:05.917325 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 6 23:55:05.917444 kernel: rtc_cmos 00:04: registered as rtc0
Jul 6 23:55:05.917572 kernel: rtc_cmos 00:04: setting system clock to 2025-07-06T23:55:05 UTC (1751846105)
Jul 6 23:55:05.917693 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 6 23:55:05.917703 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 6 23:55:05.917728 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:55:05.917739 kernel: Segment Routing with IPv6
Jul 6 23:55:05.917749 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:55:05.917766 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:55:05.917776 kernel: Key type dns_resolver registered
Jul 6 23:55:05.917787 kernel: IPI shorthand broadcast: enabled
Jul 6 23:55:05.917797 kernel: sched_clock: Marking stable (1023002098, 102128400)->(1138526895, -13396397)
Jul 6 23:55:05.917805 kernel: registered taskstats version 1
Jul 6 23:55:05.917812 kernel: Loading compiled-in X.509 certificates
Jul 6 23:55:05.917820 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b'
Jul 6 23:55:05.917828 kernel: Key type .fscrypt registered
Jul 6 23:55:05.917836 kernel: Key type fscrypt-provisioning registered
Jul 6 23:55:05.917847 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:55:05.917854 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:55:05.917862 kernel: ima: No architecture policies found
Jul 6 23:55:05.917870 kernel: clk: Disabling unused clocks
Jul 6 23:55:05.917877 kernel: Freeing unused kernel image (initmem) memory: 42868K
Jul 6 23:55:05.917885 kernel: Write protecting the kernel read-only data: 36864k
Jul 6 23:55:05.917893 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Jul 6 23:55:05.917901 kernel: Run /init as init process
Jul 6 23:55:05.917908 kernel: with arguments:
Jul 6 23:55:05.917919 kernel: /init
Jul 6 23:55:05.917926 kernel: with environment:
Jul 6 23:55:05.917934 kernel: HOME=/
Jul 6 23:55:05.917941 kernel: TERM=linux
Jul 6 23:55:05.917948 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:55:05.917958 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 6 23:55:05.917968 systemd[1]: Detected virtualization kvm.
Jul 6 23:55:05.917979 systemd[1]: Detected architecture x86-64.
Jul 6 23:55:05.917987 systemd[1]: Running in initrd.
Jul 6 23:55:05.917995 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:55:05.918003 systemd[1]: Hostname set to <localhost>.
Jul 6 23:55:05.918011 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:55:05.918019 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:55:05.918027 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:55:05.918035 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:55:05.918047 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:55:05.918055 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:55:05.918077 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:55:05.918088 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:55:05.918098 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:55:05.918109 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:55:05.918118 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:55:05.918126 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:55:05.918134 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:55:05.918143 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:55:05.918151 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:55:05.918159 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:55:05.918167 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:55:05.918176 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:55:05.918187 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:55:05.918195 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 6 23:55:05.918204 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:55:05.918212 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:55:05.918221 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:55:05.918229 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:55:05.918238 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:55:05.918246 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:55:05.918257 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:55:05.918265 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:55:05.918273 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:55:05.918282 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:55:05.918290 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:55:05.918299 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:55:05.918307 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:55:05.918315 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:55:05.918327 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:55:05.918356 systemd-journald[192]: Collecting audit messages is disabled.
Jul 6 23:55:05.918379 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:55:05.918388 systemd-journald[192]: Journal started
Jul 6 23:55:05.918411 systemd-journald[192]: Runtime Journal (/run/log/journal/0e7751e4e31844e9a2874d72650da55d) is 6.0M, max 48.4M, 42.3M free.
Jul 6 23:55:05.905512 systemd-modules-load[193]: Inserted module 'overlay'
Jul 6 23:55:05.943668 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:55:05.943686 kernel: Bridge firewalling registered
Jul 6 23:55:05.943699 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:55:05.932623 systemd-modules-load[193]: Inserted module 'br_netfilter'
Jul 6 23:55:05.945505 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:55:05.966905 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:55:05.969071 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:55:05.969814 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:55:05.972697 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:55:05.977875 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:55:05.981968 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:55:05.986001 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:55:05.991866 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:55:05.994075 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:55:05.999491 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:55:06.002780 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:55:06.021940 dracut-cmdline[231]: dracut-dracut-053
Jul 6 23:55:06.025282 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:55:06.027706 systemd-resolved[219]: Positive Trust Anchors:
Jul 6 23:55:06.027739 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:55:06.027781 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:55:06.030314 systemd-resolved[219]: Defaulting to hostname 'linux'.
Jul 6 23:55:06.031519 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:55:06.032207 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:55:06.118744 kernel: SCSI subsystem initialized
Jul 6 23:55:06.128737 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:55:06.138739 kernel: iscsi: registered transport (tcp)
Jul 6 23:55:06.159740 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:55:06.159770 kernel: QLogic iSCSI HBA Driver
Jul 6 23:55:06.204428 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:55:06.213846 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:55:06.238450 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:55:06.238511 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:55:06.238529 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 6 23:55:06.278738 kernel: raid6: avx2x4 gen() 30027 MB/s
Jul 6 23:55:06.295736 kernel: raid6: avx2x2 gen() 30602 MB/s
Jul 6 23:55:06.312765 kernel: raid6: avx2x1 gen() 25917 MB/s
Jul 6 23:55:06.312792 kernel: raid6: using algorithm avx2x2 gen() 30602 MB/s
Jul 6 23:55:06.330774 kernel: raid6: .... xor() 19835 MB/s, rmw enabled
Jul 6 23:55:06.330796 kernel: raid6: using avx2x2 recovery algorithm
Jul 6 23:55:06.350742 kernel: xor: automatically using best checksumming function avx
Jul 6 23:55:06.503750 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:55:06.515769 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:55:06.520916 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:55:06.536103 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Jul 6 23:55:06.540956 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:55:06.542462 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:55:06.560810 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Jul 6 23:55:06.591760 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:55:06.604894 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:55:06.670250 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:55:06.679459 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:55:06.693858 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:55:06.696819 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:55:06.699607 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:55:06.701952 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:55:06.706752 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 6 23:55:06.710737 kernel: cryptd: max_cpu_qlen set to 1000
Jul 6 23:55:06.714330 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 6 23:55:06.712873 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:55:06.732953 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:55:06.734091 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:55:06.737787 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 6 23:55:06.737808 kernel: AES CTR mode by8 optimization enabled
Jul 6 23:55:06.737819 kernel: libata version 3.00 loaded.
Jul 6 23:55:06.741129 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 6 23:55:06.741175 kernel: GPT:9289727 != 19775487
Jul 6 23:55:06.741187 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 6 23:55:06.741197 kernel: GPT:9289727 != 19775487
Jul 6 23:55:06.741206 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 6 23:55:06.741216 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:55:06.743835 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:55:06.751558 kernel: ahci 0000:00:1f.2: version 3.0
Jul 6 23:55:06.751817 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 6 23:55:06.751830 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jul 6 23:55:06.752079 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 6 23:55:06.745082 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:55:06.754795 kernel: scsi host0: ahci
Jul 6 23:55:06.754975 kernel: scsi host1: ahci
Jul 6 23:55:06.745144 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:55:06.762771 kernel: scsi host2: ahci
Jul 6 23:55:06.762957 kernel: scsi host3: ahci
Jul 6 23:55:06.763120 kernel: scsi host4: ahci
Jul 6 23:55:06.763277 kernel: scsi host5: ahci
Jul 6 23:55:06.763438 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jul 6 23:55:06.763455 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jul 6 23:55:06.763465 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jul 6 23:55:06.763475 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jul 6 23:55:06.763485 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jul 6 23:55:06.763495 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jul 6 23:55:06.746877 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:55:06.753002 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:55:06.757745 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:55:06.774734 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (475)
Jul 6 23:55:06.774764 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (458)
Jul 6 23:55:06.796895 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 6 23:55:06.815888 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:55:06.822105 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 6 23:55:06.828746 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 6 23:55:06.828827 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 6 23:55:06.835829 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 6 23:55:06.847856 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:55:06.849688 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:55:06.861334 disk-uuid[554]: Primary Header is updated.
Jul 6 23:55:06.861334 disk-uuid[554]: Secondary Entries is updated.
Jul 6 23:55:06.861334 disk-uuid[554]: Secondary Header is updated.
Jul 6 23:55:06.865737 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:55:06.869749 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:55:06.870325 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:55:07.081741 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 6 23:55:07.081823 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 6 23:55:07.081835 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 6 23:55:07.081846 kernel: ata3.00: applying bridge limits
Jul 6 23:55:07.081856 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 6 23:55:07.081867 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 6 23:55:07.082744 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 6 23:55:07.083737 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 6 23:55:07.084739 kernel: ata3.00: configured for UDMA/100
Jul 6 23:55:07.084766 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 6 23:55:07.129239 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 6 23:55:07.129483 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 6 23:55:07.141759 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 6 23:55:07.937759 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:55:07.938236 disk-uuid[557]: The operation has completed successfully.
Jul 6 23:55:07.965548 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:55:07.965685 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:55:07.988938 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:55:07.992146 sh[590]: Success
Jul 6 23:55:08.004730 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 6 23:55:08.037547 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:55:08.052212 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:55:08.055645 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:55:08.070978 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f
Jul 6 23:55:08.071011 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:55:08.071023 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 6 23:55:08.071989 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 6 23:55:08.072735 kernel: BTRFS info (device dm-0): using free space tree
Jul 6 23:55:08.078612 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:55:08.079473 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:55:08.087884 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:55:08.088674 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:55:08.100050 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:55:08.100089 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:55:08.100100 kernel: BTRFS info (device vda6): using free space tree
Jul 6 23:55:08.103856 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 6 23:55:08.113593 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 6 23:55:08.115787 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:55:08.206874 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:55:08.249859 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:55:08.273125 systemd-networkd[769]: lo: Link UP
Jul 6 23:55:08.273140 systemd-networkd[769]: lo: Gained carrier
Jul 6 23:55:08.275933 systemd-networkd[769]: Enumeration completed
Jul 6 23:55:08.276039 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:55:08.276442 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:55:08.276447 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:55:08.277734 systemd-networkd[769]: eth0: Link UP
Jul 6 23:55:08.277739 systemd-networkd[769]: eth0: Gained carrier
Jul 6 23:55:08.277748 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:55:08.278812 systemd[1]: Reached target network.target - Network.
Jul 6 23:55:08.302763 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.101/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 6 23:55:08.380087 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:55:08.411874 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:55:08.621647 ignition[774]: Ignition 2.19.0
Jul 6 23:55:08.621662 ignition[774]: Stage: fetch-offline
Jul 6 23:55:08.621745 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:08.621761 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:55:08.621883 ignition[774]: parsed url from cmdline: ""
Jul 6 23:55:08.621887 ignition[774]: no config URL provided
Jul 6 23:55:08.621893 ignition[774]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:55:08.621903 ignition[774]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:55:08.621935 ignition[774]: op(1): [started] loading QEMU firmware config module
Jul 6 23:55:08.621952 ignition[774]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 6 23:55:08.635452 ignition[774]: op(1): [finished] loading QEMU firmware config module
Jul 6 23:55:08.675600 ignition[774]: parsing config with SHA512: 99d7e68730f9a75e8d115497e12f30de79ad32531f1a91fe1b1b96cf103e6af58e1c23dbf21e6e57c44281233b56eba24ab4f6afa78497042b0ee09f93a80578
Jul 6 23:55:08.684938 unknown[774]: fetched base config from "system"
Jul 6 23:55:08.684960 unknown[774]: fetched user config from "qemu"
Jul 6 23:55:08.685568 ignition[774]: fetch-offline: fetch-offline passed
Jul 6 23:55:08.685675 ignition[774]: Ignition finished successfully
Jul 6 23:55:08.688562 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:55:08.690155 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 6 23:55:08.778967 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:55:08.807938 ignition[783]: Ignition 2.19.0
Jul 6 23:55:08.807950 ignition[783]: Stage: kargs
Jul 6 23:55:08.808135 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:08.808147 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:55:08.808992 ignition[783]: kargs: kargs passed
Jul 6 23:55:08.809051 ignition[783]: Ignition finished successfully
Jul 6 23:55:08.813020 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:55:08.824864 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:55:08.849613 ignition[791]: Ignition 2.19.0
Jul 6 23:55:08.849625 ignition[791]: Stage: disks
Jul 6 23:55:08.849827 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:08.849841 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:55:08.852448 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:55:08.850648 ignition[791]: disks: disks passed
Jul 6 23:55:08.854903 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:55:08.850700 ignition[791]: Ignition finished successfully
Jul 6 23:55:08.856381 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:55:08.858323 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:55:08.860338 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:55:08.861363 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:55:08.872929 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:55:08.906008 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 6 23:55:08.913881 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:55:08.921929 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:55:09.020731 kernel: EXT4-fs (vda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none.
Jul 6 23:55:09.020915 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:55:09.022695 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:55:09.042918 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:55:09.045080 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:55:09.046547 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 6 23:55:09.046605 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:55:09.054055 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810)
Jul 6 23:55:09.054087 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:55:09.046638 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:55:09.060779 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:55:09.060803 kernel: BTRFS info (device vda6): using free space tree
Jul 6 23:55:09.060818 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 6 23:55:09.054040 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:55:09.059500 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:55:09.061859 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:55:09.103929 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:55:09.110477 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:55:09.116434 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:55:09.120767 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:55:09.225738 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:55:09.235828 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:55:09.237587 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:55:09.244124 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:55:09.245300 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:55:09.265623 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:55:09.280709 ignition[923]: INFO : Ignition 2.19.0
Jul 6 23:55:09.280709 ignition[923]: INFO : Stage: mount
Jul 6 23:55:09.282703 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:09.282703 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:55:09.282703 ignition[923]: INFO : mount: mount passed
Jul 6 23:55:09.282703 ignition[923]: INFO : Ignition finished successfully
Jul 6 23:55:09.284556 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 6 23:55:09.295827 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 6 23:55:09.303948 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:55:09.319805 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (936)
Jul 6 23:55:09.319880 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:55:09.322272 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:55:09.322329 kernel: BTRFS info (device vda6): using free space tree
Jul 6 23:55:09.326753 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 6 23:55:09.329685 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:55:09.391587 ignition[953]: INFO : Ignition 2.19.0
Jul 6 23:55:09.391587 ignition[953]: INFO : Stage: files
Jul 6 23:55:09.393340 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:09.393340 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:55:09.395997 ignition[953]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:55:09.397797 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:55:09.397797 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:55:09.401503 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:55:09.402944 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:55:09.404944 unknown[953]: wrote ssh authorized keys file for user: core
Jul 6 23:55:09.406156 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:55:09.407670 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 6 23:55:09.409628 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jul 6 23:55:09.452501 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 6 23:55:09.656879 systemd-networkd[769]: eth0: Gained IPv6LL
Jul 6 23:55:09.711804 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 6 23:55:09.711804 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:55:09.715689 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:55:09.715689 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:55:09.715689 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:55:09.715689 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:55:09.715689 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:55:09.715689 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:55:09.715689 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:55:09.715689 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:55:09.715689 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:55:09.715689 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 6 23:55:09.715689 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 6 23:55:09.715689 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 6 23:55:09.715689 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jul 6 23:55:10.380735 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 6 23:55:11.127945 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 6 23:55:11.127945 ignition[953]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 6 23:55:11.131939 ignition[953]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:55:11.131939 ignition[953]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:55:11.131939 ignition[953]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 6 23:55:11.131939 ignition[953]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 6 23:55:11.131939 ignition[953]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 6 23:55:11.131939 ignition[953]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 6 23:55:11.131939 ignition[953]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 6 23:55:11.131939 ignition[953]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 6 23:55:11.158657 ignition[953]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 6 23:55:11.166154 ignition[953]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 6 23:55:11.167773 ignition[953]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 6 23:55:11.167773 ignition[953]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 6 23:55:11.167773 ignition[953]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 6 23:55:11.167773 ignition[953]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:55:11.167773 ignition[953]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:55:11.167773 ignition[953]: INFO : files: files passed
Jul 6 23:55:11.167773 ignition[953]: INFO : Ignition finished successfully
Jul 6 23:55:11.169460 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:55:11.179897 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:55:11.182668 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:55:11.184573 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:55:11.184687 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:55:11.193336 initrd-setup-root-after-ignition[981]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 6 23:55:11.196473 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:55:11.196473 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:55:11.199685 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:55:11.202813 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:55:11.205355 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:55:11.223903 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:55:11.262088 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:55:11.262262 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:55:11.263532 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:55:11.267147 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 6 23:55:11.267467 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 6 23:55:11.280886 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 6 23:55:11.312072 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:55:11.326897 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:55:11.341905 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:55:11.344495 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:55:11.348341 systemd[1]: Stopped target timers.target - Timer Units.
Jul 6 23:55:11.350572 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 6 23:55:11.352973 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:55:11.356037 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 6 23:55:11.359047 systemd[1]: Stopped target basic.target - Basic System.
Jul 6 23:55:11.361309 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 6 23:55:11.364017 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:55:11.366845 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 6 23:55:11.369577 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 6 23:55:11.372115 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:55:11.374851 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 6 23:55:11.377143 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 6 23:55:11.379165 systemd[1]: Stopped target swap.target - Swaps.
Jul 6 23:55:11.381085 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 6 23:55:11.382293 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:55:11.385016 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:55:11.387648 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:55:11.390403 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 6 23:55:11.391654 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:55:11.394525 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 6 23:55:11.395768 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:55:11.398410 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 6 23:55:11.399607 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:55:11.402413 systemd[1]: Stopped target paths.target - Path Units.
Jul 6 23:55:11.404493 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 6 23:55:11.409790 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:55:11.412963 systemd[1]: Stopped target slices.target - Slice Units.
Jul 6 23:55:11.415111 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 6 23:55:11.417467 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 6 23:55:11.418548 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:55:11.420901 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 6 23:55:11.421975 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:55:11.424361 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 6 23:55:11.425737 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:55:11.428686 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 6 23:55:11.429910 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 6 23:55:11.450886 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 6 23:55:11.453577 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 6 23:55:11.455322 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 6 23:55:11.456369 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:55:11.458674 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 6 23:55:11.459612 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:55:11.465633 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 6 23:55:11.465803 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 6 23:55:11.488294 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 6 23:55:11.531180 ignition[1008]: INFO : Ignition 2.19.0
Jul 6 23:55:11.531180 ignition[1008]: INFO : Stage: umount
Jul 6 23:55:11.533395 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:55:11.533395 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:55:11.533395 ignition[1008]: INFO : umount: umount passed
Jul 6 23:55:11.533395 ignition[1008]: INFO : Ignition finished successfully
Jul 6 23:55:11.535002 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 6 23:55:11.535146 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 6 23:55:11.537104 systemd[1]: Stopped target network.target - Network.
Jul 6 23:55:11.538820 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 6 23:55:11.538880 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 6 23:55:11.541057 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 6 23:55:11.541108 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 6 23:55:11.542300 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 6 23:55:11.542348 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 6 23:55:11.544466 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 6 23:55:11.544514 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 6 23:55:11.546932 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 6 23:55:11.549022 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 6 23:55:11.552821 systemd-networkd[769]: eth0: DHCPv6 lease lost
Jul 6 23:55:11.555541 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 6 23:55:11.555681 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 6 23:55:11.557290 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 6 23:55:11.557337 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:55:11.570838 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 6 23:55:11.570960 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 6 23:55:11.571027 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:55:11.574572 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:55:11.577226 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 6 23:55:11.577363 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 6 23:55:11.594127 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 6 23:55:11.594254 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:55:11.595703 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 6 23:55:11.595777 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:55:11.599028 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 6 23:55:11.599079 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:55:11.601891 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 6 23:55:11.602078 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:55:11.603125 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 6 23:55:11.603242 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 6 23:55:11.606327 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 6 23:55:11.606404 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:55:11.608787 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 6 23:55:11.608830 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:55:11.610949 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 6 23:55:11.611001 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:55:11.612844 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 6 23:55:11.612897 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:55:11.613616 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:55:11.613662 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:55:11.629885 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 6 23:55:11.629963 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 6 23:55:11.630019 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:55:11.633190 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 6 23:55:11.633244 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:55:11.635514 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 6 23:55:11.635566 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:55:11.637808 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:55:11.637859 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:55:11.640350 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 6 23:55:11.640470 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 6 23:55:11.683197 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 6 23:55:11.683331 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 6 23:55:11.685284 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 6 23:55:11.686965 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 6 23:55:11.687018 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 6 23:55:11.700897 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 6 23:55:11.708480 systemd[1]: Switching root.
Jul 6 23:55:11.739248 systemd-journald[192]: Journal stopped
Jul 6 23:55:12.894566 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Jul 6 23:55:12.894638 kernel: SELinux: policy capability network_peer_controls=1
Jul 6 23:55:12.894659 kernel: SELinux: policy capability open_perms=1
Jul 6 23:55:12.894675 kernel: SELinux: policy capability extended_socket_class=1
Jul 6 23:55:12.894690 kernel: SELinux: policy capability always_check_network=0
Jul 6 23:55:12.894706 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 6 23:55:12.894895 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 6 23:55:12.894910 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 6 23:55:12.894925 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 6 23:55:12.894953 kernel: audit: type=1403 audit(1751846112.114:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 6 23:55:12.894972 systemd[1]: Successfully loaded SELinux policy in 44.275ms.
Jul 6 23:55:12.895003 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.703ms.
Jul 6 23:55:12.895017 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 6 23:55:12.895030 systemd[1]: Detected virtualization kvm.
Jul 6 23:55:12.895043 systemd[1]: Detected architecture x86-64.
Jul 6 23:55:12.895063 systemd[1]: Detected first boot.
Jul 6 23:55:12.895080 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:55:12.895097 zram_generator::config[1052]: No configuration found.
Jul 6 23:55:12.895118 systemd[1]: Populated /etc with preset unit settings.
Jul 6 23:55:12.895147 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 6 23:55:12.895160 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 6 23:55:12.895173 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:55:12.895192 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 6 23:55:12.895208 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 6 23:55:12.895224 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 6 23:55:12.895242 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 6 23:55:12.895258 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 6 23:55:12.895272 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 6 23:55:12.895285 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 6 23:55:12.895298 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 6 23:55:12.895310 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:55:12.895327 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:55:12.895339 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 6 23:55:12.895351 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 6 23:55:12.895364 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 6 23:55:12.895377 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:55:12.895399 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 6 23:55:12.895413 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:55:12.895426 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 6 23:55:12.895439 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 6 23:55:12.895460 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:55:12.895473 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 6 23:55:12.895485 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:55:12.895498 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:55:12.895511 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:55:12.895523 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:55:12.895535 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 6 23:55:12.895548 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 6 23:55:12.895567 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:55:12.895584 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:55:12.895600 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:55:12.895616 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 6 23:55:12.895629 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 6 23:55:12.895642 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 6 23:55:12.895655 systemd[1]: Mounting media.mount - External Media Directory...
Jul 6 23:55:12.895667 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:55:12.895680 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 6 23:55:12.895696 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 6 23:55:12.895723 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 6 23:55:12.895738 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 6 23:55:12.895750 systemd[1]: Reached target machines.target - Containers.
Jul 6 23:55:12.895765 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 6 23:55:12.895777 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:55:12.895790 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:55:12.895803 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 6 23:55:12.895820 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:55:12.895836 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:55:12.895857 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:55:12.895874 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 6 23:55:12.895887 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:55:12.895900 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 6 23:55:12.895913 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 6 23:55:12.895925 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 6 23:55:12.895938 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 6 23:55:12.895953 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 6 23:55:12.895966 kernel: loop: module loaded
Jul 6 23:55:12.895979 kernel: fuse: init (API version 7.39)
Jul 6 23:55:12.895992 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:55:12.896008 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:55:12.896024 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:55:12.896040 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 6 23:55:12.896077 systemd-journald[1122]: Collecting audit messages is disabled.
Jul 6 23:55:12.896116 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:55:12.896132 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 6 23:55:12.896148 systemd-journald[1122]: Journal started
Jul 6 23:55:12.896175 systemd-journald[1122]: Runtime Journal (/run/log/journal/0e7751e4e31844e9a2874d72650da55d) is 6.0M, max 48.4M, 42.3M free.
Jul 6 23:55:12.657613 systemd[1]: Queued start job for default target multi-user.target.
Jul 6 23:55:12.676076 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 6 23:55:12.897174 systemd[1]: Stopped verity-setup.service.
Jul 6 23:55:12.676574 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 6 23:55:12.903155 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:55:12.903207 kernel: ACPI: bus type drm_connector registered
Jul 6 23:55:12.909746 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:55:12.911008 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 6 23:55:12.912362 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 6 23:55:12.913761 systemd[1]: Mounted media.mount - External Media Directory.
Jul 6 23:55:12.915000 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 6 23:55:12.916357 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 6 23:55:12.917777 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 6 23:55:12.919202 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 6 23:55:12.920856 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:55:12.922633 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 6 23:55:12.922900 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 6 23:55:12.924623 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:55:12.924895 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:55:12.926605 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:55:12.926958 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:55:12.928559 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:55:12.929137 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:55:12.930936 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 6 23:55:12.931172 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 6 23:55:12.932945 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:55:12.933183 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:55:12.934820 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:55:12.936445 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:55:12.938207 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 6 23:55:12.960661 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:55:12.973835 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 6 23:55:12.976634 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 6 23:55:12.977908 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 6 23:55:12.977944 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:55:12.980584 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 6 23:55:12.983684 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 6 23:55:12.986466 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 6 23:55:12.987889 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:55:12.991485 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 6 23:55:12.998792 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 6 23:55:13.000371 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:55:13.007928 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 6 23:55:13.014126 systemd-journald[1122]: Time spent on flushing to /var/log/journal/0e7751e4e31844e9a2874d72650da55d is 32.162ms for 947 entries.
Jul 6 23:55:13.014126 systemd-journald[1122]: System Journal (/var/log/journal/0e7751e4e31844e9a2874d72650da55d) is 8.0M, max 195.6M, 187.6M free.
Jul 6 23:55:13.088735 systemd-journald[1122]: Received client request to flush runtime journal.
Jul 6 23:55:13.088852 kernel: loop0: detected capacity change from 0 to 142488
Jul 6 23:55:13.009567 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:55:13.012756 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:55:13.017506 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 6 23:55:13.022028 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:55:13.028424 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 6 23:55:13.031000 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 6 23:55:13.033174 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 6 23:55:13.036337 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 6 23:55:13.045065 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 6 23:55:13.056988 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 6 23:55:13.060949 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:55:13.062833 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:55:13.074815 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 6 23:55:13.086353 systemd-tmpfiles[1167]: ACLs are not supported, ignoring.
Jul 6 23:55:13.095036 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 6 23:55:13.086373 systemd-tmpfiles[1167]: ACLs are not supported, ignoring.
Jul 6 23:55:13.090781 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 6 23:55:13.096100 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:55:13.109447 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 6 23:55:13.110949 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 6 23:55:13.115195 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 6 23:55:13.115946 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 6 23:55:13.210167 kernel: loop1: detected capacity change from 0 to 224512
Jul 6 23:55:13.233210 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 6 23:55:13.245313 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:55:13.253734 kernel: loop2: detected capacity change from 0 to 140768
Jul 6 23:55:13.279824 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Jul 6 23:55:13.279849 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Jul 6 23:55:13.290735 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:55:13.344749 kernel: loop3: detected capacity change from 0 to 142488
Jul 6 23:55:13.362751 kernel: loop4: detected capacity change from 0 to 224512
Jul 6 23:55:13.373740 kernel: loop5: detected capacity change from 0 to 140768
Jul 6 23:55:13.386384 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 6 23:55:13.387185 (sd-merge)[1195]: Merged extensions into '/usr'.
Jul 6 23:55:13.431446 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 6 23:55:13.431505 systemd[1]: Reloading...
Jul 6 23:55:13.482836 zram_generator::config[1220]: No configuration found.
Jul 6 23:55:13.648359 ldconfig[1161]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 6 23:55:13.649467 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:55:13.701872 systemd[1]: Reloading finished in 269 ms.
Jul 6 23:55:13.733819 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 6 23:55:13.873894 systemd[1]: Starting ensure-sysext.service...
Jul 6 23:55:13.876083 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:55:13.880013 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 6 23:55:13.883897 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)...
Jul 6 23:55:13.883912 systemd[1]: Reloading...
Jul 6 23:55:13.915940 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 6 23:55:13.916328 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 6 23:55:13.917539 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 6 23:55:13.918450 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Jul 6 23:55:13.918611 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Jul 6 23:55:13.924873 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:55:13.924968 systemd-tmpfiles[1258]: Skipping /boot
Jul 6 23:55:13.941279 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:55:13.941430 systemd-tmpfiles[1258]: Skipping /boot
Jul 6 23:55:13.942808 zram_generator::config[1288]: No configuration found.
Jul 6 23:55:14.180465 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:55:14.237432 systemd[1]: Reloading finished in 353 ms.
Jul 6 23:55:14.264897 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 6 23:55:14.275296 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:55:14.284228 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 6 23:55:14.287645 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 6 23:55:14.290416 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 6 23:55:14.295637 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:55:14.299974 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:55:14.304036 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 6 23:55:14.310285 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:55:14.310470 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:55:14.313947 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:55:14.317104 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:55:14.319927 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:55:14.321097 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:55:14.323803 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 6 23:55:14.324854 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:55:14.325803 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:55:14.326191 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:55:14.331436 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:55:14.331640 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:55:14.335171 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:55:14.335456 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:55:14.341700 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:55:14.342428 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:55:14.343664 systemd-udevd[1332]: Using default interface naming scheme 'v255'.
Jul 6 23:55:14.351097 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:55:14.354869 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:55:14.359414 augenrules[1353]: No rules
Jul 6 23:55:14.359950 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:55:14.361227 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:55:14.361345 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:55:14.362589 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 6 23:55:14.364376 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 6 23:55:14.366596 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 6 23:55:14.368546 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 6 23:55:14.370346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:55:14.371105 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:55:14.382369 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:55:14.382559 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:55:14.384241 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:55:14.386239 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:55:14.386433 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:55:14.407813 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:55:14.408936 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:55:14.409076 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:55:14.410934 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 6 23:55:14.413445 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 6 23:55:14.424395 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:55:14.424602 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:55:14.427451 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:55:14.432249 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:55:14.438977 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:55:14.443823 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:55:14.445188 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:55:14.445398 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 6 23:55:14.445517 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:55:14.447467 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 6 23:55:14.451114 systemd[1]: Finished ensure-sysext.service.
Jul 6 23:55:14.459059 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 6 23:55:14.460615 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:55:14.461334 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:55:14.462910 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:55:14.463507 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:55:14.465090 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:55:14.467041 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:55:14.471590 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:55:14.478279 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:55:14.478492 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:55:14.481610 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 6 23:55:14.488828 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:55:14.574342 systemd-resolved[1328]: Positive Trust Anchors:
Jul 6 23:55:14.574725 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:55:14.574817 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:55:14.581836 systemd-resolved[1328]: Defaulting to hostname 'linux'.
Jul 6 23:55:14.583838 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:55:14.585131 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:55:14.591102 systemd-networkd[1380]: lo: Link UP
Jul 6 23:55:14.591118 systemd-networkd[1380]: lo: Gained carrier
Jul 6 23:55:14.593171 systemd-networkd[1380]: Enumeration completed
Jul 6 23:55:14.593279 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:55:14.594589 systemd[1]: Reached target network.target - Network.
Jul 6 23:55:14.595130 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:55:14.595135 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:55:14.597737 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 6 23:55:14.597875 systemd-networkd[1380]: eth0: Link UP
Jul 6 23:55:14.597888 systemd-networkd[1380]: eth0: Gained carrier
Jul 6 23:55:14.597900 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:55:14.602895 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 6 23:55:14.610744 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1367)
Jul 6 23:55:14.613430 systemd-networkd[1380]: eth0: DHCPv4 address 10.0.0.101/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 6 23:55:14.616771 kernel: ACPI: button: Power Button [PWRF]
Jul 6 23:55:14.622823 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 6 23:55:14.624863 systemd-timesyncd[1402]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 6 23:55:14.624941 systemd-timesyncd[1402]: Initial clock synchronization to Sun 2025-07-06 23:55:14.288403 UTC.
Jul 6 23:55:14.628408 systemd[1]: Reached target time-set.target - System Time Set.
Jul 6 23:55:14.629950 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 6 23:55:14.642870 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 6 23:55:14.643191 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jul 6 23:55:14.643434 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 6 23:55:14.648374 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 6 23:55:14.662044 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 6 23:55:14.688734 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 6 23:55:14.732760 kernel: mousedev: PS/2 mouse device common for all mice
Jul 6 23:55:14.747233 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:55:14.761985 kernel: kvm_amd: TSC scaling supported
Jul 6 23:55:14.762074 kernel: kvm_amd: Nested Virtualization enabled
Jul 6 23:55:14.762089 kernel: kvm_amd: Nested Paging enabled
Jul 6 23:55:14.762138 kernel: kvm_amd: LBR virtualization supported
Jul 6 23:55:14.763047 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 6 23:55:14.763076 kernel: kvm_amd: Virtual GIF supported
Jul 6 23:55:14.788750 kernel: EDAC MC: Ver: 3.0.0
Jul 6 23:55:14.818447 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 6 23:55:14.826900 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 6 23:55:14.867294 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:55:14.881985 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 6 23:55:14.952831 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 6 23:55:14.955209 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:55:14.956515 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:55:14.957796 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 6 23:55:14.959056 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 6 23:55:14.960508 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 6 23:55:14.961660 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 6 23:55:14.962889 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 6 23:55:14.964105 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 6 23:55:14.964135 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:55:14.965026 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:55:14.966932 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 6 23:55:14.969813 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 6 23:55:14.985916 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 6 23:55:14.988388 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 6 23:55:14.990047 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 6 23:55:14.991207 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:55:14.992291 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:55:14.993293 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:55:14.993323 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:55:14.994420 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 6 23:55:14.996598 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 6 23:55:15.000838 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 6 23:55:15.000933 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 6 23:55:15.004951 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 6 23:55:15.006054 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 6 23:55:15.008903 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 6 23:55:15.012824 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 6 23:55:15.015041 jq[1435]: false
Jul 6 23:55:15.016872 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 6 23:55:15.019865 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 6 23:55:15.024093 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 6 23:55:15.028366 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 6 23:55:15.028875 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 6 23:55:15.031874 systemd[1]: Starting update-engine.service - Update Engine...
Jul 6 23:55:15.033965 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 6 23:55:15.038227 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 6 23:55:15.039333 dbus-daemon[1434]: [system] SELinux support is enabled
Jul 6 23:55:15.039976 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 6 23:55:15.043645 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 6 23:55:15.044009 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 6 23:55:15.044639 extend-filesystems[1436]: Found loop3
Jul 6 23:55:15.044639 extend-filesystems[1436]: Found loop4
Jul 6 23:55:15.044639 extend-filesystems[1436]: Found loop5
Jul 6 23:55:15.044639 extend-filesystems[1436]: Found sr0
Jul 6 23:55:15.054447 extend-filesystems[1436]: Found vda
Jul 6 23:55:15.054447 extend-filesystems[1436]: Found vda1
Jul 6 23:55:15.054447 extend-filesystems[1436]: Found vda2
Jul 6 23:55:15.054447 extend-filesystems[1436]: Found vda3
Jul 6 23:55:15.054447 extend-filesystems[1436]: Found usr
Jul 6 23:55:15.054447 extend-filesystems[1436]: Found vda4
Jul 6 23:55:15.054447 extend-filesystems[1436]: Found vda6
Jul 6 23:55:15.054447 extend-filesystems[1436]: Found vda7
Jul 6 23:55:15.054447 extend-filesystems[1436]: Found vda9
Jul 6 23:55:15.054447 extend-filesystems[1436]: Checking size of /dev/vda9
Jul 6 23:55:15.069811 jq[1449]: true
Jul 6 23:55:15.047592 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 6 23:55:15.047965 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 6 23:55:15.070339 jq[1453]: true
Jul 6 23:55:15.083916 extend-filesystems[1436]: Resized partition /dev/vda9
Jul 6 23:55:15.075174 (ntainerd)[1456]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 6 23:55:15.091812 update_engine[1447]: I20250706 23:55:15.073482 1447 main.cc:92] Flatcar Update Engine starting
Jul 6 23:55:15.091812 update_engine[1447]: I20250706 23:55:15.074986 1447 update_check_scheduler.cc:74] Next update check in 11m7s
Jul 6 23:55:15.093817 extend-filesystems[1470]: resize2fs 1.47.1 (20-May-2024)
Jul 6 23:55:15.099806 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1367)
Jul 6 23:55:15.099837 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 6 23:55:15.075841 systemd[1]: motdgen.service: Deactivated successfully.
Jul 6 23:55:15.076431 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 6 23:55:15.088789 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 6 23:55:15.088820 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 6 23:55:15.090313 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 6 23:55:15.090329 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 6 23:55:15.096254 systemd[1]: Started update-engine.service - Update Engine.
Jul 6 23:55:15.103942 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 6 23:55:15.125292 systemd-logind[1443]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 6 23:55:15.125326 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 6 23:55:15.131769 tar[1451]: linux-amd64/LICENSE
Jul 6 23:55:15.132678 tar[1451]: linux-amd64/helm
Jul 6 23:55:15.134975 systemd-logind[1443]: New seat seat0.
Jul 6 23:55:15.140474 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 6 23:55:15.162759 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 6 23:55:15.234791 locksmithd[1474]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 6 23:55:15.248680 extend-filesystems[1470]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 6 23:55:15.248680 extend-filesystems[1470]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 6 23:55:15.248680 extend-filesystems[1470]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 6 23:55:15.256114 extend-filesystems[1436]: Resized filesystem in /dev/vda9
Jul 6 23:55:15.251084 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 6 23:55:15.251365 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 6 23:55:15.258800 bash[1493]: Updated "/home/core/.ssh/authorized_keys"
Jul 6 23:55:15.261907 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 6 23:55:15.264880 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 6 23:55:15.342533 sshd_keygen[1465]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 6 23:55:15.370551 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 6 23:55:15.433162 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 6 23:55:15.446040 systemd[1]: issuegen.service: Deactivated successfully.
Jul 6 23:55:15.446343 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 6 23:55:15.454992 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 6 23:55:15.519170 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 6 23:55:15.527145 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 6 23:55:15.531083 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 6 23:55:15.532723 systemd[1]: Reached target getty.target - Login Prompts.
Jul 6 23:55:15.537748 containerd[1456]: time="2025-07-06T23:55:15.537030164Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jul 6 23:55:15.560976 containerd[1456]: time="2025-07-06T23:55:15.560906296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:55:15.563168 containerd[1456]: time="2025-07-06T23:55:15.563129734Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:55:15.563168 containerd[1456]: time="2025-07-06T23:55:15.563159939Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 6 23:55:15.563257 containerd[1456]: time="2025-07-06T23:55:15.563175056Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 6 23:55:15.563520 containerd[1456]: time="2025-07-06T23:55:15.563400910Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 6 23:55:15.563520 containerd[1456]: time="2025-07-06T23:55:15.563424396Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 6 23:55:15.563520 containerd[1456]: time="2025-07-06T23:55:15.563501756Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:55:15.563520 containerd[1456]: time="2025-07-06T23:55:15.563513975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:55:15.563882 containerd[1456]: time="2025-07-06T23:55:15.563773517Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:55:15.563882 containerd[1456]: time="2025-07-06T23:55:15.563794066Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 6 23:55:15.563882 containerd[1456]: time="2025-07-06T23:55:15.563807677Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:55:15.563882 containerd[1456]: time="2025-07-06T23:55:15.563817668Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 6 23:55:15.563987 containerd[1456]: time="2025-07-06T23:55:15.563932634Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:55:15.564225 containerd[1456]: time="2025-07-06T23:55:15.564204759Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 6 23:55:15.564386 containerd[1456]: time="2025-07-06T23:55:15.564330351Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 6 23:55:15.564386 containerd[1456]: time="2025-07-06T23:55:15.564348520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 6 23:55:15.564542 containerd[1456]: time="2025-07-06T23:55:15.564457189Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 6 23:55:15.564542 containerd[1456]: time="2025-07-06T23:55:15.564525874Z" level=info msg="metadata content store policy set" policy=shared
Jul 6 23:55:15.817825 containerd[1456]: time="2025-07-06T23:55:15.817615164Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 6 23:55:15.817825 containerd[1456]: time="2025-07-06T23:55:15.817745718Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 6 23:55:15.818053 containerd[1456]: time="2025-07-06T23:55:15.817850903Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 6 23:55:15.818053 containerd[1456]: time="2025-07-06T23:55:15.817891100Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 6 23:55:15.818053 containerd[1456]: time="2025-07-06T23:55:15.817935424Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 6 23:55:15.818398 containerd[1456]: time="2025-07-06T23:55:15.818344197Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 6 23:55:15.819114 containerd[1456]: time="2025-07-06T23:55:15.819070304Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 6 23:55:15.819260 containerd[1456]: time="2025-07-06T23:55:15.819237387Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 6 23:55:15.819260 containerd[1456]: time="2025-07-06T23:55:15.819256170Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 6 23:55:15.819310 containerd[1456]: time="2025-07-06T23:55:15.819272727Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 6 23:55:15.819310 containerd[1456]: time="2025-07-06T23:55:15.819292058Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 6 23:55:15.819353 containerd[1456]: time="2025-07-06T23:55:15.819314949Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 6 23:55:15.819353 containerd[1456]: time="2025-07-06T23:55:15.819327733Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 6 23:55:15.819353 containerd[1456]: time="2025-07-06T23:55:15.819345643Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 6 23:55:15.819405 containerd[1456]: time="2025-07-06T23:55:15.819365349Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 6 23:55:15.819405 containerd[1456]: time="2025-07-06T23:55:15.819378536Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 6 23:55:15.819405 containerd[1456]: time="2025-07-06T23:55:15.819393509Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 6 23:55:15.819519 containerd[1456]: time="2025-07-06T23:55:15.819405959Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 6 23:55:15.819519 containerd[1456]: time="2025-07-06T23:55:15.819458172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 6 23:55:15.819519 containerd[1456]: time="2025-07-06T23:55:15.819491717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 6 23:55:15.819519 containerd[1456]: time="2025-07-06T23:55:15.819515693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 6 23:55:15.819665 containerd[1456]: time="2025-07-06T23:55:15.819540370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 6 23:55:15.819665 containerd[1456]: time="2025-07-06T23:55:15.819635727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 6 23:55:15.819665 containerd[1456]: time="2025-07-06T23:55:15.819652869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..."
type=io.containerd.grpc.v1 Jul 6 23:55:15.819764 containerd[1456]: time="2025-07-06T23:55:15.819671816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 6 23:55:15.819764 containerd[1456]: time="2025-07-06T23:55:15.819707031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 6 23:55:15.819764 containerd[1456]: time="2025-07-06T23:55:15.819739329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 6 23:55:15.819764 containerd[1456]: time="2025-07-06T23:55:15.819753045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 6 23:55:15.819764 containerd[1456]: time="2025-07-06T23:55:15.819764706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 6 23:55:15.819874 containerd[1456]: time="2025-07-06T23:55:15.819776761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 6 23:55:15.819874 containerd[1456]: time="2025-07-06T23:55:15.819792647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 6 23:55:15.819874 containerd[1456]: time="2025-07-06T23:55:15.819808463Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 6 23:55:15.819874 containerd[1456]: time="2025-07-06T23:55:15.819832910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 6 23:55:15.819874 containerd[1456]: time="2025-07-06T23:55:15.819852952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 6 23:55:15.820100 containerd[1456]: time="2025-07-06T23:55:15.819876419Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 6 23:55:15.820100 containerd[1456]: time="2025-07-06T23:55:15.819994302Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 6 23:55:15.820100 containerd[1456]: time="2025-07-06T23:55:15.820055001Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 6 23:55:15.820100 containerd[1456]: time="2025-07-06T23:55:15.820083238Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 6 23:55:15.820218 containerd[1456]: time="2025-07-06T23:55:15.820107569Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 6 23:55:15.820218 containerd[1456]: time="2025-07-06T23:55:15.820128810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 6 23:55:15.820218 containerd[1456]: time="2025-07-06T23:55:15.820181571Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 6 23:55:15.820218 containerd[1456]: time="2025-07-06T23:55:15.820202706Z" level=info msg="NRI interface is disabled by configuration." Jul 6 23:55:15.820218 containerd[1456]: time="2025-07-06T23:55:15.820213581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 6 23:55:15.822689 containerd[1456]: time="2025-07-06T23:55:15.822582707Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 6 23:55:15.822689 containerd[1456]: time="2025-07-06T23:55:15.822680387Z" level=info msg="Connect containerd service" Jul 6 23:55:15.822689 containerd[1456]: time="2025-07-06T23:55:15.822755809Z" level=info msg="using legacy CRI server" Jul 6 23:55:15.822689 containerd[1456]: time="2025-07-06T23:55:15.822765100Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:55:15.823191 containerd[1456]: time="2025-07-06T23:55:15.822967428Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 6 23:55:15.824328 containerd[1456]: time="2025-07-06T23:55:15.824272750Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:55:15.825483 
containerd[1456]: time="2025-07-06T23:55:15.824472045Z" level=info msg="Start subscribing containerd event" Jul 6 23:55:15.825753 containerd[1456]: time="2025-07-06T23:55:15.825674907Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:55:15.825788 containerd[1456]: time="2025-07-06T23:55:15.825755215Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:55:15.826070 containerd[1456]: time="2025-07-06T23:55:15.825820385Z" level=info msg="Start recovering state" Jul 6 23:55:15.826070 containerd[1456]: time="2025-07-06T23:55:15.825898754Z" level=info msg="Start event monitor" Jul 6 23:55:15.826070 containerd[1456]: time="2025-07-06T23:55:15.825927702Z" level=info msg="Start snapshots syncer" Jul 6 23:55:15.826070 containerd[1456]: time="2025-07-06T23:55:15.825953338Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:55:15.826070 containerd[1456]: time="2025-07-06T23:55:15.825975481Z" level=info msg="Start streaming server" Jul 6 23:55:15.828400 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:55:15.829422 containerd[1456]: time="2025-07-06T23:55:15.829275490Z" level=info msg="containerd successfully booted in 0.293297s" Jul 6 23:55:15.880927 tar[1451]: linux-amd64/README.md Jul 6 23:55:15.896108 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:55:16.185147 systemd-networkd[1380]: eth0: Gained IPv6LL Jul 6 23:55:16.190573 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:55:16.192994 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:55:16.204358 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 6 23:55:16.207354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:16.210531 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:55:16.232104 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 6 23:55:16.232414 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 6 23:55:16.234124 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:55:16.236454 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:55:17.742446 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:17.744148 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:55:17.745474 systemd[1]: Startup finished in 1.159s (kernel) + 6.409s (initrd) + 5.673s (userspace) = 13.242s. Jul 6 23:55:17.758987 (kubelet)[1547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:55:18.455415 kubelet[1547]: E0706 23:55:18.455340 1547 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:55:18.460260 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:55:18.460510 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:55:18.460968 systemd[1]: kubelet.service: Consumed 2.154s CPU time. Jul 6 23:55:18.552228 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
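The "Startup finished in 1.159s (kernel) + 6.409s (initrd) + 5.673s (userspace) = 13.242s" line above is just the sum of the three boot stages; the quoted per-stage figures add up to 13.241s, with the last millisecond lost to per-stage rounding. A small sketch that parses such a line and re-derives the total:

```python
import re

line = ("Startup finished in 1.159s (kernel) + 6.409s (initrd) "
        "+ 5.673s (userspace) = 13.242s")

# Pull out every "<seconds>s (<stage>)" pair and the reported total.
stages = {name: float(sec) for sec, name in re.findall(r"([\d.]+)s \((\w+)\)", line)}
reported = float(re.search(r"= ([\d.]+)s", line).group(1))

print(stages)                # {'kernel': 1.159, 'initrd': 6.409, 'userspace': 5.673}
print(sum(stages.values()))  # 13.241 -- matches the reported 13.242s up to rounding
print(reported)              # 13.242
```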
Jul 6 23:55:18.553811 systemd[1]: Started sshd@0-10.0.0.101:22-10.0.0.1:39006.service - OpenSSH per-connection server daemon (10.0.0.1:39006). Jul 6 23:55:18.603113 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 39006 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:55:18.605694 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:18.618132 systemd-logind[1443]: New session 1 of user core. Jul 6 23:55:18.619657 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:55:18.630083 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:55:18.647739 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:55:18.651064 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:55:18.660095 (systemd)[1565]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:55:18.816909 systemd[1565]: Queued start job for default target default.target. Jul 6 23:55:18.829103 systemd[1565]: Created slice app.slice - User Application Slice. Jul 6 23:55:18.829130 systemd[1565]: Reached target paths.target - Paths. Jul 6 23:55:18.829143 systemd[1565]: Reached target timers.target - Timers. Jul 6 23:55:18.830890 systemd[1565]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:55:18.852879 systemd[1565]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:55:18.853086 systemd[1565]: Reached target sockets.target - Sockets. Jul 6 23:55:18.853124 systemd[1565]: Reached target basic.target - Basic System. Jul 6 23:55:18.853172 systemd[1565]: Reached target default.target - Main User Target. Jul 6 23:55:18.853252 systemd[1565]: Startup finished in 186ms. Jul 6 23:55:18.853847 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:55:18.865846 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:55:18.927707 systemd[1]: Started sshd@1-10.0.0.101:22-10.0.0.1:39020.service - OpenSSH per-connection server daemon (10.0.0.1:39020). Jul 6 23:55:18.972001 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 39020 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:55:18.974073 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:18.978789 systemd-logind[1443]: New session 2 of user core. Jul 6 23:55:18.985854 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 6 23:55:19.043834 sshd[1576]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:19.054850 systemd[1]: sshd@1-10.0.0.101:22-10.0.0.1:39020.service: Deactivated successfully. Jul 6 23:55:19.056820 systemd[1]: session-2.scope: Deactivated successfully. Jul 6 23:55:19.058662 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit. Jul 6 23:55:19.060073 systemd[1]: Started sshd@2-10.0.0.101:22-10.0.0.1:39030.service - OpenSSH per-connection server daemon (10.0.0.1:39030). Jul 6 23:55:19.060910 systemd-logind[1443]: Removed session 2. Jul 6 23:55:19.101690 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 39030 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:55:19.103317 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:19.108325 systemd-logind[1443]: New session 3 of user core. Jul 6 23:55:19.122882 systemd[1]: Started session-3.scope - Session 3 of User core. 
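The "Accepted publickey ... SHA256:9QYV+..." lines above identify the client key by OpenSSH's SHA256 fingerprint format: the unpadded base64 of the SHA-256 digest of the raw public-key blob. A minimal sketch of that derivation; the demo blob at the bottom is a placeholder to exercise the function, not the key from this log:

```python
import base64
import hashlib

def openssh_fingerprint(pubkey_line: str) -> str:
    # An OpenSSH public key line looks like "ssh-ed25519 AAAA... comment";
    # the middle field is the base64-encoded key blob that gets hashed.
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Placeholder blob just to demonstrate the call (not a real key):
demo = "ssh-ed25519 " + base64.b64encode(b"not-a-real-key-blob").decode()
print(openssh_fingerprint(demo))
```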
Jul 6 23:55:19.172946 sshd[1583]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:19.194250 systemd[1]: sshd@2-10.0.0.101:22-10.0.0.1:39030.service: Deactivated successfully. Jul 6 23:55:19.196407 systemd[1]: session-3.scope: Deactivated successfully. Jul 6 23:55:19.198325 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit. Jul 6 23:55:19.207142 systemd[1]: Started sshd@3-10.0.0.101:22-10.0.0.1:39034.service - OpenSSH per-connection server daemon (10.0.0.1:39034). Jul 6 23:55:19.208268 systemd-logind[1443]: Removed session 3. Jul 6 23:55:19.240162 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 39034 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:55:19.241816 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:19.245987 systemd-logind[1443]: New session 4 of user core. Jul 6 23:55:19.255918 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:55:19.311842 sshd[1590]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:19.322632 systemd[1]: sshd@3-10.0.0.101:22-10.0.0.1:39034.service: Deactivated successfully. Jul 6 23:55:19.324495 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:55:19.326283 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:55:19.327871 systemd[1]: Started sshd@4-10.0.0.101:22-10.0.0.1:39048.service - OpenSSH per-connection server daemon (10.0.0.1:39048). Jul 6 23:55:19.328831 systemd-logind[1443]: Removed session 4. Jul 6 23:55:19.365326 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 39048 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:55:19.367241 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:19.371592 systemd-logind[1443]: New session 5 of user core. Jul 6 23:55:19.379846 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:55:19.438239 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:55:19.438611 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:55:19.457166 sudo[1600]: pam_unix(sudo:session): session closed for user root Jul 6 23:55:19.459562 sshd[1597]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:19.479147 systemd[1]: sshd@4-10.0.0.101:22-10.0.0.1:39048.service: Deactivated successfully. Jul 6 23:55:19.481486 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:55:19.483522 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:55:19.485131 systemd[1]: Started sshd@5-10.0.0.101:22-10.0.0.1:39052.service - OpenSSH per-connection server daemon (10.0.0.1:39052). Jul 6 23:55:19.486141 systemd-logind[1443]: Removed session 5. Jul 6 23:55:19.524118 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 39052 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:55:19.526098 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:19.530666 systemd-logind[1443]: New session 6 of user core. Jul 6 23:55:19.539951 systemd[1]: Started session-6.scope - Session 6 of User core. 
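Sessions 2 through 4 above each open and close within a fraction of a second, and the pattern continues below, which looks like a client running one command per connection. A rough sketch for pairing the pam_unix open/close events by sshd PID and timing each session, fed with two lines copied from the journal above:

```python
import re
from datetime import datetime

journal = """\
Jul 6 23:55:19.103317 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:55:19.172946 sshd[1583]: pam_unix(sshd:session): session closed for user core
"""

EVENT = re.compile(
    r"(\d\d:\d\d:\d\d\.\d+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)"
)

opened = {}
for ts, pid, kind in EVENT.findall(journal):
    t = datetime.strptime(ts, "%H:%M:%S.%f")
    if kind == "opened":
        opened[pid] = t                     # remember when this sshd PID opened
    elif pid in opened:
        print(pid, (t - opened.pop(pid)).total_seconds())  # 1583 0.069629
```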
Jul 6 23:55:19.594342 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:55:19.594699 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:55:19.599114 sudo[1609]: pam_unix(sudo:session): session closed for user root Jul 6 23:55:19.606391 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 6 23:55:19.606783 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:55:19.626028 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 6 23:55:19.627759 auditctl[1612]: No rules Jul 6 23:55:19.628296 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:55:19.628626 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 6 23:55:19.631883 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:55:19.664673 augenrules[1630]: No rules Jul 6 23:55:19.666601 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 6 23:55:19.667967 sudo[1608]: pam_unix(sudo:session): session closed for user root Jul 6 23:55:19.669871 sshd[1605]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:19.681510 systemd[1]: sshd@5-10.0.0.101:22-10.0.0.1:39052.service: Deactivated successfully. Jul 6 23:55:19.683137 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:55:19.684826 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:55:19.686112 systemd[1]: Started sshd@6-10.0.0.101:22-10.0.0.1:39068.service - OpenSSH per-connection server daemon (10.0.0.1:39068). Jul 6 23:55:19.686819 systemd-logind[1443]: Removed session 6. Jul 6 23:55:19.725442 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 39068 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:55:19.727403 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:19.731849 systemd-logind[1443]: New session 7 of user core. Jul 6 23:55:19.742870 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:55:19.796625 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:55:19.797016 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:55:20.491070 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:55:20.491204 (dockerd)[1659]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:55:21.350986 dockerd[1659]: time="2025-07-06T23:55:21.350891241Z" level=info msg="Starting up" Jul 6 23:55:21.995964 systemd[1]: var-lib-docker-metacopy\x2dcheck1979236111-merged.mount: Deactivated successfully. Jul 6 23:55:22.023974 dockerd[1659]: time="2025-07-06T23:55:22.023907063Z" level=info msg="Loading containers: start." Jul 6 23:55:22.181739 kernel: Initializing XFRM netlink socket Jul 6 23:55:22.267360 systemd-networkd[1380]: docker0: Link UP Jul 6 23:55:22.497484 dockerd[1659]: time="2025-07-06T23:55:22.497415231Z" level=info msg="Loading containers: done." Jul 6 23:55:22.636041 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1805271751-merged.mount: Deactivated successfully. 
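The sudo entries above follow a fixed field layout, "<user> : PWD=<dir> ; USER=<run-as> ; COMMAND=<cmd>", which makes them easy to audit mechanically. A small parser for that shape, exercised on one of the lines above:

```python
import re

SUDO = re.compile(r"sudo\[\d+\]: (\S+) : PWD=(\S+) ; USER=(\S+) ; COMMAND=(.+)")

line = ("Jul 6 23:55:19.594342 sudo[1609]: core : PWD=/home/core ; USER=root ; "
        "COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules "
        "/etc/audit/rules.d/99-default.rules")

user, pwd, run_as, command = SUDO.search(line).groups()
print(user, run_as, command)  # core root /usr/bin/rm -rf ...
```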
Jul 6 23:55:22.639437 dockerd[1659]: time="2025-07-06T23:55:22.639375281Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:55:22.639533 dockerd[1659]: time="2025-07-06T23:55:22.639507860Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 6 23:55:22.639687 dockerd[1659]: time="2025-07-06T23:55:22.639660393Z" level=info msg="Daemon has completed initialization" Jul 6 23:55:22.699770 dockerd[1659]: time="2025-07-06T23:55:22.699638191Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:55:22.699996 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:55:23.378809 containerd[1456]: time="2025-07-06T23:55:23.378752634Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 6 23:55:24.253905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4211246522.mount: Deactivated successfully. Jul 6 23:55:25.802736 containerd[1456]: time="2025-07-06T23:55:25.802666346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:25.825821 containerd[1456]: time="2025-07-06T23:55:25.825770010Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 6 23:55:25.846277 containerd[1456]: time="2025-07-06T23:55:25.846217140Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:25.866790 containerd[1456]: time="2025-07-06T23:55:25.866751451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:25.867733 containerd[1456]: time="2025-07-06T23:55:25.867687222Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.488885516s" Jul 6 23:55:25.867802 containerd[1456]: time="2025-07-06T23:55:25.867743947Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 6 23:55:25.868431 containerd[1456]: time="2025-07-06T23:55:25.868263578Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 6 23:55:28.117905 containerd[1456]: time="2025-07-06T23:55:28.117837080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:28.118757 containerd[1456]: time="2025-07-06T23:55:28.118661700Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 6 23:55:28.120058 containerd[1456]: time="2025-07-06T23:55:28.120026264Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:28.123542 containerd[1456]: time="2025-07-06T23:55:28.123506026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:28.125034 containerd[1456]: time="2025-07-06T23:55:28.124987688Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 2.256692321s" Jul 6 23:55:28.125077 containerd[1456]: time="2025-07-06T23:55:28.125031614Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 6 23:55:28.125492 containerd[1456]: time="2025-07-06T23:55:28.125470809Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 6 23:55:28.510899 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:55:28.521931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:28.864457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:28.869610 (kubelet)[1876]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:55:28.984117 kubelet[1876]: E0706 23:55:28.982422 1876 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:55:28.990411 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:55:28.994927 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
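This is the second kubelet exit for the same reason: /var/lib/kubelet/config.yaml does not exist yet (that file is normally written later, for example by kubeadm, so failures at this stage are expected), and systemd keeps rescheduling the unit. The gap between the first failure (23:55:18.46) and the first scheduled restart (23:55:28.51) can be re-derived from the journal timestamps; it comes out near 10 s, consistent with a restart delay of about 10 s on the unit (an inference from the spacing, not something the log states directly):

```python
from datetime import datetime

# Timestamps truncated to the digits quoted in the journal above.
failed    = datetime.strptime("23:55:18.460510", "%H:%M:%S.%f")  # kubelet.service: Failed ...
restarted = datetime.strptime("23:55:28.51",     "%H:%M:%S.%f")  # Scheduled restart job, counter 1

print((restarted - failed).total_seconds())  # ~10.05 s between failure and restart
```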
Jul 6 23:55:30.137585 containerd[1456]: time="2025-07-06T23:55:30.137494419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:30.138784 containerd[1456]: time="2025-07-06T23:55:30.138730368Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 6 23:55:30.140177 containerd[1456]: time="2025-07-06T23:55:30.140139844Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:30.143758 containerd[1456]: time="2025-07-06T23:55:30.143694922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:30.145648 containerd[1456]: time="2025-07-06T23:55:30.145583064Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 2.020077915s" Jul 6 23:55:30.145800 containerd[1456]: time="2025-07-06T23:55:30.145658247Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 6 23:55:30.146990 containerd[1456]: time="2025-07-06T23:55:30.146960693Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 6 23:55:31.908732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1233524383.mount: Deactivated successfully. 
Jul 6 23:55:32.590885 containerd[1456]: time="2025-07-06T23:55:32.590788858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:32.592200 containerd[1456]: time="2025-07-06T23:55:32.592158121Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 6 23:55:32.593662 containerd[1456]: time="2025-07-06T23:55:32.593630250Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:32.596015 containerd[1456]: time="2025-07-06T23:55:32.595980427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:32.596757 containerd[1456]: time="2025-07-06T23:55:32.596699549Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 2.449698907s" Jul 6 23:55:32.596820 containerd[1456]: time="2025-07-06T23:55:32.596762413Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 6 23:55:32.597609 containerd[1456]: time="2025-07-06T23:55:32.597555801Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 6 23:55:34.184583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount605264577.mount: Deactivated successfully. 
Jul 6 23:55:36.317771 containerd[1456]: time="2025-07-06T23:55:36.317668980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:36.318616 containerd[1456]: time="2025-07-06T23:55:36.318558694Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 6 23:55:36.320109 containerd[1456]: time="2025-07-06T23:55:36.320061798Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:36.323202 containerd[1456]: time="2025-07-06T23:55:36.323163992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:36.324260 containerd[1456]: time="2025-07-06T23:55:36.324216298Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.726603138s" Jul 6 23:55:36.324325 containerd[1456]: time="2025-07-06T23:55:36.324265475Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 6 23:55:36.324870 containerd[1456]: time="2025-07-06T23:55:36.324824001Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:55:36.909155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3989718454.mount: Deactivated successfully. 
Jul 6 23:55:36.917413 containerd[1456]: time="2025-07-06T23:55:36.917365735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:36.918302 containerd[1456]: time="2025-07-06T23:55:36.918223082Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 6 23:55:36.919333 containerd[1456]: time="2025-07-06T23:55:36.919292226Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:36.922064 containerd[1456]: time="2025-07-06T23:55:36.922014394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:36.922699 containerd[1456]: time="2025-07-06T23:55:36.922669946Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 597.789443ms" Jul 6 23:55:36.922699 containerd[1456]: time="2025-07-06T23:55:36.922699836Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 6 23:55:36.923417 containerd[1456]: time="2025-07-06T23:55:36.923387616Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 6 23:55:38.258129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount735131456.mount: Deactivated successfully. Jul 6 23:55:39.011108 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 6 23:55:39.027978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:39.874168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:39.879538 (kubelet)[1972]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:55:40.154806 kubelet[1972]: E0706 23:55:40.154568 1972 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:55:40.158829 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:55:40.159063 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
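Each completed pull logs the image size and wall-clock duration, so effective throughput is easy to back out; note the logged duration covers the whole pull, unpacking included, so tiny images like pause:3.10 look disproportionately slow. Re-computing a few rates from the (bytes, seconds) pairs quoted in the surrounding lines:

```python
# (bytes, seconds) pairs exactly as logged by containerd for three of the pulls
pulls = {
    "kube-proxy:v1.32.6": (30_894_382, 2.449698907),
    "coredns:v1.11.3":    (18_562_039, 3.726603138),
    "pause:3.10":         (320_368,    0.597789443),
}

for image, (size, secs) in pulls.items():
    print(f"{image}: {size / secs / 2**20:.2f} MiB/s")
# kube-proxy:v1.32.6: 12.03 MiB/s
# coredns:v1.11.3: 4.75 MiB/s
# pause:3.10: 0.51 MiB/s
```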
Jul 6 23:55:42.229086 containerd[1456]: time="2025-07-06T23:55:42.228991548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:42.231273 containerd[1456]: time="2025-07-06T23:55:42.231204685Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 6 23:55:42.236578 containerd[1456]: time="2025-07-06T23:55:42.236531691Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:42.244192 containerd[1456]: time="2025-07-06T23:55:42.244108740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:42.245579 containerd[1456]: time="2025-07-06T23:55:42.245539272Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.322118458s" Jul 6 23:55:42.245614 containerd[1456]: time="2025-07-06T23:55:42.245578701Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 6 23:55:45.262916 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:45.279063 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:45.308552 systemd[1]: Reloading requested from client PID 2052 ('systemctl') (unit session-7.scope)... Jul 6 23:55:45.308574 systemd[1]: Reloading... Jul 6 23:55:45.406160 zram_generator::config[2091]: No configuration found. Jul 6 23:55:45.749453 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:55:45.830794 systemd[1]: Reloading finished in 521 ms. Jul 6 23:55:45.882747 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 6 23:55:45.882850 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 6 23:55:45.883146 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:45.884904 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:46.061543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:46.072137 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:55:46.122972 kubelet[2139]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:55:46.122972 kubelet[2139]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jul 6 23:55:46.122972 kubelet[2139]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:55:46.123445 kubelet[2139]: I0706 23:55:46.123052 2139 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:55:46.375873 kubelet[2139]: I0706 23:55:46.375726 2139 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 6 23:55:46.375873 kubelet[2139]: I0706 23:55:46.375769 2139 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:55:46.376204 kubelet[2139]: I0706 23:55:46.376180 2139 server.go:954] "Client rotation is on, will bootstrap in background" Jul 6 23:55:46.401190 kubelet[2139]: E0706 23:55:46.401126 2139 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:46.401346 kubelet[2139]: I0706 23:55:46.401240 2139 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:55:46.406919 kubelet[2139]: E0706 23:55:46.406876 2139 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:55:46.406919 kubelet[2139]: I0706 23:55:46.406907 2139 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:55:46.413011 kubelet[2139]: I0706 23:55:46.412977 2139 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:55:46.414492 kubelet[2139]: I0706 23:55:46.414388 2139 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:55:46.414680 kubelet[2139]: I0706 23:55:46.414481 2139 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:55:46.414799 kubelet[2139]: I0706 23:55:46.414687 2139 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:55:46.414799 kubelet[2139]: I0706 23:55:46.414698 2139 container_manager_linux.go:304] "Creating device plugin manager" Jul 6 23:55:46.414942 kubelet[2139]: I0706 23:55:46.414913 2139 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:55:46.418329 kubelet[2139]: I0706 23:55:46.418297 2139 kubelet.go:446] "Attempting to sync node with API server" Jul 6 23:55:46.418370 kubelet[2139]: I0706 23:55:46.418339 2139 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:55:46.418391 kubelet[2139]: I0706 23:55:46.418371 2139 kubelet.go:352] "Adding apiserver pod source" Jul 6 23:55:46.418391 kubelet[2139]: I0706 23:55:46.418386 2139 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:55:46.424154 kubelet[2139]: W0706 23:55:46.423990 2139 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jul 6 23:55:46.424154 kubelet[2139]: E0706 23:55:46.424093 2139 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:46.424706 kubelet[2139]: W0706 23:55:46.424613 2139 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jul 6 23:55:46.424706 kubelet[2139]: E0706 23:55:46.424683 2139 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:46.426057 kubelet[2139]: I0706 23:55:46.425137 2139 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:55:46.426057 kubelet[2139]: I0706 23:55:46.425814 2139 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:55:46.426682 kubelet[2139]: W0706 23:55:46.426595 2139 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:55:46.429158 kubelet[2139]: I0706 23:55:46.429128 2139 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:55:46.430258 kubelet[2139]: I0706 23:55:46.430183 2139 server.go:1287] "Started kubelet" Jul 6 23:55:46.430358 kubelet[2139]: I0706 23:55:46.430274 2139 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:55:46.431830 kubelet[2139]: I0706 23:55:46.431442 2139 server.go:479] "Adding debug handlers to kubelet server" Jul 6 23:55:46.433832 kubelet[2139]: I0706 23:55:46.432106 2139 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:55:46.435253 kubelet[2139]: I0706 23:55:46.434558 2139 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:55:46.435253 kubelet[2139]: I0706 23:55:46.434916 2139 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:55:46.435508 kubelet[2139]: I0706 23:55:46.435319 2139 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:55:46.437216 kubelet[2139]: E0706 23:55:46.437179 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:46.437297 kubelet[2139]: I0706 23:55:46.437226 2139 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:55:46.437460 kubelet[2139]: I0706 23:55:46.437443 2139 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:55:46.437512 kubelet[2139]: I0706 23:55:46.437498 2139 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:55:46.437881 kubelet[2139]: W0706 23:55:46.437830 2139 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jul 6 23:55:46.437935 kubelet[2139]: E0706 23:55:46.437891 2139 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" 
logger="UnhandledError" Jul 6 23:55:46.438633 kubelet[2139]: E0706 23:55:46.438585 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="200ms" Jul 6 23:55:46.438976 kubelet[2139]: I0706 23:55:46.438822 2139 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:55:46.438976 kubelet[2139]: I0706 23:55:46.438920 2139 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:55:46.439579 kubelet[2139]: E0706 23:55:46.439556 2139 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:55:46.440088 kubelet[2139]: I0706 23:55:46.440069 2139 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:55:46.440816 kubelet[2139]: E0706 23:55:46.439680 2139 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.101:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.101:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fcec9db1b222d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-06 23:55:46.430149165 +0000 UTC m=+0.350216374,LastTimestamp:2025-07-06 23:55:46.430149165 +0000 UTC m=+0.350216374,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 6 23:55:46.493264 kubelet[2139]: I0706 23:55:46.493205 2139 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:55:46.493517 kubelet[2139]: I0706 23:55:46.493442 2139 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:55:46.493517 kubelet[2139]: I0706 23:55:46.493459 2139 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:55:46.493517 kubelet[2139]: I0706 23:55:46.493484 2139 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:55:46.494907 kubelet[2139]: I0706 23:55:46.494875 2139 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:55:46.495390 kubelet[2139]: I0706 23:55:46.494918 2139 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 6 23:55:46.495390 kubelet[2139]: I0706 23:55:46.494955 2139 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
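Every reflector, lease, and event call above fails with "dial tcp 10.0.0.101:6443: connect: connection refused": nothing is listening on the API server port yet, which is the expected state before the control-plane static pods come up. A generic TCP probe that distinguishes that state from an unreachable host (host and port taken from the log; this is an illustrative check, not the kubelet's actual client code):

```python
import socket

def probe(host: str, port: int, timeout: float = 2.0) -> str:
    # "refused" means the host is up but the port is closed;
    # a timeout usually means the host or route is down.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "connection refused"   # what the kubelet is seeing here
    except (socket.timeout, OSError) as exc:
        return f"unreachable: {exc}"

print(probe("10.0.0.101", 6443))
```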
Jul 6 23:55:46.495390 kubelet[2139]: I0706 23:55:46.494967 2139 kubelet.go:2382] "Starting kubelet main sync loop" Jul 6 23:55:46.495390 kubelet[2139]: E0706 23:55:46.495022 2139 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:55:46.496307 kubelet[2139]: W0706 23:55:46.496252 2139 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jul 6 23:55:46.496349 kubelet[2139]: E0706 23:55:46.496320 2139 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:46.538283 kubelet[2139]: E0706 23:55:46.538247 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:46.546958 kubelet[2139]: E0706 23:55:46.546828 2139 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.101:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.101:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fcec9db1b222d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-06 23:55:46.430149165 +0000 UTC m=+0.350216374,LastTimestamp:2025-07-06 23:55:46.430149165 +0000 UTC m=+0.350216374,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 6 23:55:46.595347 kubelet[2139]: E0706 23:55:46.595312 2139 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 6 23:55:46.638685 kubelet[2139]: E0706 23:55:46.638542 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:46.640245 kubelet[2139]: E0706 23:55:46.640202 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="400ms" Jul 6 23:55:46.739465 kubelet[2139]: E0706 23:55:46.739424 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:46.795688 kubelet[2139]: E0706 23:55:46.795602 2139 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 6 23:55:46.840055 kubelet[2139]: E0706 23:55:46.840003 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:46.940382 kubelet[2139]: E0706 23:55:46.940207 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:47.040903 kubelet[2139]: E0706 23:55:47.040834 2139 kubelet_node_status.go:466] "Error 
getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:47.041418 kubelet[2139]: E0706 23:55:47.041372 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="800ms" Jul 6 23:55:47.141958 kubelet[2139]: E0706 23:55:47.141869 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:47.196165 kubelet[2139]: E0706 23:55:47.195996 2139 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 6 23:55:47.232976 kubelet[2139]: W0706 23:55:47.232915 2139 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jul 6 23:55:47.232976 kubelet[2139]: E0706 23:55:47.232981 2139 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:47.242704 kubelet[2139]: E0706 23:55:47.242621 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:47.304421 kubelet[2139]: W0706 23:55:47.304358 2139 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jul 6 23:55:47.304588 kubelet[2139]: E0706 23:55:47.304424 2139 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:47.343294 kubelet[2139]: E0706 23:55:47.343254 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:47.444212 kubelet[2139]: E0706 23:55:47.444152 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:47.544902 kubelet[2139]: E0706 23:55:47.544865 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:47.645775 kubelet[2139]: E0706 23:55:47.645654 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:47.746371 kubelet[2139]: E0706 23:55:47.746299 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:47.811463 kubelet[2139]: W0706 23:55:47.811301 2139 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jul 6 23:55:47.811463 kubelet[2139]: E0706 23:55:47.811360 2139 
reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:47.842854 kubelet[2139]: E0706 23:55:47.842795 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="1.6s" Jul 6 23:55:47.846804 kubelet[2139]: E0706 23:55:47.846781 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:47.876753 kubelet[2139]: W0706 23:55:47.876662 2139 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jul 6 23:55:47.876753 kubelet[2139]: E0706 23:55:47.876707 2139 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:47.922109 kubelet[2139]: I0706 23:55:47.922043 2139 policy_none.go:49] "None policy: Start" Jul 6 23:55:47.922109 kubelet[2139]: I0706 23:55:47.922090 2139 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:55:47.922109 kubelet[2139]: I0706 23:55:47.922114 2139 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:55:47.947277 kubelet[2139]: E0706 23:55:47.947211 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:47.976402 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:55:47.992390 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:55:47.995638 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:55:47.996273 kubelet[2139]: E0706 23:55:47.996214 2139 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 6 23:55:48.005735 kubelet[2139]: I0706 23:55:48.005687 2139 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:55:48.005964 kubelet[2139]: I0706 23:55:48.005937 2139 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:55:48.005999 kubelet[2139]: I0706 23:55:48.005958 2139 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:55:48.006539 kubelet[2139]: I0706 23:55:48.006158 2139 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:55:48.006944 kubelet[2139]: E0706 23:55:48.006922 2139 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 6 23:55:48.006999 kubelet[2139]: E0706 23:55:48.006965 2139 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 6 23:55:48.108574 kubelet[2139]: I0706 23:55:48.108439 2139 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:55:48.109181 kubelet[2139]: E0706 23:55:48.109136 2139 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" Jul 6 23:55:48.311114 kubelet[2139]: I0706 23:55:48.311079 2139 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:55:48.311609 kubelet[2139]: E0706 23:55:48.311563 2139 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" Jul 6 23:55:48.565877 kubelet[2139]: E0706 23:55:48.565834 2139 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:48.714224 kubelet[2139]: I0706 23:55:48.714184 2139 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:55:48.714686 kubelet[2139]: E0706 23:55:48.714638 2139 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" Jul 6 23:55:49.443934 kubelet[2139]: E0706 23:55:49.443870 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="3.2s" Jul 6 23:55:49.516777 kubelet[2139]: I0706 23:55:49.516742 2139 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:55:49.517281 kubelet[2139]: E0706 23:55:49.517215 2139 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" Jul 6 23:55:49.609623 systemd[1]: Created slice kubepods-burstable-pod8c4abdfd8ab372ac26726287c17a24b0.slice - libcontainer container kubepods-burstable-pod8c4abdfd8ab372ac26726287c17a24b0.slice. Jul 6 23:55:49.622192 kubelet[2139]: E0706 23:55:49.622137 2139 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:55:49.625542 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. 
Jul 6 23:55:49.627901 kubelet[2139]: E0706 23:55:49.627861 2139 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:55:49.641169 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. Jul 6 23:55:49.643214 kubelet[2139]: E0706 23:55:49.643184 2139 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:55:49.659730 kubelet[2139]: I0706 23:55:49.659628 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:55:49.659899 kubelet[2139]: I0706 23:55:49.659695 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c4abdfd8ab372ac26726287c17a24b0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8c4abdfd8ab372ac26726287c17a24b0\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:55:49.659899 kubelet[2139]: I0706 23:55:49.659806 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c4abdfd8ab372ac26726287c17a24b0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8c4abdfd8ab372ac26726287c17a24b0\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:55:49.659899 kubelet[2139]: I0706 23:55:49.659834 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c4abdfd8ab372ac26726287c17a24b0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8c4abdfd8ab372ac26726287c17a24b0\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:55:49.659899 kubelet[2139]: I0706 23:55:49.659856 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:55:49.659899 kubelet[2139]: I0706 23:55:49.659878 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:55:49.660049 kubelet[2139]: I0706 23:55:49.659899 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:55:49.660049 kubelet[2139]: I0706 23:55:49.659932 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:55:49.660049 kubelet[2139]: I0706 23:55:49.659967 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:55:49.760215 kubelet[2139]: W0706 23:55:49.760020 2139 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jul 6 23:55:49.760215 kubelet[2139]: E0706 23:55:49.760115 2139 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:49.923258 kubelet[2139]: E0706 23:55:49.923221 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:49.924092 containerd[1456]: time="2025-07-06T23:55:49.924048876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8c4abdfd8ab372ac26726287c17a24b0,Namespace:kube-system,Attempt:0,}" Jul 6 23:55:49.928384 kubelet[2139]: E0706 23:55:49.928360 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:49.928800 containerd[1456]: time="2025-07-06T23:55:49.928765716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 6 23:55:49.944327 kubelet[2139]: E0706 23:55:49.944290 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:49.944677 containerd[1456]: time="2025-07-06T23:55:49.944648032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 6 23:55:50.194496 kubelet[2139]: W0706 23:55:50.194435 2139 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jul 6 23:55:50.194637 kubelet[2139]: E0706 23:55:50.194507 2139 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:50.378297 kubelet[2139]: W0706 23:55:50.378216 
2139 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jul 6 23:55:50.378411 kubelet[2139]: E0706 23:55:50.378300 2139 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:50.430161 kubelet[2139]: W0706 23:55:50.430079 2139 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.101:6443: connect: connection refused Jul 6 23:55:50.430232 kubelet[2139]: E0706 23:55:50.430163 2139 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:55:50.469387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1617921337.mount: Deactivated successfully. Jul 6 23:55:50.474656 containerd[1456]: time="2025-07-06T23:55:50.474588324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:55:50.476425 containerd[1456]: time="2025-07-06T23:55:50.476365574Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:55:50.477320 containerd[1456]: time="2025-07-06T23:55:50.477285711Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:55:50.478204 containerd[1456]: time="2025-07-06T23:55:50.478164300Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:55:50.479148 containerd[1456]: time="2025-07-06T23:55:50.479101232Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:55:50.479945 containerd[1456]: time="2025-07-06T23:55:50.479903710Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:55:50.480948 containerd[1456]: time="2025-07-06T23:55:50.480913626Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 6 23:55:50.482440 containerd[1456]: time="2025-07-06T23:55:50.482405466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:55:50.484365 containerd[1456]: time="2025-07-06T23:55:50.484334976Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 560.184854ms" Jul 6 23:55:50.485161 containerd[1456]: time="2025-07-06T23:55:50.485132284Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 540.419535ms" Jul 6 23:55:50.487944 containerd[1456]: time="2025-07-06T23:55:50.487910341Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 559.083436ms" Jul 6 23:55:50.819518 containerd[1456]: time="2025-07-06T23:55:50.819363731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:55:50.819518 containerd[1456]: time="2025-07-06T23:55:50.819414839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:55:50.819518 containerd[1456]: time="2025-07-06T23:55:50.819425130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:50.819518 containerd[1456]: time="2025-07-06T23:55:50.819514137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:50.820788 containerd[1456]: time="2025-07-06T23:55:50.820144524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:55:50.820788 containerd[1456]: time="2025-07-06T23:55:50.820254515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:55:50.820788 containerd[1456]: time="2025-07-06T23:55:50.820288006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:50.820788 containerd[1456]: time="2025-07-06T23:55:50.820399090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:50.830309 containerd[1456]: time="2025-07-06T23:55:50.827681456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:55:50.830309 containerd[1456]: time="2025-07-06T23:55:50.827753457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:55:50.830309 containerd[1456]: time="2025-07-06T23:55:50.827770252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:50.830309 containerd[1456]: time="2025-07-06T23:55:50.827896298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:50.847914 systemd[1]: Started cri-containerd-a5e871246d6dc1ba73d476d17f8a2292114ffeaeb9fb45762b45d1ea0fb98395.scope - libcontainer container a5e871246d6dc1ba73d476d17f8a2292114ffeaeb9fb45762b45d1ea0fb98395. Jul 6 23:55:50.855286 systemd[1]: Started cri-containerd-0cd694b9a7f18a7659eab30ffb5861bb0deeb5b78af79f83caad83aa710aaa75.scope - libcontainer container 0cd694b9a7f18a7659eab30ffb5861bb0deeb5b78af79f83caad83aa710aaa75. Jul 6 23:55:50.862583 systemd[1]: Started cri-containerd-8086155377c111be1137424bd3233fd8476390f0165c73f7c1c32ecb4ff6f072.scope - libcontainer container 8086155377c111be1137424bd3233fd8476390f0165c73f7c1c32ecb4ff6f072. Jul 6 23:55:50.915468 containerd[1456]: time="2025-07-06T23:55:50.915410976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5e871246d6dc1ba73d476d17f8a2292114ffeaeb9fb45762b45d1ea0fb98395\"" Jul 6 23:55:50.918515 kubelet[2139]: E0706 23:55:50.918486 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:50.921389 containerd[1456]: time="2025-07-06T23:55:50.921349104Z" level=info msg="CreateContainer within sandbox \"a5e871246d6dc1ba73d476d17f8a2292114ffeaeb9fb45762b45d1ea0fb98395\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:55:50.922777 containerd[1456]: time="2025-07-06T23:55:50.922673761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8086155377c111be1137424bd3233fd8476390f0165c73f7c1c32ecb4ff6f072\"" Jul 6 23:55:50.924413 kubelet[2139]: E0706 23:55:50.924377 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:50.926440 containerd[1456]: time="2025-07-06T23:55:50.926398129Z" level=info msg="CreateContainer within sandbox \"8086155377c111be1137424bd3233fd8476390f0165c73f7c1c32ecb4ff6f072\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:55:50.939964 containerd[1456]: time="2025-07-06T23:55:50.939872767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8c4abdfd8ab372ac26726287c17a24b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cd694b9a7f18a7659eab30ffb5861bb0deeb5b78af79f83caad83aa710aaa75\"" Jul 6 23:55:50.941108 kubelet[2139]: E0706 23:55:50.941059 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:50.943577 containerd[1456]: time="2025-07-06T23:55:50.943509711Z" level=info msg="CreateContainer within sandbox \"0cd694b9a7f18a7659eab30ffb5861bb0deeb5b78af79f83caad83aa710aaa75\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:55:50.949593 containerd[1456]: time="2025-07-06T23:55:50.949522085Z" level=info msg="CreateContainer within sandbox \"a5e871246d6dc1ba73d476d17f8a2292114ffeaeb9fb45762b45d1ea0fb98395\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c71fb190668ab371bf4dd3ba2f9513d6e943430547905c1a624bcb1ec04108f1\""
Jul 6 23:55:50.950913 containerd[1456]: time="2025-07-06T23:55:50.950867727Z" level=info msg="StartContainer for \"c71fb190668ab371bf4dd3ba2f9513d6e943430547905c1a624bcb1ec04108f1\"" Jul 6 23:55:50.951998 containerd[1456]: time="2025-07-06T23:55:50.951938050Z" level=info msg="CreateContainer within sandbox \"8086155377c111be1137424bd3233fd8476390f0165c73f7c1c32ecb4ff6f072\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"debc377a9d255fc376067a3df71d261d6212b5ff3410d829e10fe827eec98346\"" Jul 6 23:55:50.952352 containerd[1456]: time="2025-07-06T23:55:50.952321447Z" level=info msg="StartContainer for \"debc377a9d255fc376067a3df71d261d6212b5ff3410d829e10fe827eec98346\"" Jul 6 23:55:50.964854 containerd[1456]: time="2025-07-06T23:55:50.964801811Z" level=info msg="CreateContainer within sandbox \"0cd694b9a7f18a7659eab30ffb5861bb0deeb5b78af79f83caad83aa710aaa75\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ebda7a84367500fddf632ad55332da1b3f8eb03fe59a588dc92b2f804459d08b\"" Jul 6 23:55:50.965762 containerd[1456]: time="2025-07-06T23:55:50.965558915Z" level=info msg="StartContainer for \"ebda7a84367500fddf632ad55332da1b3f8eb03fe59a588dc92b2f804459d08b\"" Jul 6 23:55:50.988367 systemd[1]: Started cri-containerd-c71fb190668ab371bf4dd3ba2f9513d6e943430547905c1a624bcb1ec04108f1.scope - libcontainer container c71fb190668ab371bf4dd3ba2f9513d6e943430547905c1a624bcb1ec04108f1. Jul 6 23:55:50.992686 systemd[1]: Started cri-containerd-debc377a9d255fc376067a3df71d261d6212b5ff3410d829e10fe827eec98346.scope - libcontainer container debc377a9d255fc376067a3df71d261d6212b5ff3410d829e10fe827eec98346. Jul 6 23:55:51.019917 systemd[1]: Started cri-containerd-ebda7a84367500fddf632ad55332da1b3f8eb03fe59a588dc92b2f804459d08b.scope - libcontainer container ebda7a84367500fddf632ad55332da1b3f8eb03fe59a588dc92b2f804459d08b.
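Reading the systemd unit names together: each static pod earlier received a kubepods-burstable-pod&lt;uid&gt;.slice, and each container started above now runs in a transient cri-containerd-&lt;container-id&gt;.scope inside that slice. The sketch below reconstructs the resulting cgroup v2 path for the kube-controller-manager container from IDs in this log; the /sys/fs/cgroup layout shown is inferred from those unit names and the systemd cgroup driver, so treat it as an assumption rather than a queried value:

package main

import "fmt"

func main() {
	// Pod UID and container ID copied from the log lines above
	// (kube-controller-manager-localhost).
	podUID := "d1af03769b64da1b1e8089a7035018fc"
	containerID := "c71fb190668ab371bf4dd3ba2f9513d6e943430547905c1a624bcb1ec04108f1"
	cgroupPath := fmt.Sprintf(
		"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod%s.slice/cri-containerd-%s.scope",
		podUID, containerID)
	fmt.Println(cgroupPath) // where this container's cgroup controls would live
}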
Jul 6 23:55:51.055780 containerd[1456]: time="2025-07-06T23:55:51.055708728Z" level=info msg="StartContainer for \"c71fb190668ab371bf4dd3ba2f9513d6e943430547905c1a624bcb1ec04108f1\" returns successfully" Jul 6 23:55:51.083740 containerd[1456]: time="2025-07-06T23:55:51.083425405Z" level=info msg="StartContainer for \"debc377a9d255fc376067a3df71d261d6212b5ff3410d829e10fe827eec98346\" returns successfully" Jul 6 23:55:51.088280 containerd[1456]: time="2025-07-06T23:55:51.088137103Z" level=info msg="StartContainer for \"ebda7a84367500fddf632ad55332da1b3f8eb03fe59a588dc92b2f804459d08b\" returns successfully" Jul 6 23:55:51.121381 kubelet[2139]: I0706 23:55:51.121337 2139 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:55:51.121886 kubelet[2139]: E0706 23:55:51.121848 2139 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost" Jul 6 23:55:51.513334 kubelet[2139]: E0706 23:55:51.513057 2139 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:55:51.513334 kubelet[2139]: E0706 23:55:51.513245 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:51.518143 kubelet[2139]: E0706 23:55:51.517915 2139 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:55:51.518586 kubelet[2139]: E0706 23:55:51.518272 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:51.519134 kubelet[2139]: E0706 23:55:51.518912 2139 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:55:51.519134 kubelet[2139]: E0706 23:55:51.519034 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:52.520212 kubelet[2139]: E0706 23:55:52.520139 2139 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:55:52.520212 kubelet[2139]: E0706 23:55:52.520213 2139 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:55:52.520887 kubelet[2139]: E0706 23:55:52.520312 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:52.520887 kubelet[2139]: E0706 23:55:52.520322 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:52.647537 kubelet[2139]: E0706 23:55:52.647485 2139 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 6 23:55:52.685592 kubelet[2139]: E0706 23:55:52.685550 2139 csi_plugin.go:308] Failed to 
initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 6 23:55:53.034843 kubelet[2139]: E0706 23:55:53.034804 2139 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 6 23:55:53.513589 kubelet[2139]: E0706 23:55:53.513555 2139 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 6 23:55:54.326939 kubelet[2139]: I0706 23:55:54.326897 2139 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:55:54.544182 kubelet[2139]: I0706 23:55:54.544135 2139 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 6 23:55:54.544182 kubelet[2139]: E0706 23:55:54.544176 2139 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 6 23:55:54.596557 kubelet[2139]: E0706 23:55:54.596435 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:54.696996 kubelet[2139]: E0706 23:55:54.696939 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:54.797638 kubelet[2139]: E0706 23:55:54.797600 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:54.810451 kubelet[2139]: E0706 23:55:54.810421 2139 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:55:54.810596 kubelet[2139]: E0706 23:55:54.810568 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:54.898046 kubelet[2139]: E0706 23:55:54.897954 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:54.998535 kubelet[2139]: E0706 23:55:54.998497 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:55.099374 kubelet[2139]: E0706 23:55:55.099333 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:55.200154 kubelet[2139]: E0706 23:55:55.199808 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:55.300858 kubelet[2139]: E0706 23:55:55.300801 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:55.401571 kubelet[2139]: E0706 23:55:55.401514 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:55.502744 kubelet[2139]: E0706 23:55:55.502616 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:55.603371 kubelet[2139]: E0706 23:55:55.603314 2139 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:55.738435 kubelet[2139]: I0706 23:55:55.738387 2139 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-localhost" Jul 6 23:55:55.745279 kubelet[2139]: I0706 23:55:55.745241 2139 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:55:55.749096 kubelet[2139]: I0706 23:55:55.749066 2139 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 6 23:55:56.017054 systemd[1]: Reloading requested from client PID 2416 ('systemctl') (unit session-7.scope)... Jul 6 23:55:56.017083 systemd[1]: Reloading... Jul 6 23:55:56.104762 zram_generator::config[2457]: No configuration found. Jul 6 23:55:56.355841 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:55:56.427824 kubelet[2139]: I0706 23:55:56.427784 2139 apiserver.go:52] "Watching apiserver" Jul 6 23:55:56.429929 kubelet[2139]: E0706 23:55:56.429897 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:56.430039 kubelet[2139]: E0706 23:55:56.430011 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:56.430268 kubelet[2139]: E0706 23:55:56.430241 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:56.437964 kubelet[2139]: I0706 23:55:56.437932 2139 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:55:56.448522 systemd[1]: Reloading finished in 431 ms. Jul 6 23:55:56.497269 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:56.522309 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:55:56.522636 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:56.522696 systemd[1]: kubelet.service: Consumed 1.092s CPU time, 133.5M memory peak, 0B memory swap peak. Jul 6 23:55:56.536126 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:56.712557 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:56.719460 (kubelet)[2500]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:55:56.760884 kubelet[2500]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:55:56.760884 kubelet[2500]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:55:56.760884 kubelet[2500]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 6 23:55:56.761379 kubelet[2500]: I0706 23:55:56.760940 2500 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:55:56.768901 kubelet[2500]: I0706 23:55:56.768864 2500 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 6 23:55:56.768901 kubelet[2500]: I0706 23:55:56.768890 2500 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:55:56.769103 kubelet[2500]: I0706 23:55:56.769085 2500 server.go:954] "Client rotation is on, will bootstrap in background" Jul 6 23:55:56.770264 kubelet[2500]: I0706 23:55:56.770238 2500 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 6 23:55:56.772246 kubelet[2500]: I0706 23:55:56.772213 2500 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:55:56.774824 kubelet[2500]: E0706 23:55:56.774800 2500 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:55:56.774918 kubelet[2500]: I0706 23:55:56.774822 2500 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:55:56.780578 kubelet[2500]: I0706 23:55:56.780550 2500 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 6 23:55:56.780824 kubelet[2500]: I0706 23:55:56.780789 2500 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:55:56.780970 kubelet[2500]: I0706 23:55:56.780821 2500 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:55:56.781054 kubelet[2500]: I0706 23:55:56.780981 2500 topology_manager.go:138] "Creating topology 
manager with none policy" Jul 6 23:55:56.781054 kubelet[2500]: I0706 23:55:56.780991 2500 container_manager_linux.go:304] "Creating device plugin manager" Jul 6 23:55:56.781054 kubelet[2500]: I0706 23:55:56.781041 2500 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:55:56.781216 kubelet[2500]: I0706 23:55:56.781199 2500 kubelet.go:446] "Attempting to sync node with API server" Jul 6 23:55:56.781245 kubelet[2500]: I0706 23:55:56.781235 2500 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:55:56.781281 kubelet[2500]: I0706 23:55:56.781266 2500 kubelet.go:352] "Adding apiserver pod source" Jul 6 23:55:56.781281 kubelet[2500]: I0706 23:55:56.781280 2500 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:55:56.782681 kubelet[2500]: I0706 23:55:56.782580 2500 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:55:56.782942 kubelet[2500]: I0706 23:55:56.782919 2500 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:55:56.783395 kubelet[2500]: I0706 23:55:56.783347 2500 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:55:56.783395 kubelet[2500]: I0706 23:55:56.783377 2500 server.go:1287] "Started kubelet" Jul 6 23:55:56.783676 kubelet[2500]: I0706 23:55:56.783645 2500 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:55:56.805482 kubelet[2500]: I0706 23:55:56.783686 2500 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:55:56.805482 kubelet[2500]: I0706 23:55:56.805313 2500 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:55:56.809297 kubelet[2500]: I0706 23:55:56.809275 2500 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:55:56.809501 kubelet[2500]: I0706 23:55:56.809381 2500 server.go:479] "Adding debug handlers to kubelet server" Jul 6 23:55:56.811205 kubelet[2500]: I0706 23:55:56.809521 2500 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:55:56.814097 kubelet[2500]: I0706 23:55:56.813610 2500 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:55:56.814097 kubelet[2500]: I0706 23:55:56.813872 2500 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:55:56.814555 kubelet[2500]: I0706 23:55:56.814526 2500 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:55:56.814788 kubelet[2500]: I0706 23:55:56.814533 2500 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:55:56.814950 kubelet[2500]: I0706 23:55:56.814899 2500 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:55:56.815875 kubelet[2500]: E0706 23:55:56.815853 2500 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:55:56.818148 kubelet[2500]: I0706 23:55:56.818122 2500 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:55:56.826021 kubelet[2500]: I0706 23:55:56.825986 2500 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 6 23:55:56.828187 kubelet[2500]: I0706 23:55:56.827892 2500 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:55:56.828187 kubelet[2500]: I0706 23:55:56.827935 2500 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 6 23:55:56.828187 kubelet[2500]: I0706 23:55:56.827957 2500 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 6 23:55:56.828187 kubelet[2500]: I0706 23:55:56.827964 2500 kubelet.go:2382] "Starting kubelet main sync loop" Jul 6 23:55:56.828187 kubelet[2500]: E0706 23:55:56.828010 2500 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:55:56.857672 kubelet[2500]: I0706 23:55:56.857634 2500 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:55:56.857672 kubelet[2500]: I0706 23:55:56.857659 2500 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:55:56.857672 kubelet[2500]: I0706 23:55:56.857683 2500 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:55:56.857960 kubelet[2500]: I0706 23:55:56.857934 2500 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:55:56.857987 kubelet[2500]: I0706 23:55:56.857956 2500 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:55:56.857987 kubelet[2500]: I0706 23:55:56.857985 2500 policy_none.go:49] "None policy: Start" Jul 6 23:55:56.858034 kubelet[2500]: I0706 23:55:56.858002 2500 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:55:56.858034 kubelet[2500]: I0706 23:55:56.858021 2500 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:55:56.858182 kubelet[2500]: I0706 23:55:56.858161 2500 state_mem.go:75] "Updated machine memory state" Jul 6 23:55:56.862549 kubelet[2500]: I0706 23:55:56.862512 2500 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:55:56.862912 kubelet[2500]: I0706 23:55:56.862807 2500 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:55:56.862912 kubelet[2500]: I0706 23:55:56.862830 2500 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:55:56.863145 kubelet[2500]: I0706 23:55:56.863103 2500 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:55:56.864320 kubelet[2500]: E0706 23:55:56.864288 2500 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 6 23:55:56.929690 kubelet[2500]: I0706 23:55:56.929563 2500 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 6 23:55:56.929690 kubelet[2500]: I0706 23:55:56.929669 2500 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 6 23:55:56.929690 kubelet[2500]: I0706 23:55:56.929749 2500 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:55:56.973964 kubelet[2500]: I0706 23:55:56.973761 2500 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:55:56.975503 kubelet[2500]: E0706 23:55:56.975380 2500 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 6 23:55:56.975503 kubelet[2500]: E0706 23:55:56.975429 2500 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:55:56.975503 kubelet[2500]: E0706 23:55:56.975380 2500 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 6 23:55:56.981006 kubelet[2500]: I0706 23:55:56.980947 2500 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 6 23:55:56.981113 kubelet[2500]: I0706 23:55:56.981088 2500 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 6 23:55:57.015359 kubelet[2500]: I0706 23:55:57.015278 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:55:57.015359 kubelet[2500]: I0706 23:55:57.015335 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:55:57.015500 kubelet[2500]: I0706 23:55:57.015372 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c4abdfd8ab372ac26726287c17a24b0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8c4abdfd8ab372ac26726287c17a24b0\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:55:57.015500 kubelet[2500]: I0706 23:55:57.015400 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c4abdfd8ab372ac26726287c17a24b0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8c4abdfd8ab372ac26726287c17a24b0\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:55:57.015500 kubelet[2500]: I0706 23:55:57.015429 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 6 23:55:57.015500 kubelet[2500]: I0706 23:55:57.015447 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:55:57.015500 kubelet[2500]: I0706 23:55:57.015466 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:55:57.015631 kubelet[2500]: I0706 23:55:57.015491 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:55:57.015631 kubelet[2500]: I0706 23:55:57.015512 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c4abdfd8ab372ac26726287c17a24b0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8c4abdfd8ab372ac26726287c17a24b0\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:55:57.276505 kubelet[2500]: E0706 23:55:57.276404 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:57.276505 kubelet[2500]: E0706 23:55:57.276429 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:57.276505 kubelet[2500]: E0706 23:55:57.276404 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:57.782362 kubelet[2500]: I0706 23:55:57.782307 2500 apiserver.go:52] "Watching apiserver" Jul 6 23:55:57.814315 kubelet[2500]: I0706 23:55:57.814290 2500 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:55:57.838904 kubelet[2500]: I0706 23:55:57.838699 2500 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 6 23:55:57.838904 kubelet[2500]: E0706 23:55:57.838843 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:57.838904 kubelet[2500]: E0706 23:55:57.838904 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:57.843405 kubelet[2500]: E0706 23:55:57.843373 2500 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 6 23:55:57.844015 kubelet[2500]: E0706 23:55:57.843533 2500 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:57.864830 kubelet[2500]: I0706 23:55:57.864726 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.864679041 podStartE2EDuration="2.864679041s" podCreationTimestamp="2025-07-06 23:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:55:57.858491185 +0000 UTC m=+1.132525233" watchObservedRunningTime="2025-07-06 23:55:57.864679041 +0000 UTC m=+1.138713089" Jul 6 23:55:57.864995 kubelet[2500]: I0706 23:55:57.864890 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.864883065 podStartE2EDuration="2.864883065s" podCreationTimestamp="2025-07-06 23:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:55:57.864826089 +0000 UTC m=+1.138860137" watchObservedRunningTime="2025-07-06 23:55:57.864883065 +0000 UTC m=+1.138917113" Jul 6 23:55:57.877582 kubelet[2500]: I0706 23:55:57.877509 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.877493603 podStartE2EDuration="2.877493603s" podCreationTimestamp="2025-07-06 23:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:55:57.870851625 +0000 UTC m=+1.144885683" watchObservedRunningTime="2025-07-06 23:55:57.877493603 +0000 UTC m=+1.151527651" Jul 6 23:55:58.839852 kubelet[2500]: E0706 23:55:58.839814 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:58.840458 kubelet[2500]: E0706 23:55:58.839814 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:00.506870 update_engine[1447]: I20250706 23:56:00.506772 1447 update_attempter.cc:509] Updating boot flags... Jul 6 23:56:00.821770 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2559) Jul 6 23:56:00.865805 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2559) Jul 6 23:56:00.907844 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2559) Jul 6 23:56:02.723095 kubelet[2500]: I0706 23:56:02.723060 2500 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:56:02.723742 kubelet[2500]: I0706 23:56:02.723562 2500 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:56:02.723792 containerd[1456]: time="2025-07-06T23:56:02.723401244Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 6 23:56:02.906190 kubelet[2500]: E0706 23:56:02.906120 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:56:03.408232 systemd[1]: Created slice kubepods-besteffort-pod878733c0_0744_40d0_a84e_a2473a83198f.slice - libcontainer container kubepods-besteffort-pod878733c0_0744_40d0_a84e_a2473a83198f.slice.
Jul 6 23:56:03.449168 kubelet[2500]: I0706 23:56:03.449117 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/878733c0-0744-40d0-a84e-a2473a83198f-xtables-lock\") pod \"kube-proxy-wjrxs\" (UID: \"878733c0-0744-40d0-a84e-a2473a83198f\") " pod="kube-system/kube-proxy-wjrxs"
Jul 6 23:56:03.449346 kubelet[2500]: I0706 23:56:03.449181 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msfqc\" (UniqueName: \"kubernetes.io/projected/878733c0-0744-40d0-a84e-a2473a83198f-kube-api-access-msfqc\") pod \"kube-proxy-wjrxs\" (UID: \"878733c0-0744-40d0-a84e-a2473a83198f\") " pod="kube-system/kube-proxy-wjrxs"
Jul 6 23:56:03.449346 kubelet[2500]: I0706 23:56:03.449256 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/878733c0-0744-40d0-a84e-a2473a83198f-kube-proxy\") pod \"kube-proxy-wjrxs\" (UID: \"878733c0-0744-40d0-a84e-a2473a83198f\") " pod="kube-system/kube-proxy-wjrxs"
Jul 6 23:56:03.449346 kubelet[2500]: I0706 23:56:03.449286 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/878733c0-0744-40d0-a84e-a2473a83198f-lib-modules\") pod \"kube-proxy-wjrxs\" (UID: \"878733c0-0744-40d0-a84e-a2473a83198f\") " pod="kube-system/kube-proxy-wjrxs"
Jul 6 23:56:03.521922 systemd[1]: Created slice kubepods-besteffort-pod9e26e7a0_62d1_4c65_ac50_053809513c06.slice - libcontainer container kubepods-besteffort-pod9e26e7a0_62d1_4c65_ac50_053809513c06.slice.
Jul 6 23:56:03.549935 kubelet[2500]: I0706 23:56:03.549790 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9e26e7a0-62d1-4c65-ac50-053809513c06-var-lib-calico\") pod \"tigera-operator-747864d56d-8bsp9\" (UID: \"9e26e7a0-62d1-4c65-ac50-053809513c06\") " pod="tigera-operator/tigera-operator-747864d56d-8bsp9"
Jul 6 23:56:03.549935 kubelet[2500]: I0706 23:56:03.549842 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmj8c\" (UniqueName: \"kubernetes.io/projected/9e26e7a0-62d1-4c65-ac50-053809513c06-kube-api-access-lmj8c\") pod \"tigera-operator-747864d56d-8bsp9\" (UID: \"9e26e7a0-62d1-4c65-ac50-053809513c06\") " pod="tigera-operator/tigera-operator-747864d56d-8bsp9"
Jul 6 23:56:03.725174 kubelet[2500]: E0706 23:56:03.725012 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:56:03.725797 containerd[1456]: time="2025-07-06T23:56:03.725762644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wjrxs,Uid:878733c0-0744-40d0-a84e-a2473a83198f,Namespace:kube-system,Attempt:0,}"
Jul 6 23:56:03.754090 containerd[1456]: time="2025-07-06T23:56:03.753920242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:56:03.754090 containerd[1456]: time="2025-07-06T23:56:03.754052725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:56:03.754282 containerd[1456]: time="2025-07-06T23:56:03.754071543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:56:03.754282 containerd[1456]: time="2025-07-06T23:56:03.754206812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:56:03.784944 systemd[1]: Started cri-containerd-60501881c841c6c83c821934649672d1ab3376c5f2e60652912c784a11a38aa8.scope - libcontainer container 60501881c841c6c83c821934649672d1ab3376c5f2e60652912c784a11a38aa8.
Jul 6 23:56:03.811889 containerd[1456]: time="2025-07-06T23:56:03.811827807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wjrxs,Uid:878733c0-0744-40d0-a84e-a2473a83198f,Namespace:kube-system,Attempt:0,} returns sandbox id \"60501881c841c6c83c821934649672d1ab3376c5f2e60652912c784a11a38aa8\""
Jul 6 23:56:03.812767 kubelet[2500]: E0706 23:56:03.812735 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:56:03.815284 containerd[1456]: time="2025-07-06T23:56:03.815241477Z" level=info msg="CreateContainer within sandbox \"60501881c841c6c83c821934649672d1ab3376c5f2e60652912c784a11a38aa8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 6 23:56:03.825800 containerd[1456]: time="2025-07-06T23:56:03.825751257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-8bsp9,Uid:9e26e7a0-62d1-4c65-ac50-053809513c06,Namespace:tigera-operator,Attempt:0,}"
Jul 6 23:56:03.839025 containerd[1456]: time="2025-07-06T23:56:03.838948761Z" level=info msg="CreateContainer within sandbox \"60501881c841c6c83c821934649672d1ab3376c5f2e60652912c784a11a38aa8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"adc5ac700e4aa71bb2d1402a1eb4cb6ca44256d547c08d1d741c0c62e5adb8f1\""
Jul 6 23:56:03.839782 containerd[1456]: time="2025-07-06T23:56:03.839699987Z" level=info msg="StartContainer for \"adc5ac700e4aa71bb2d1402a1eb4cb6ca44256d547c08d1d741c0c62e5adb8f1\""
Jul 6 23:56:03.852473 kubelet[2500]: E0706 23:56:03.852400 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:56:03.862599 containerd[1456]: time="2025-07-06T23:56:03.862462279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:56:03.862599 containerd[1456]: time="2025-07-06T23:56:03.862549433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:56:03.862956 containerd[1456]: time="2025-07-06T23:56:03.862617879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:56:03.864217 containerd[1456]: time="2025-07-06T23:56:03.863774332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:56:03.876949 systemd[1]: Started cri-containerd-adc5ac700e4aa71bb2d1402a1eb4cb6ca44256d547c08d1d741c0c62e5adb8f1.scope - libcontainer container adc5ac700e4aa71bb2d1402a1eb4cb6ca44256d547c08d1d741c0c62e5adb8f1.
Jul 6 23:56:03.889310 systemd[1]: Started cri-containerd-209130c122e72f279585db52970807154c51c2f95b4a1eb792274ad5191b257b.scope - libcontainer container 209130c122e72f279585db52970807154c51c2f95b4a1eb792274ad5191b257b.
Jul 6 23:56:03.917903 containerd[1456]: time="2025-07-06T23:56:03.917845108Z" level=info msg="StartContainer for \"adc5ac700e4aa71bb2d1402a1eb4cb6ca44256d547c08d1d741c0c62e5adb8f1\" returns successfully"
Jul 6 23:56:03.936447 containerd[1456]: time="2025-07-06T23:56:03.936202137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-8bsp9,Uid:9e26e7a0-62d1-4c65-ac50-053809513c06,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"209130c122e72f279585db52970807154c51c2f95b4a1eb792274ad5191b257b\""
Jul 6 23:56:03.938551 containerd[1456]: time="2025-07-06T23:56:03.938426286Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 6 23:56:04.855454 kubelet[2500]: E0706 23:56:04.855411 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:56:04.864837 kubelet[2500]: I0706 23:56:04.864742 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wjrxs" podStartSLOduration=1.864697049 podStartE2EDuration="1.864697049s" podCreationTimestamp="2025-07-06 23:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:04.8641641 +0000 UTC m=+8.138198148" watchObservedRunningTime="2025-07-06 23:56:04.864697049 +0000 UTC m=+8.138731097"
Jul 6 23:56:05.431035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1173895558.mount: Deactivated successfully.
Jul 6 23:56:05.763880 containerd[1456]: time="2025-07-06T23:56:05.763815021Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:05.764736 containerd[1456]: time="2025-07-06T23:56:05.764671968Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 6 23:56:05.766019 containerd[1456]: time="2025-07-06T23:56:05.765991693Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:05.770695 containerd[1456]: time="2025-07-06T23:56:05.770625853Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:05.771359 containerd[1456]: time="2025-07-06T23:56:05.771329518Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.832774255s"
Jul 6 23:56:05.771411 containerd[1456]: time="2025-07-06T23:56:05.771361751Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 6 23:56:05.776044 containerd[1456]: time="2025-07-06T23:56:05.775996975Z" level=info msg="CreateContainer within sandbox \"209130c122e72f279585db52970807154c51c2f95b4a1eb792274ad5191b257b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 6 23:56:05.787337 containerd[1456]: time="2025-07-06T23:56:05.787296991Z" level=info msg="CreateContainer within sandbox \"209130c122e72f279585db52970807154c51c2f95b4a1eb792274ad5191b257b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"92273aadd30c041f0d86d7965f6b79bbe3293e8bc536f0a988e64a116bca48ca\""
Jul 6 23:56:05.787871 containerd[1456]: time="2025-07-06T23:56:05.787834817Z" level=info msg="StartContainer for \"92273aadd30c041f0d86d7965f6b79bbe3293e8bc536f0a988e64a116bca48ca\""
Jul 6 23:56:05.830002 systemd[1]: Started cri-containerd-92273aadd30c041f0d86d7965f6b79bbe3293e8bc536f0a988e64a116bca48ca.scope - libcontainer container 92273aadd30c041f0d86d7965f6b79bbe3293e8bc536f0a988e64a116bca48ca.
Jul 6 23:56:05.957555 containerd[1456]: time="2025-07-06T23:56:05.957443747Z" level=info msg="StartContainer for \"92273aadd30c041f0d86d7965f6b79bbe3293e8bc536f0a988e64a116bca48ca\" returns successfully"
Jul 6 23:56:05.960792 kubelet[2500]: E0706 23:56:05.960681 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:56:05.970213 kubelet[2500]: I0706 23:56:05.970107 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-8bsp9" podStartSLOduration=1.135094024 podStartE2EDuration="2.969546013s" podCreationTimestamp="2025-07-06 23:56:03 +0000 UTC" firstStartedPulling="2025-07-06 23:56:03.937648617 +0000 UTC m=+7.211682665" lastFinishedPulling="2025-07-06 23:56:05.772100606 +0000 UTC m=+9.046134654" observedRunningTime="2025-07-06 23:56:05.969310356 +0000 UTC m=+9.243344414" watchObservedRunningTime="2025-07-06 23:56:05.969546013 +0000 UTC m=+9.243580071"
Jul 6 23:56:06.042443 kubelet[2500]: E0706 23:56:06.042275 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:56:06.194587 kubelet[2500]: E0706 23:56:06.194542 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:56:06.965176 kubelet[2500]: E0706 23:56:06.963198 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:56:06.965176 kubelet[2500]: E0706 23:56:06.963839 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:56:07.967750 kubelet[2500]: E0706 23:56:07.967692 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:56:11.334916 sudo[1641]: pam_unix(sudo:session): session closed for user root
Jul 6 23:56:11.337751 sshd[1638]: pam_unix(sshd:session): session closed for user core
Jul 6 23:56:11.342782 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit.
Jul 6 23:56:11.343264 systemd[1]: sshd@6-10.0.0.101:22-10.0.0.1:39068.service: Deactivated successfully.
Jul 6 23:56:11.346303 systemd[1]: session-7.scope: Deactivated successfully.
Jul 6 23:56:11.346659 systemd[1]: session-7.scope: Consumed 5.557s CPU time, 161.4M memory peak, 0B memory swap peak.
Jul 6 23:56:11.351027 systemd-logind[1443]: Removed session 7.
Jul 6 23:56:13.694853 systemd[1]: Created slice kubepods-besteffort-pod23b51ab5_b93f_4b37_ba4e_a0d20a04e7b6.slice - libcontainer container kubepods-besteffort-pod23b51ab5_b93f_4b37_ba4e_a0d20a04e7b6.slice.
Jul 6 23:56:13.713603 kubelet[2500]: I0706 23:56:13.713445 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23b51ab5-b93f-4b37-ba4e-a0d20a04e7b6-tigera-ca-bundle\") pod \"calico-typha-69f99fc9bf-7ch49\" (UID: \"23b51ab5-b93f-4b37-ba4e-a0d20a04e7b6\") " pod="calico-system/calico-typha-69f99fc9bf-7ch49"
Jul 6 23:56:13.713603 kubelet[2500]: I0706 23:56:13.713486 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/23b51ab5-b93f-4b37-ba4e-a0d20a04e7b6-typha-certs\") pod \"calico-typha-69f99fc9bf-7ch49\" (UID: \"23b51ab5-b93f-4b37-ba4e-a0d20a04e7b6\") " pod="calico-system/calico-typha-69f99fc9bf-7ch49"
Jul 6 23:56:13.713603 kubelet[2500]: I0706 23:56:13.713505 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j97pt\" (UniqueName: \"kubernetes.io/projected/23b51ab5-b93f-4b37-ba4e-a0d20a04e7b6-kube-api-access-j97pt\") pod \"calico-typha-69f99fc9bf-7ch49\" (UID: \"23b51ab5-b93f-4b37-ba4e-a0d20a04e7b6\") " pod="calico-system/calico-typha-69f99fc9bf-7ch49"
Jul 6 23:56:14.002133 kubelet[2500]: E0706 23:56:14.001935 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:56:14.002757 containerd[1456]: time="2025-07-06T23:56:14.002683693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69f99fc9bf-7ch49,Uid:23b51ab5-b93f-4b37-ba4e-a0d20a04e7b6,Namespace:calico-system,Attempt:0,}"
Jul 6 23:56:14.456650 containerd[1456]: time="2025-07-06T23:56:14.456380322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:56:14.457032 containerd[1456]: time="2025-07-06T23:56:14.456782927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:56:14.457032 containerd[1456]: time="2025-07-06T23:56:14.456933670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:56:14.457549 containerd[1456]: time="2025-07-06T23:56:14.457499001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:56:14.476539 systemd[1]: Created slice kubepods-besteffort-pod1d11f27b_75f3_47e9_8a28_699663505841.slice - libcontainer container kubepods-besteffort-pod1d11f27b_75f3_47e9_8a28_699663505841.slice.
Jul 6 23:56:14.494906 systemd[1]: Started cri-containerd-83d35e8ed76da627c554fe1362e0fcc82de61d9504cad46067f5e30a5f0d3d90.scope - libcontainer container 83d35e8ed76da627c554fe1362e0fcc82de61d9504cad46067f5e30a5f0d3d90.
Jul 6 23:56:14.518593 kubelet[2500]: I0706 23:56:14.518539 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1d11f27b-75f3-47e9-8a28-699663505841-cni-bin-dir\") pod \"calico-node-4r4g2\" (UID: \"1d11f27b-75f3-47e9-8a28-699663505841\") " pod="calico-system/calico-node-4r4g2"
Jul 6 23:56:14.518593 kubelet[2500]: I0706 23:56:14.518582 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1d11f27b-75f3-47e9-8a28-699663505841-var-run-calico\") pod \"calico-node-4r4g2\" (UID: \"1d11f27b-75f3-47e9-8a28-699663505841\") " pod="calico-system/calico-node-4r4g2"
Jul 6 23:56:14.518593 kubelet[2500]: I0706 23:56:14.518607 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d11f27b-75f3-47e9-8a28-699663505841-xtables-lock\") pod \"calico-node-4r4g2\" (UID: \"1d11f27b-75f3-47e9-8a28-699663505841\") " pod="calico-system/calico-node-4r4g2"
Jul 6 23:56:14.518834 kubelet[2500]: I0706 23:56:14.518623 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1d11f27b-75f3-47e9-8a28-699663505841-cni-log-dir\") pod \"calico-node-4r4g2\" (UID: \"1d11f27b-75f3-47e9-8a28-699663505841\") " pod="calico-system/calico-node-4r4g2"
Jul 6 23:56:14.518834 kubelet[2500]: I0706 23:56:14.518650 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7t6p\" (UniqueName: \"kubernetes.io/projected/1d11f27b-75f3-47e9-8a28-699663505841-kube-api-access-j7t6p\") pod \"calico-node-4r4g2\" (UID: \"1d11f27b-75f3-47e9-8a28-699663505841\") " pod="calico-system/calico-node-4r4g2"
Jul 6 23:56:14.518834 kubelet[2500]: I0706 23:56:14.518697 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1d11f27b-75f3-47e9-8a28-699663505841-cni-net-dir\") pod \"calico-node-4r4g2\" (UID: \"1d11f27b-75f3-47e9-8a28-699663505841\") " pod="calico-system/calico-node-4r4g2"
Jul 6 23:56:14.518834 kubelet[2500]: I0706 23:56:14.518728 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1d11f27b-75f3-47e9-8a28-699663505841-var-lib-calico\") pod \"calico-node-4r4g2\" (UID: \"1d11f27b-75f3-47e9-8a28-699663505841\") " pod="calico-system/calico-node-4r4g2"
Jul 6 23:56:14.518834 kubelet[2500]: I0706 23:56:14.518746 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1d11f27b-75f3-47e9-8a28-699663505841-policysync\") pod \"calico-node-4r4g2\" (UID: \"1d11f27b-75f3-47e9-8a28-699663505841\") " pod="calico-system/calico-node-4r4g2"
Jul 6 23:56:14.518952 kubelet[2500]: I0706 23:56:14.518763 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d11f27b-75f3-47e9-8a28-699663505841-tigera-ca-bundle\") pod \"calico-node-4r4g2\" (UID: \"1d11f27b-75f3-47e9-8a28-699663505841\") " pod="calico-system/calico-node-4r4g2"
Jul 6 23:56:14.518952 kubelet[2500]: I0706 23:56:14.518780 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1d11f27b-75f3-47e9-8a28-699663505841-flexvol-driver-host\") pod \"calico-node-4r4g2\" (UID: \"1d11f27b-75f3-47e9-8a28-699663505841\") " pod="calico-system/calico-node-4r4g2"
Jul 6 23:56:14.518952 kubelet[2500]: I0706 23:56:14.518794 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d11f27b-75f3-47e9-8a28-699663505841-lib-modules\") pod \"calico-node-4r4g2\" (UID: \"1d11f27b-75f3-47e9-8a28-699663505841\") " pod="calico-system/calico-node-4r4g2"
Jul 6 23:56:14.518952 kubelet[2500]: I0706 23:56:14.518811 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1d11f27b-75f3-47e9-8a28-699663505841-node-certs\") pod \"calico-node-4r4g2\" (UID: \"1d11f27b-75f3-47e9-8a28-699663505841\") " pod="calico-system/calico-node-4r4g2"
Jul 6 23:56:14.536171 containerd[1456]: time="2025-07-06T23:56:14.536121595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69f99fc9bf-7ch49,Uid:23b51ab5-b93f-4b37-ba4e-a0d20a04e7b6,Namespace:calico-system,Attempt:0,} returns sandbox id \"83d35e8ed76da627c554fe1362e0fcc82de61d9504cad46067f5e30a5f0d3d90\""
Jul 6 23:56:14.536790 kubelet[2500]: E0706 23:56:14.536752 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:56:14.537616 containerd[1456]: time="2025-07-06T23:56:14.537588443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 6 23:56:14.573215 kubelet[2500]: E0706 23:56:14.573153 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8jzdf" podUID="573e83ed-8e01-4333-9a22-d115fe0e7655"
Jul 6 23:56:14.619112 kubelet[2500]: I0706 23:56:14.618971 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/573e83ed-8e01-4333-9a22-d115fe0e7655-kubelet-dir\") pod \"csi-node-driver-8jzdf\" (UID: \"573e83ed-8e01-4333-9a22-d115fe0e7655\") " pod="calico-system/csi-node-driver-8jzdf"
Jul 6 23:56:14.619112 kubelet[2500]: I0706 23:56:14.619027 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptt25\" (UniqueName: \"kubernetes.io/projected/573e83ed-8e01-4333-9a22-d115fe0e7655-kube-api-access-ptt25\") pod \"csi-node-driver-8jzdf\" (UID: \"573e83ed-8e01-4333-9a22-d115fe0e7655\") " pod="calico-system/csi-node-driver-8jzdf"
Jul 6 23:56:14.619317 kubelet[2500]: I0706 23:56:14.619207 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/573e83ed-8e01-4333-9a22-d115fe0e7655-socket-dir\") pod \"csi-node-driver-8jzdf\" (UID: \"573e83ed-8e01-4333-9a22-d115fe0e7655\") " pod="calico-system/csi-node-driver-8jzdf"
Jul 6 23:56:14.619378 kubelet[2500]: I0706 23:56:14.619312 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/573e83ed-8e01-4333-9a22-d115fe0e7655-registration-dir\") pod \"csi-node-driver-8jzdf\" (UID: \"573e83ed-8e01-4333-9a22-d115fe0e7655\") " pod="calico-system/csi-node-driver-8jzdf"
Jul 6 23:56:14.619644 kubelet[2500]: I0706 23:56:14.619605 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/573e83ed-8e01-4333-9a22-d115fe0e7655-varrun\") pod \"csi-node-driver-8jzdf\" (UID: \"573e83ed-8e01-4333-9a22-d115fe0e7655\") " pod="calico-system/csi-node-driver-8jzdf"
Jul 6 23:56:14.621534 kubelet[2500]: E0706 23:56:14.621504 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.621534 kubelet[2500]: W0706 23:56:14.621524 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.621639 kubelet[2500]: E0706 23:56:14.621559 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.623849 kubelet[2500]: E0706 23:56:14.623832 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.623849 kubelet[2500]: W0706 23:56:14.623846 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.623925 kubelet[2500]: E0706 23:56:14.623858 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.626810 kubelet[2500]: E0706 23:56:14.626794 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.626810 kubelet[2500]: W0706 23:56:14.626807 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.626888 kubelet[2500]: E0706 23:56:14.626818 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.721306 kubelet[2500]: E0706 23:56:14.721161 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.721306 kubelet[2500]: W0706 23:56:14.721187 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.721306 kubelet[2500]: E0706 23:56:14.721211 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.722402 kubelet[2500]: E0706 23:56:14.722361 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.722402 kubelet[2500]: W0706 23:56:14.722379 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.722402 kubelet[2500]: E0706 23:56:14.722414 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.722740 kubelet[2500]: E0706 23:56:14.722722 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.722740 kubelet[2500]: W0706 23:56:14.722736 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.722930 kubelet[2500]: E0706 23:56:14.722899 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.723106 kubelet[2500]: E0706 23:56:14.723088 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.723152 kubelet[2500]: W0706 23:56:14.723106 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.723152 kubelet[2500]: E0706 23:56:14.723137 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.723368 kubelet[2500]: E0706 23:56:14.723349 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.723368 kubelet[2500]: W0706 23:56:14.723365 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.723427 kubelet[2500]: E0706 23:56:14.723383 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.723660 kubelet[2500]: E0706 23:56:14.723640 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.723660 kubelet[2500]: W0706 23:56:14.723657 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.723740 kubelet[2500]: E0706 23:56:14.723675 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.724015 kubelet[2500]: E0706 23:56:14.723995 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.724015 kubelet[2500]: W0706 23:56:14.724012 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.724074 kubelet[2500]: E0706 23:56:14.724034 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.724345 kubelet[2500]: E0706 23:56:14.724327 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.724345 kubelet[2500]: W0706 23:56:14.724343 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.724403 kubelet[2500]: E0706 23:56:14.724383 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.724822 kubelet[2500]: E0706 23:56:14.724626 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.724822 kubelet[2500]: W0706 23:56:14.724655 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.724822 kubelet[2500]: E0706 23:56:14.724738 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.725016 kubelet[2500]: E0706 23:56:14.724994 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.725016 kubelet[2500]: W0706 23:56:14.725006 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.725152 kubelet[2500]: E0706 23:56:14.725038 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.725214 kubelet[2500]: E0706 23:56:14.725193 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.725214 kubelet[2500]: W0706 23:56:14.725208 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.725319 kubelet[2500]: E0706 23:56:14.725251 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.725493 kubelet[2500]: E0706 23:56:14.725475 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.725493 kubelet[2500]: W0706 23:56:14.725491 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.725574 kubelet[2500]: E0706 23:56:14.725514 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.725797 kubelet[2500]: E0706 23:56:14.725759 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.725797 kubelet[2500]: W0706 23:56:14.725776 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.725797 kubelet[2500]: E0706 23:56:14.725793 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.726066 kubelet[2500]: E0706 23:56:14.726033 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.726066 kubelet[2500]: W0706 23:56:14.726052 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.726178 kubelet[2500]: E0706 23:56:14.726073 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.726539 kubelet[2500]: E0706 23:56:14.726381 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.726539 kubelet[2500]: W0706 23:56:14.726401 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.726855 kubelet[2500]: E0706 23:56:14.726829 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.727706 kubelet[2500]: E0706 23:56:14.727435 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.727706 kubelet[2500]: W0706 23:56:14.727454 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.727706 kubelet[2500]: E0706 23:56:14.727534 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.727838 kubelet[2500]: E0706 23:56:14.727757 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.727838 kubelet[2500]: W0706 23:56:14.727767 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.727914 kubelet[2500]: E0706 23:56:14.727873 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.728061 kubelet[2500]: E0706 23:56:14.728043 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.728101 kubelet[2500]: W0706 23:56:14.728059 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.728124 kubelet[2500]: E0706 23:56:14.728100 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.728335 kubelet[2500]: E0706 23:56:14.728316 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.728335 kubelet[2500]: W0706 23:56:14.728333 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.728395 kubelet[2500]: E0706 23:56:14.728360 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.728573 kubelet[2500]: E0706 23:56:14.728552 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.728573 kubelet[2500]: W0706 23:56:14.728569 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.728632 kubelet[2500]: E0706 23:56:14.728586 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.728966 kubelet[2500]: E0706 23:56:14.728947 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.728966 kubelet[2500]: W0706 23:56:14.728961 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.729030 kubelet[2500]: E0706 23:56:14.728976 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.729275 kubelet[2500]: E0706 23:56:14.729238 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.729275 kubelet[2500]: W0706 23:56:14.729270 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.729362 kubelet[2500]: E0706 23:56:14.729284 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.729572 kubelet[2500]: E0706 23:56:14.729549 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.729572 kubelet[2500]: W0706 23:56:14.729566 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.729676 kubelet[2500]: E0706 23:56:14.729588 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.729959 kubelet[2500]: E0706 23:56:14.729942 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.729959 kubelet[2500]: W0706 23:56:14.729955 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.730045 kubelet[2500]: E0706 23:56:14.729970 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.730255 kubelet[2500]: E0706 23:56:14.730237 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.730292 kubelet[2500]: W0706 23:56:14.730255 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.730292 kubelet[2500]: E0706 23:56:14.730269 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.733480 kubelet[2500]: E0706 23:56:14.733457 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:56:14.733480 kubelet[2500]: W0706 23:56:14.733472 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:56:14.733625 kubelet[2500]: E0706 23:56:14.733487 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:56:14.779638 containerd[1456]: time="2025-07-06T23:56:14.779553632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4r4g2,Uid:1d11f27b-75f3-47e9-8a28-699663505841,Namespace:calico-system,Attempt:0,}"
Jul 6 23:56:14.807412 containerd[1456]: time="2025-07-06T23:56:14.807303524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:56:14.807412 containerd[1456]: time="2025-07-06T23:56:14.807365856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:56:14.807412 containerd[1456]: time="2025-07-06T23:56:14.807380254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:56:14.807620 containerd[1456]: time="2025-07-06T23:56:14.807477584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:56:14.827881 systemd[1]: Started cri-containerd-2284be865e3339dbd61747c76841538a3ea100266d9e3b73bdd3146c5cfb0e9c.scope - libcontainer container 2284be865e3339dbd61747c76841538a3ea100266d9e3b73bdd3146c5cfb0e9c.
Jul 6 23:56:14.855314 containerd[1456]: time="2025-07-06T23:56:14.855249535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4r4g2,Uid:1d11f27b-75f3-47e9-8a28-699663505841,Namespace:calico-system,Attempt:0,} returns sandbox id \"2284be865e3339dbd61747c76841538a3ea100266d9e3b73bdd3146c5cfb0e9c\""
Jul 6 23:56:15.828912 kubelet[2500]: E0706 23:56:15.828673 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8jzdf" podUID="573e83ed-8e01-4333-9a22-d115fe0e7655"
Jul 6 23:56:16.079012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1486788800.mount: Deactivated successfully.
Jul 6 23:56:17.222666 containerd[1456]: time="2025-07-06T23:56:17.222611280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:17.223572 containerd[1456]: time="2025-07-06T23:56:17.223495326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Jul 6 23:56:17.224913 containerd[1456]: time="2025-07-06T23:56:17.224877177Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:17.227736 containerd[1456]: time="2025-07-06T23:56:17.227686098Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:56:17.228890 containerd[1456]: time="2025-07-06T23:56:17.228832111Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.691173302s"
Jul 6 23:56:17.228990 containerd[1456]: time="2025-07-06T23:56:17.228895294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Jul 6 23:56:17.233481 containerd[1456]: time="2025-07-06T23:56:17.233452215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 6 23:56:17.264745 containerd[1456]: time="2025-07-06T23:56:17.264691282Z" level=info msg="CreateContainer within sandbox \"83d35e8ed76da627c554fe1362e0fcc82de61d9504cad46067f5e30a5f0d3d90\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 6 23:56:17.282320 containerd[1456]: time="2025-07-06T23:56:17.282289400Z" level=info msg="CreateContainer within sandbox \"83d35e8ed76da627c554fe1362e0fcc82de61d9504cad46067f5e30a5f0d3d90\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9ce2eff34b1c58cf05a324130d6111c802983e0579e348196a4601b6a549ebb5\""
Jul 6 23:56:17.283045 containerd[1456]: time="2025-07-06T23:56:17.283020197Z" level=info msg="StartContainer for \"9ce2eff34b1c58cf05a324130d6111c802983e0579e348196a4601b6a549ebb5\""
Jul 6 23:56:17.324860 systemd[1]: Started cri-containerd-9ce2eff34b1c58cf05a324130d6111c802983e0579e348196a4601b6a549ebb5.scope - libcontainer container 9ce2eff34b1c58cf05a324130d6111c802983e0579e348196a4601b6a549ebb5.
Jul 6 23:56:17.403487 containerd[1456]: time="2025-07-06T23:56:17.403375715Z" level=info msg="StartContainer for \"9ce2eff34b1c58cf05a324130d6111c802983e0579e348196a4601b6a549ebb5\" returns successfully" Jul 6 23:56:17.830429 kubelet[2500]: E0706 23:56:17.830340 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8jzdf" podUID="573e83ed-8e01-4333-9a22-d115fe0e7655" Jul 6 23:56:18.003211 kubelet[2500]: E0706 23:56:18.003154 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:18.021663 kubelet[2500]: I0706 23:56:18.021594 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-69f99fc9bf-7ch49" podStartSLOduration=2.325592857 podStartE2EDuration="5.021567518s" podCreationTimestamp="2025-07-06 23:56:13 +0000 UTC" firstStartedPulling="2025-07-06 23:56:14.537304399 +0000 UTC m=+17.811338447" lastFinishedPulling="2025-07-06 23:56:17.23327906 +0000 UTC m=+20.507313108" observedRunningTime="2025-07-06 23:56:18.019187925 +0000 UTC m=+21.293221993" watchObservedRunningTime="2025-07-06 23:56:18.021567518 +0000 UTC m=+21.295601566" Jul 6 23:56:18.038359 kubelet[2500]: E0706 23:56:18.038264 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:18.038359 kubelet[2500]: W0706 23:56:18.038321 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:18.039156 kubelet[2500]: E0706 23:56:18.039123 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... this driver-call.go / plugins.go failure triple repeats, identical except for timestamps, roughly thirty more times through Jul 6 23:56:18.061 ...]
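[Editor's aside: the driver-call.go/plugins.go triple above is kubelet's FlexVolume prober failing because /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist yet. Kubelet execs "<driver> init" and parses stdout as JSON, so an exec failure with empty output surfaces as "unexpected end of JSON input"; the spam stops once the calico-node pod's flexvol-driver container (started at 23:56:19.7 below) installs the real uds binary. Purely as a hypothetical illustration of the call contract, not Calico's actual driver, an init handler must answer roughly like this:]

    #!/usr/bin/env python3
    # Hypothetical stand-in for a FlexVolume driver binary, illustrating the
    # protocol kubelet expects: the driver is invoked as "<driver> <op> [args]"
    # and must print a JSON status object on stdout.
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # "attach": false tells kubelet this driver has no attach phase.
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
            return 0
        # Unsupported operations must still answer in JSON.
        print(json.dumps({"status": "Not supported", "message": op}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())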
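[Editor's aside: the recurring "Nameserver limits exceeded" entries (23:56:18.003 above, 23:56:19.033 just below) are kubelet warning that a pod's resolv.conf may carry at most three nameservers, so extra entries inherited from the node are dropped. A hypothetical node /etc/resolv.conf that would produce exactly this warning; the fourth server is an assumption, since the log only shows the three that survived:]

    # /etc/resolv.conf (hypothetical)
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4   # anything past the third entry is omitted by kubelet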
Jul 6 23:56:19.032494 kubelet[2500]: I0706 23:56:19.032442 2500 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:56:19.033010 kubelet[2500]: E0706 23:56:19.032867 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:19.050544 kubelet[2500]: E0706 23:56:19.050513 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:19.050544 kubelet[2500]: W0706 23:56:19.050532 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:19.050662 kubelet[2500]: E0706 23:56:19.050551 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same failure triple repeats, identical except for timestamps, roughly thirty more times; only the final occurrence is kept below ...]
Jul 6 23:56:19.068910 kubelet[2500]: E0706 23:56:19.068894 2500 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:56:19.068910 kubelet[2500]: W0706 23:56:19.068906 2500 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:56:19.068970 kubelet[2500]: E0706 23:56:19.068915 2500 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 6 23:56:19.599422 containerd[1456]: time="2025-07-06T23:56:19.599324544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:19.600334 containerd[1456]: time="2025-07-06T23:56:19.600256128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 6 23:56:19.602337 containerd[1456]: time="2025-07-06T23:56:19.602275929Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:19.605164 containerd[1456]: time="2025-07-06T23:56:19.605101028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:19.606042 containerd[1456]: time="2025-07-06T23:56:19.605990942Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 2.372504941s" Jul 6 23:56:19.606042 containerd[1456]: time="2025-07-06T23:56:19.606044426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 6 23:56:19.608632 containerd[1456]: time="2025-07-06T23:56:19.608584313Z" level=info msg="CreateContainer within sandbox \"2284be865e3339dbd61747c76841538a3ea100266d9e3b73bdd3146c5cfb0e9c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 6 23:56:19.625204 containerd[1456]: time="2025-07-06T23:56:19.625142463Z" level=info msg="CreateContainer within sandbox \"2284be865e3339dbd61747c76841538a3ea100266d9e3b73bdd3146c5cfb0e9c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0f5347825b17fcd52b74a13ef83c2cb35e7eb5ca77783e174b18a9c01f5ca6c6\"" Jul 6 23:56:19.625764 containerd[1456]: time="2025-07-06T23:56:19.625734650Z" level=info msg="StartContainer for \"0f5347825b17fcd52b74a13ef83c2cb35e7eb5ca77783e174b18a9c01f5ca6c6\"" Jul 6 23:56:19.670939 systemd[1]: Started cri-containerd-0f5347825b17fcd52b74a13ef83c2cb35e7eb5ca77783e174b18a9c01f5ca6c6.scope - libcontainer container 0f5347825b17fcd52b74a13ef83c2cb35e7eb5ca77783e174b18a9c01f5ca6c6. Jul 6 23:56:19.717349 containerd[1456]: time="2025-07-06T23:56:19.717274968Z" level=info msg="StartContainer for \"0f5347825b17fcd52b74a13ef83c2cb35e7eb5ca77783e174b18a9c01f5ca6c6\" returns successfully" Jul 6 23:56:19.730466 systemd[1]: cri-containerd-0f5347825b17fcd52b74a13ef83c2cb35e7eb5ca77783e174b18a9c01f5ca6c6.scope: Deactivated successfully. Jul 6 23:56:19.760729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f5347825b17fcd52b74a13ef83c2cb35e7eb5ca77783e174b18a9c01f5ca6c6-rootfs.mount: Deactivated successfully. 
Jul 6 23:56:19.828838 kubelet[2500]: E0706 23:56:19.828761 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8jzdf" podUID="573e83ed-8e01-4333-9a22-d115fe0e7655" Jul 6 23:56:20.093327 containerd[1456]: time="2025-07-06T23:56:20.089426174Z" level=info msg="shim disconnected" id=0f5347825b17fcd52b74a13ef83c2cb35e7eb5ca77783e174b18a9c01f5ca6c6 namespace=k8s.io Jul 6 23:56:20.093654 containerd[1456]: time="2025-07-06T23:56:20.093340806Z" level=warning msg="cleaning up after shim disconnected" id=0f5347825b17fcd52b74a13ef83c2cb35e7eb5ca77783e174b18a9c01f5ca6c6 namespace=k8s.io Jul 6 23:56:20.093654 containerd[1456]: time="2025-07-06T23:56:20.093363039Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:56:21.011275 containerd[1456]: time="2025-07-06T23:56:21.011222402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 6 23:56:21.829084 kubelet[2500]: E0706 23:56:21.829000 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8jzdf" podUID="573e83ed-8e01-4333-9a22-d115fe0e7655" Jul 6 23:56:23.829243 kubelet[2500]: E0706 23:56:23.829194 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8jzdf" podUID="573e83ed-8e01-4333-9a22-d115fe0e7655" Jul 6 23:56:25.829130 kubelet[2500]: E0706 23:56:25.829055 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8jzdf" podUID="573e83ed-8e01-4333-9a22-d115fe0e7655" Jul 6 23:56:26.152199 containerd[1456]: time="2025-07-06T23:56:26.152005452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:26.153326 containerd[1456]: time="2025-07-06T23:56:26.153242473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 6 23:56:26.154846 containerd[1456]: time="2025-07-06T23:56:26.154803679Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:26.158709 containerd[1456]: time="2025-07-06T23:56:26.158664681Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:26.159478 containerd[1456]: time="2025-07-06T23:56:26.159437309Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 5.14817535s" Jul 6 23:56:26.159478 containerd[1456]: time="2025-07-06T23:56:26.159471965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 6 23:56:26.162064 containerd[1456]: time="2025-07-06T23:56:26.162024449Z" level=info msg="CreateContainer within sandbox \"2284be865e3339dbd61747c76841538a3ea100266d9e3b73bdd3146c5cfb0e9c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 6 23:56:26.185499 containerd[1456]: time="2025-07-06T23:56:26.185443041Z" level=info msg="CreateContainer within sandbox \"2284be865e3339dbd61747c76841538a3ea100266d9e3b73bdd3146c5cfb0e9c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4085d4d8b50681258e73d57c6ceb7b6eba8120025a91652e8c3ae5105eeedbac\"" Jul 6 23:56:26.186114 containerd[1456]: time="2025-07-06T23:56:26.186049538Z" level=info msg="StartContainer for \"4085d4d8b50681258e73d57c6ceb7b6eba8120025a91652e8c3ae5105eeedbac\"" Jul 6 23:56:26.223858 systemd[1]: Started cri-containerd-4085d4d8b50681258e73d57c6ceb7b6eba8120025a91652e8c3ae5105eeedbac.scope - libcontainer container 4085d4d8b50681258e73d57c6ceb7b6eba8120025a91652e8c3ae5105eeedbac. Jul 6 23:56:26.260454 containerd[1456]: time="2025-07-06T23:56:26.260408683Z" level=info msg="StartContainer for \"4085d4d8b50681258e73d57c6ceb7b6eba8120025a91652e8c3ae5105eeedbac\" returns successfully" Jul 6 23:56:27.827515 systemd[1]: cri-containerd-4085d4d8b50681258e73d57c6ceb7b6eba8120025a91652e8c3ae5105eeedbac.scope: Deactivated successfully. Jul 6 23:56:27.830016 kubelet[2500]: E0706 23:56:27.828691 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8jzdf" podUID="573e83ed-8e01-4333-9a22-d115fe0e7655" Jul 6 23:56:27.843914 kubelet[2500]: I0706 23:56:27.843095 2500 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 6 23:56:27.862174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4085d4d8b50681258e73d57c6ceb7b6eba8120025a91652e8c3ae5105eeedbac-rootfs.mount: Deactivated successfully. Jul 6 23:56:27.884474 systemd[1]: Created slice kubepods-besteffort-pod6a268ee2_aef1_470f_80a4_afd51b82bfec.slice - libcontainer container kubepods-besteffort-pod6a268ee2_aef1_470f_80a4_afd51b82bfec.slice. Jul 6 23:56:27.891398 systemd[1]: Created slice kubepods-burstable-poddd93b232_dbe2_459a_a97f_dd73be2c49bc.slice - libcontainer container kubepods-burstable-poddd93b232_dbe2_459a_a97f_dd73be2c49bc.slice. Jul 6 23:56:27.897839 systemd[1]: Created slice kubepods-besteffort-pod4d8809a0_b8bb_4f42_8da7_d29046d2f152.slice - libcontainer container kubepods-besteffort-pod4d8809a0_b8bb_4f42_8da7_d29046d2f152.slice. Jul 6 23:56:27.901931 systemd[1]: Created slice kubepods-burstable-pod580e7206_665f_4270_aab2_39eaf9dc4990.slice - libcontainer container kubepods-burstable-pod580e7206_665f_4270_aab2_39eaf9dc4990.slice. Jul 6 23:56:27.907211 systemd[1]: Created slice kubepods-besteffort-pode80de9bd_3003_481f_b571_cacd2045a049.slice - libcontainer container kubepods-besteffort-pode80de9bd_3003_481f_b571_cacd2045a049.slice. 
Jul 6 23:56:27.913234 systemd[1]: Created slice kubepods-besteffort-pod0911fce1_3f9f_4337_b200_a55b72bf320f.slice - libcontainer container kubepods-besteffort-pod0911fce1_3f9f_4337_b200_a55b72bf320f.slice. Jul 6 23:56:27.921330 systemd[1]: Created slice kubepods-besteffort-pod0aebe8d7_c736_40fd_a06c_2169cc2c7e1f.slice - libcontainer container kubepods-besteffort-pod0aebe8d7_c736_40fd_a06c_2169cc2c7e1f.slice. Jul 6 23:56:27.922196 containerd[1456]: time="2025-07-06T23:56:27.922004760Z" level=info msg="shim disconnected" id=4085d4d8b50681258e73d57c6ceb7b6eba8120025a91652e8c3ae5105eeedbac namespace=k8s.io Jul 6 23:56:27.922564 containerd[1456]: time="2025-07-06T23:56:27.922326619Z" level=warning msg="cleaning up after shim disconnected" id=4085d4d8b50681258e73d57c6ceb7b6eba8120025a91652e8c3ae5105eeedbac namespace=k8s.io Jul 6 23:56:27.922564 containerd[1456]: time="2025-07-06T23:56:27.922342851Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:56:27.935796 kubelet[2500]: I0706 23:56:27.935172 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8jwh\" (UniqueName: \"kubernetes.io/projected/0911fce1-3f9f-4337-b200-a55b72bf320f-kube-api-access-r8jwh\") pod \"calico-apiserver-5cf7666946-9ld94\" (UID: \"0911fce1-3f9f-4337-b200-a55b72bf320f\") " pod="calico-apiserver/calico-apiserver-5cf7666946-9ld94" Jul 6 23:56:27.936844 kubelet[2500]: I0706 23:56:27.935896 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klwls\" (UniqueName: \"kubernetes.io/projected/4d8809a0-b8bb-4f42-8da7-d29046d2f152-kube-api-access-klwls\") pod \"calico-kube-controllers-5b86b4658f-c279f\" (UID: \"4d8809a0-b8bb-4f42-8da7-d29046d2f152\") " pod="calico-system/calico-kube-controllers-5b86b4658f-c279f" Jul 6 23:56:27.936844 kubelet[2500]: I0706 23:56:27.935928 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/580e7206-665f-4270-aab2-39eaf9dc4990-config-volume\") pod \"coredns-668d6bf9bc-4rlpb\" (UID: \"580e7206-665f-4270-aab2-39eaf9dc4990\") " pod="kube-system/coredns-668d6bf9bc-4rlpb" Jul 6 23:56:27.936844 kubelet[2500]: I0706 23:56:27.935944 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btmj2\" (UniqueName: \"kubernetes.io/projected/6a268ee2-aef1-470f-80a4-afd51b82bfec-kube-api-access-btmj2\") pod \"whisker-8666bcbf65-nclb4\" (UID: \"6a268ee2-aef1-470f-80a4-afd51b82bfec\") " pod="calico-system/whisker-8666bcbf65-nclb4" Jul 6 23:56:27.936844 kubelet[2500]: I0706 23:56:27.935960 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd93b232-dbe2-459a-a97f-dd73be2c49bc-config-volume\") pod \"coredns-668d6bf9bc-29zf2\" (UID: \"dd93b232-dbe2-459a-a97f-dd73be2c49bc\") " pod="kube-system/coredns-668d6bf9bc-29zf2" Jul 6 23:56:27.936844 kubelet[2500]: I0706 23:56:27.935978 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e80de9bd-3003-481f-b571-cacd2045a049-config\") pod \"goldmane-768f4c5c69-zk8l4\" (UID: \"e80de9bd-3003-481f-b571-cacd2045a049\") " pod="calico-system/goldmane-768f4c5c69-zk8l4" Jul 6 23:56:27.937223 kubelet[2500]: I0706 23:56:27.935998 2500 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0911fce1-3f9f-4337-b200-a55b72bf320f-calico-apiserver-certs\") pod \"calico-apiserver-5cf7666946-9ld94\" (UID: \"0911fce1-3f9f-4337-b200-a55b72bf320f\") " pod="calico-apiserver/calico-apiserver-5cf7666946-9ld94" Jul 6 23:56:27.937223 kubelet[2500]: I0706 23:56:27.936013 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e80de9bd-3003-481f-b571-cacd2045a049-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-zk8l4\" (UID: \"e80de9bd-3003-481f-b571-cacd2045a049\") " pod="calico-system/goldmane-768f4c5c69-zk8l4" Jul 6 23:56:27.937223 kubelet[2500]: I0706 23:56:27.936030 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e80de9bd-3003-481f-b571-cacd2045a049-goldmane-key-pair\") pod \"goldmane-768f4c5c69-zk8l4\" (UID: \"e80de9bd-3003-481f-b571-cacd2045a049\") " pod="calico-system/goldmane-768f4c5c69-zk8l4" Jul 6 23:56:27.937223 kubelet[2500]: I0706 23:56:27.936081 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d8809a0-b8bb-4f42-8da7-d29046d2f152-tigera-ca-bundle\") pod \"calico-kube-controllers-5b86b4658f-c279f\" (UID: \"4d8809a0-b8bb-4f42-8da7-d29046d2f152\") " pod="calico-system/calico-kube-controllers-5b86b4658f-c279f" Jul 6 23:56:27.937223 kubelet[2500]: I0706 23:56:27.936105 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c95j7\" (UniqueName: \"kubernetes.io/projected/580e7206-665f-4270-aab2-39eaf9dc4990-kube-api-access-c95j7\") pod \"coredns-668d6bf9bc-4rlpb\" (UID: \"580e7206-665f-4270-aab2-39eaf9dc4990\") " pod="kube-system/coredns-668d6bf9bc-4rlpb" Jul 6 23:56:27.937433 kubelet[2500]: I0706 23:56:27.936123 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s46q\" (UniqueName: \"kubernetes.io/projected/dd93b232-dbe2-459a-a97f-dd73be2c49bc-kube-api-access-5s46q\") pod \"coredns-668d6bf9bc-29zf2\" (UID: \"dd93b232-dbe2-459a-a97f-dd73be2c49bc\") " pod="kube-system/coredns-668d6bf9bc-29zf2" Jul 6 23:56:27.937433 kubelet[2500]: I0706 23:56:27.936147 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqz9b\" (UniqueName: \"kubernetes.io/projected/0aebe8d7-c736-40fd-a06c-2169cc2c7e1f-kube-api-access-cqz9b\") pod \"calico-apiserver-5cf7666946-t2wmg\" (UID: \"0aebe8d7-c736-40fd-a06c-2169cc2c7e1f\") " pod="calico-apiserver/calico-apiserver-5cf7666946-t2wmg" Jul 6 23:56:27.937433 kubelet[2500]: I0706 23:56:27.936166 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqp96\" (UniqueName: \"kubernetes.io/projected/e80de9bd-3003-481f-b571-cacd2045a049-kube-api-access-kqp96\") pod \"goldmane-768f4c5c69-zk8l4\" (UID: \"e80de9bd-3003-481f-b571-cacd2045a049\") " pod="calico-system/goldmane-768f4c5c69-zk8l4" Jul 6 23:56:27.937433 kubelet[2500]: I0706 23:56:27.936182 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/6a268ee2-aef1-470f-80a4-afd51b82bfec-whisker-backend-key-pair\") pod \"whisker-8666bcbf65-nclb4\" (UID: \"6a268ee2-aef1-470f-80a4-afd51b82bfec\") " pod="calico-system/whisker-8666bcbf65-nclb4" Jul 6 23:56:27.937433 kubelet[2500]: I0706 23:56:27.936202 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a268ee2-aef1-470f-80a4-afd51b82bfec-whisker-ca-bundle\") pod \"whisker-8666bcbf65-nclb4\" (UID: \"6a268ee2-aef1-470f-80a4-afd51b82bfec\") " pod="calico-system/whisker-8666bcbf65-nclb4" Jul 6 23:56:27.937625 kubelet[2500]: I0706 23:56:27.936219 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0aebe8d7-c736-40fd-a06c-2169cc2c7e1f-calico-apiserver-certs\") pod \"calico-apiserver-5cf7666946-t2wmg\" (UID: \"0aebe8d7-c736-40fd-a06c-2169cc2c7e1f\") " pod="calico-apiserver/calico-apiserver-5cf7666946-t2wmg" Jul 6 23:56:28.028926 containerd[1456]: time="2025-07-06T23:56:28.028864808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 6 23:56:28.219846 kubelet[2500]: E0706 23:56:28.218961 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:28.220852 kubelet[2500]: E0706 23:56:28.220036 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:28.220925 containerd[1456]: time="2025-07-06T23:56:28.220214424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b86b4658f-c279f,Uid:4d8809a0-b8bb-4f42-8da7-d29046d2f152,Namespace:calico-system,Attempt:0,}" Jul 6 23:56:28.220925 containerd[1456]: time="2025-07-06T23:56:28.220269138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4rlpb,Uid:580e7206-665f-4270-aab2-39eaf9dc4990,Namespace:kube-system,Attempt:0,}" Jul 6 23:56:28.220983 containerd[1456]: time="2025-07-06T23:56:28.220970146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-zk8l4,Uid:e80de9bd-3003-481f-b571-cacd2045a049,Namespace:calico-system,Attempt:0,}" Jul 6 23:56:28.221107 containerd[1456]: time="2025-07-06T23:56:28.221085318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8666bcbf65-nclb4,Uid:6a268ee2-aef1-470f-80a4-afd51b82bfec,Namespace:calico-system,Attempt:0,}" Jul 6 23:56:28.221275 containerd[1456]: time="2025-07-06T23:56:28.221111127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-29zf2,Uid:dd93b232-dbe2-459a-a97f-dd73be2c49bc,Namespace:kube-system,Attempt:0,}" Jul 6 23:56:28.221451 containerd[1456]: time="2025-07-06T23:56:28.221118332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf7666946-9ld94,Uid:0911fce1-3f9f-4337-b200-a55b72bf320f,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:56:28.227942 containerd[1456]: time="2025-07-06T23:56:28.227892005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf7666946-t2wmg,Uid:0aebe8d7-c736-40fd-a06c-2169cc2c7e1f,Namespace:calico-apiserver,Attempt:0,}" Jul 6 23:56:28.409555 kubelet[2500]: I0706 23:56:28.409410 2500 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:56:28.410472 
kubelet[2500]: E0706 23:56:28.409847 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:28.460695 containerd[1456]: time="2025-07-06T23:56:28.460645747Z" level=error msg="Failed to destroy network for sandbox \"354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.466501 containerd[1456]: time="2025-07-06T23:56:28.466419488Z" level=error msg="Failed to destroy network for sandbox \"7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.468735 containerd[1456]: time="2025-07-06T23:56:28.468646240Z" level=error msg="encountered an error cleaning up failed sandbox \"7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.468998 containerd[1456]: time="2025-07-06T23:56:28.468832318Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-zk8l4,Uid:e80de9bd-3003-481f-b571-cacd2045a049,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.470669 containerd[1456]: time="2025-07-06T23:56:28.470528118Z" level=error msg="encountered an error cleaning up failed sandbox \"354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.470951 containerd[1456]: time="2025-07-06T23:56:28.470856028Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4rlpb,Uid:580e7206-665f-4270-aab2-39eaf9dc4990,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.479815 containerd[1456]: time="2025-07-06T23:56:28.479675606Z" level=error msg="Failed to destroy network for sandbox \"ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.480210 containerd[1456]: time="2025-07-06T23:56:28.480185937Z" level=error msg="encountered an error cleaning up failed sandbox 
\"ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.480323 containerd[1456]: time="2025-07-06T23:56:28.480302009Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-29zf2,Uid:dd93b232-dbe2-459a-a97f-dd73be2c49bc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.490397 kubelet[2500]: E0706 23:56:28.490339 2500 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.490856 kubelet[2500]: E0706 23:56:28.490664 2500 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-29zf2" Jul 6 23:56:28.490856 kubelet[2500]: E0706 23:56:28.490725 2500 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-29zf2" Jul 6 23:56:28.490856 kubelet[2500]: E0706 23:56:28.490796 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-29zf2_kube-system(dd93b232-dbe2-459a-a97f-dd73be2c49bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-29zf2_kube-system(dd93b232-dbe2-459a-a97f-dd73be2c49bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-29zf2" podUID="dd93b232-dbe2-459a-a97f-dd73be2c49bc" Jul 6 23:56:28.493231 kubelet[2500]: E0706 23:56:28.490454 2500 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.493231 kubelet[2500]: E0706 
23:56:28.492820 2500 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-zk8l4" Jul 6 23:56:28.493231 kubelet[2500]: E0706 23:56:28.492853 2500 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-zk8l4" Jul 6 23:56:28.493376 kubelet[2500]: E0706 23:56:28.492908 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-zk8l4_calico-system(e80de9bd-3003-481f-b571-cacd2045a049)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-zk8l4_calico-system(e80de9bd-3003-481f-b571-cacd2045a049)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-zk8l4" podUID="e80de9bd-3003-481f-b571-cacd2045a049" Jul 6 23:56:28.493376 kubelet[2500]: E0706 23:56:28.490402 2500 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.493376 kubelet[2500]: E0706 23:56:28.492957 2500 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4rlpb" Jul 6 23:56:28.493631 kubelet[2500]: E0706 23:56:28.492968 2500 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4rlpb" Jul 6 23:56:28.493631 kubelet[2500]: E0706 23:56:28.492988 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4rlpb_kube-system(580e7206-665f-4270-aab2-39eaf9dc4990)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4rlpb_kube-system(580e7206-665f-4270-aab2-39eaf9dc4990)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4rlpb" podUID="580e7206-665f-4270-aab2-39eaf9dc4990" Jul 6 23:56:28.501402 containerd[1456]: time="2025-07-06T23:56:28.501341222Z" level=error msg="Failed to destroy network for sandbox \"c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.502303 containerd[1456]: time="2025-07-06T23:56:28.502251121Z" level=error msg="encountered an error cleaning up failed sandbox \"c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.502303 containerd[1456]: time="2025-07-06T23:56:28.502309343Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf7666946-9ld94,Uid:0911fce1-3f9f-4337-b200-a55b72bf320f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.502590 kubelet[2500]: E0706 23:56:28.502543 2500 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.502667 kubelet[2500]: E0706 23:56:28.502605 2500 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cf7666946-9ld94" Jul 6 23:56:28.502667 kubelet[2500]: E0706 23:56:28.502623 2500 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cf7666946-9ld94" Jul 6 23:56:28.502733 kubelet[2500]: E0706 23:56:28.502658 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cf7666946-9ld94_calico-apiserver(0911fce1-3f9f-4337-b200-a55b72bf320f)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"calico-apiserver-5cf7666946-9ld94_calico-apiserver(0911fce1-3f9f-4337-b200-a55b72bf320f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cf7666946-9ld94" podUID="0911fce1-3f9f-4337-b200-a55b72bf320f" Jul 6 23:56:28.505061 containerd[1456]: time="2025-07-06T23:56:28.504259142Z" level=error msg="Failed to destroy network for sandbox \"51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.505583 containerd[1456]: time="2025-07-06T23:56:28.505557819Z" level=error msg="encountered an error cleaning up failed sandbox \"51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.505878 containerd[1456]: time="2025-07-06T23:56:28.505689121Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf7666946-t2wmg,Uid:0aebe8d7-c736-40fd-a06c-2169cc2c7e1f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.506068 containerd[1456]: time="2025-07-06T23:56:28.505825223Z" level=error msg="Failed to destroy network for sandbox \"b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.506398 kubelet[2500]: E0706 23:56:28.506190 2500 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.506398 kubelet[2500]: E0706 23:56:28.506222 2500 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cf7666946-t2wmg" Jul 6 23:56:28.506398 kubelet[2500]: E0706 23:56:28.506238 2500 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cf7666946-t2wmg" Jul 6 23:56:28.507630 kubelet[2500]: E0706 23:56:28.506266 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cf7666946-t2wmg_calico-apiserver(0aebe8d7-c736-40fd-a06c-2169cc2c7e1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cf7666946-t2wmg_calico-apiserver(0aebe8d7-c736-40fd-a06c-2169cc2c7e1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cf7666946-t2wmg" podUID="0aebe8d7-c736-40fd-a06c-2169cc2c7e1f" Jul 6 23:56:28.508092 containerd[1456]: time="2025-07-06T23:56:28.508033729Z" level=error msg="encountered an error cleaning up failed sandbox \"b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.508208 containerd[1456]: time="2025-07-06T23:56:28.508186523Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8666bcbf65-nclb4,Uid:6a268ee2-aef1-470f-80a4-afd51b82bfec,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.508410 kubelet[2500]: E0706 23:56:28.508389 2500 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.508524 kubelet[2500]: E0706 23:56:28.508507 2500 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8666bcbf65-nclb4" Jul 6 23:56:28.508642 kubelet[2500]: E0706 23:56:28.508576 2500 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8666bcbf65-nclb4" Jul 6 23:56:28.508642 kubelet[2500]: E0706 23:56:28.508609 2500 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8666bcbf65-nclb4_calico-system(6a268ee2-aef1-470f-80a4-afd51b82bfec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8666bcbf65-nclb4_calico-system(6a268ee2-aef1-470f-80a4-afd51b82bfec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8666bcbf65-nclb4" podUID="6a268ee2-aef1-470f-80a4-afd51b82bfec" Jul 6 23:56:28.517858 containerd[1456]: time="2025-07-06T23:56:28.517809024Z" level=error msg="Failed to destroy network for sandbox \"a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.518271 containerd[1456]: time="2025-07-06T23:56:28.518234101Z" level=error msg="encountered an error cleaning up failed sandbox \"a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.518312 containerd[1456]: time="2025-07-06T23:56:28.518286912Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b86b4658f-c279f,Uid:4d8809a0-b8bb-4f42-8da7-d29046d2f152,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.518498 kubelet[2500]: E0706 23:56:28.518448 2500 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:28.518498 kubelet[2500]: E0706 23:56:28.518484 2500 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b86b4658f-c279f" Jul 6 23:56:28.518498 kubelet[2500]: E0706 23:56:28.518500 2500 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-5b86b4658f-c279f" Jul 6 23:56:28.518739 kubelet[2500]: E0706 23:56:28.518533 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b86b4658f-c279f_calico-system(4d8809a0-b8bb-4f42-8da7-d29046d2f152)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b86b4658f-c279f_calico-system(4d8809a0-b8bb-4f42-8da7-d29046d2f152)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b86b4658f-c279f" podUID="4d8809a0-b8bb-4f42-8da7-d29046d2f152" Jul 6 23:56:29.035373 kubelet[2500]: I0706 23:56:29.035325 2500 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" Jul 6 23:56:29.036300 kubelet[2500]: I0706 23:56:29.036281 2500 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" Jul 6 23:56:29.037825 kubelet[2500]: I0706 23:56:29.037424 2500 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" Jul 6 23:56:29.040078 kubelet[2500]: I0706 23:56:29.039758 2500 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" Jul 6 23:56:29.050013 containerd[1456]: time="2025-07-06T23:56:29.049942848Z" level=info msg="StopPodSandbox for \"a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d\"" Jul 6 23:56:29.050449 containerd[1456]: time="2025-07-06T23:56:29.050423742Z" level=info msg="StopPodSandbox for \"51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3\"" Jul 6 23:56:29.050646 containerd[1456]: time="2025-07-06T23:56:29.050622413Z" level=info msg="StopPodSandbox for \"7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee\"" Jul 6 23:56:29.050935 containerd[1456]: time="2025-07-06T23:56:29.049991200Z" level=info msg="StopPodSandbox for \"ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3\"" Jul 6 23:56:29.051886 containerd[1456]: time="2025-07-06T23:56:29.051858448Z" level=info msg="Ensure that sandbox 51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3 in task-service has been cleanup successfully" Jul 6 23:56:29.051972 containerd[1456]: time="2025-07-06T23:56:29.051924174Z" level=info msg="Ensure that sandbox 7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee in task-service has been cleanup successfully" Jul 6 23:56:29.052205 containerd[1456]: time="2025-07-06T23:56:29.052034888Z" level=info msg="Ensure that sandbox ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3 in task-service has been cleanup successfully" Jul 6 23:56:29.052292 kubelet[2500]: I0706 23:56:29.052206 2500 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" Jul 6 23:56:29.053551 containerd[1456]: time="2025-07-06T23:56:29.053484944Z" level=info msg="StopPodSandbox for 
\"c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f\"" Jul 6 23:56:29.055208 containerd[1456]: time="2025-07-06T23:56:29.055180362Z" level=info msg="Ensure that sandbox c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f in task-service has been cleanup successfully" Jul 6 23:56:29.060395 containerd[1456]: time="2025-07-06T23:56:29.051864020Z" level=info msg="Ensure that sandbox a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d in task-service has been cleanup successfully" Jul 6 23:56:29.062591 kubelet[2500]: I0706 23:56:29.062560 2500 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" Jul 6 23:56:29.064955 containerd[1456]: time="2025-07-06T23:56:29.064459272Z" level=info msg="StopPodSandbox for \"b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967\"" Jul 6 23:56:29.064955 containerd[1456]: time="2025-07-06T23:56:29.064640821Z" level=info msg="Ensure that sandbox b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967 in task-service has been cleanup successfully" Jul 6 23:56:29.068424 kubelet[2500]: I0706 23:56:29.068383 2500 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" Jul 6 23:56:29.068747 kubelet[2500]: E0706 23:56:29.068708 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:29.070142 containerd[1456]: time="2025-07-06T23:56:29.070075565Z" level=info msg="StopPodSandbox for \"354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6\"" Jul 6 23:56:29.070428 containerd[1456]: time="2025-07-06T23:56:29.070402313Z" level=info msg="Ensure that sandbox 354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6 in task-service has been cleanup successfully" Jul 6 23:56:29.105179 containerd[1456]: time="2025-07-06T23:56:29.105107593Z" level=error msg="StopPodSandbox for \"354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6\" failed" error="failed to destroy network for sandbox \"354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:29.105548 kubelet[2500]: E0706 23:56:29.105408 2500 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" Jul 6 23:56:29.105548 kubelet[2500]: E0706 23:56:29.105483 2500 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6"} Jul 6 23:56:29.105670 kubelet[2500]: E0706 23:56:29.105552 2500 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"580e7206-665f-4270-aab2-39eaf9dc4990\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:29.105670 kubelet[2500]: E0706 23:56:29.105587 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"580e7206-665f-4270-aab2-39eaf9dc4990\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4rlpb" podUID="580e7206-665f-4270-aab2-39eaf9dc4990" Jul 6 23:56:29.114239 containerd[1456]: time="2025-07-06T23:56:29.113255149Z" level=error msg="StopPodSandbox for \"c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f\" failed" error="failed to destroy network for sandbox \"c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:29.114406 kubelet[2500]: E0706 23:56:29.113555 2500 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" Jul 6 23:56:29.114406 kubelet[2500]: E0706 23:56:29.113611 2500 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f"} Jul 6 23:56:29.114406 kubelet[2500]: E0706 23:56:29.113652 2500 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0911fce1-3f9f-4337-b200-a55b72bf320f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:29.114406 kubelet[2500]: E0706 23:56:29.113676 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0911fce1-3f9f-4337-b200-a55b72bf320f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cf7666946-9ld94" podUID="0911fce1-3f9f-4337-b200-a55b72bf320f" Jul 6 23:56:29.117212 containerd[1456]: time="2025-07-06T23:56:29.117172116Z" level=error msg="StopPodSandbox for \"51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3\" failed" 
error="failed to destroy network for sandbox \"51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:29.117443 kubelet[2500]: E0706 23:56:29.117398 2500 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" Jul 6 23:56:29.117488 kubelet[2500]: E0706 23:56:29.117453 2500 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3"} Jul 6 23:56:29.117523 kubelet[2500]: E0706 23:56:29.117485 2500 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0aebe8d7-c736-40fd-a06c-2169cc2c7e1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:29.117523 kubelet[2500]: E0706 23:56:29.117508 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0aebe8d7-c736-40fd-a06c-2169cc2c7e1f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cf7666946-t2wmg" podUID="0aebe8d7-c736-40fd-a06c-2169cc2c7e1f" Jul 6 23:56:29.125242 containerd[1456]: time="2025-07-06T23:56:29.125161239Z" level=error msg="StopPodSandbox for \"a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d\" failed" error="failed to destroy network for sandbox \"a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:29.125488 kubelet[2500]: E0706 23:56:29.125437 2500 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" Jul 6 23:56:29.125778 kubelet[2500]: E0706 23:56:29.125495 2500 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d"} Jul 6 23:56:29.125778 kubelet[2500]: E0706 
23:56:29.125536 2500 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4d8809a0-b8bb-4f42-8da7-d29046d2f152\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:29.125778 kubelet[2500]: E0706 23:56:29.125560 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4d8809a0-b8bb-4f42-8da7-d29046d2f152\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b86b4658f-c279f" podUID="4d8809a0-b8bb-4f42-8da7-d29046d2f152" Jul 6 23:56:29.126112 containerd[1456]: time="2025-07-06T23:56:29.126061818Z" level=error msg="StopPodSandbox for \"ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3\" failed" error="failed to destroy network for sandbox \"ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:29.126346 kubelet[2500]: E0706 23:56:29.126310 2500 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" Jul 6 23:56:29.126398 kubelet[2500]: E0706 23:56:29.126364 2500 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3"} Jul 6 23:56:29.126398 kubelet[2500]: E0706 23:56:29.126387 2500 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dd93b232-dbe2-459a-a97f-dd73be2c49bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:29.126492 kubelet[2500]: E0706 23:56:29.126406 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dd93b232-dbe2-459a-a97f-dd73be2c49bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-668d6bf9bc-29zf2" podUID="dd93b232-dbe2-459a-a97f-dd73be2c49bc" Jul 6 23:56:29.127484 containerd[1456]: time="2025-07-06T23:56:29.127380813Z" level=error msg="StopPodSandbox for \"7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee\" failed" error="failed to destroy network for sandbox \"7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:29.127566 kubelet[2500]: E0706 23:56:29.127527 2500 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" Jul 6 23:56:29.127566 kubelet[2500]: E0706 23:56:29.127550 2500 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee"} Jul 6 23:56:29.127664 kubelet[2500]: E0706 23:56:29.127572 2500 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e80de9bd-3003-481f-b571-cacd2045a049\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:29.127664 kubelet[2500]: E0706 23:56:29.127591 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e80de9bd-3003-481f-b571-cacd2045a049\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-zk8l4" podUID="e80de9bd-3003-481f-b571-cacd2045a049" Jul 6 23:56:29.130444 containerd[1456]: time="2025-07-06T23:56:29.130388613Z" level=error msg="StopPodSandbox for \"b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967\" failed" error="failed to destroy network for sandbox \"b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:29.130537 kubelet[2500]: E0706 23:56:29.130508 2500 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" Jul 6 
23:56:29.130574 kubelet[2500]: E0706 23:56:29.130539 2500 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967"} Jul 6 23:56:29.130574 kubelet[2500]: E0706 23:56:29.130558 2500 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6a268ee2-aef1-470f-80a4-afd51b82bfec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:29.130642 kubelet[2500]: E0706 23:56:29.130577 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6a268ee2-aef1-470f-80a4-afd51b82bfec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8666bcbf65-nclb4" podUID="6a268ee2-aef1-470f-80a4-afd51b82bfec" Jul 6 23:56:29.835079 systemd[1]: Created slice kubepods-besteffort-pod573e83ed_8e01_4333_9a22_d115fe0e7655.slice - libcontainer container kubepods-besteffort-pod573e83ed_8e01_4333_9a22_d115fe0e7655.slice. Jul 6 23:56:29.838031 containerd[1456]: time="2025-07-06T23:56:29.837977837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8jzdf,Uid:573e83ed-8e01-4333-9a22-d115fe0e7655,Namespace:calico-system,Attempt:0,}" Jul 6 23:56:29.950790 containerd[1456]: time="2025-07-06T23:56:29.950696970Z" level=error msg="Failed to destroy network for sandbox \"36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:29.952832 containerd[1456]: time="2025-07-06T23:56:29.952637479Z" level=error msg="encountered an error cleaning up failed sandbox \"36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:29.953004 containerd[1456]: time="2025-07-06T23:56:29.952857502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8jzdf,Uid:573e83ed-8e01-4333-9a22-d115fe0e7655,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:29.953663 kubelet[2500]: E0706 23:56:29.953167 2500 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:29.953663 kubelet[2500]: E0706 23:56:29.953274 2500 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8jzdf" Jul 6 23:56:29.953663 kubelet[2500]: E0706 23:56:29.953297 2500 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8jzdf" Jul 6 23:56:29.953911 kubelet[2500]: E0706 23:56:29.953366 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8jzdf_calico-system(573e83ed-8e01-4333-9a22-d115fe0e7655)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8jzdf_calico-system(573e83ed-8e01-4333-9a22-d115fe0e7655)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8jzdf" podUID="573e83ed-8e01-4333-9a22-d115fe0e7655" Jul 6 23:56:29.957619 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb-shm.mount: Deactivated successfully. 
Jul 6 23:56:30.070809 kubelet[2500]: I0706 23:56:30.070765 2500 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" Jul 6 23:56:30.071554 containerd[1456]: time="2025-07-06T23:56:30.071505286Z" level=info msg="StopPodSandbox for \"36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb\"" Jul 6 23:56:30.072079 containerd[1456]: time="2025-07-06T23:56:30.071709428Z" level=info msg="Ensure that sandbox 36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb in task-service has been cleanup successfully" Jul 6 23:56:30.111646 containerd[1456]: time="2025-07-06T23:56:30.111462649Z" level=error msg="StopPodSandbox for \"36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb\" failed" error="failed to destroy network for sandbox \"36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:56:30.111915 kubelet[2500]: E0706 23:56:30.111854 2500 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" Jul 6 23:56:30.112281 kubelet[2500]: E0706 23:56:30.111919 2500 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb"} Jul 6 23:56:30.112281 kubelet[2500]: E0706 23:56:30.111970 2500 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"573e83ed-8e01-4333-9a22-d115fe0e7655\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:56:30.112281 kubelet[2500]: E0706 23:56:30.112000 2500 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"573e83ed-8e01-4333-9a22-d115fe0e7655\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8jzdf" podUID="573e83ed-8e01-4333-9a22-d115fe0e7655" Jul 6 23:56:32.485412 systemd[1]: Started sshd@7-10.0.0.101:22-10.0.0.1:41566.service - OpenSSH per-connection server daemon (10.0.0.1:41566). Jul 6 23:56:32.529730 sshd[3723]: Accepted publickey for core from 10.0.0.1 port 41566 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:56:32.531590 sshd[3723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:32.536502 systemd-logind[1443]: New session 8 of user core. 
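The sshd entry above identifies the client key only by its SHA256 fingerprint. OpenSSH computes that string as the unpadded base64 of SHA-256 over the wire-format key blob, i.e. the second field of an authorized_keys line; a small Go program (a hypothetical helper, not a tool on this host) reproduces it for comparison against the log:

package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"os"
)

// Prints the OpenSSH-style fingerprint of a public key blob, matching the
// "Accepted publickey ... SHA256:..." form in the sshd entries above.
func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: fp <base64-key-blob>")
		os.Exit(2)
	}
	blob, err := base64.StdEncoding.DecodeString(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, "bad key blob:", err)
		os.Exit(1)
	}
	sum := sha256.Sum256(blob)
	fmt.Println("SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:]))
}

Fed the base64 blob of the core user's authorized key, it should print SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4.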
Jul 6 23:56:32.544868 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 6 23:56:32.679752 sshd[3723]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:32.685082 systemd[1]: sshd@7-10.0.0.101:22-10.0.0.1:41566.service: Deactivated successfully. Jul 6 23:56:32.688056 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:56:32.691092 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:56:32.692707 systemd-logind[1443]: Removed session 8. Jul 6 23:56:36.845338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2151324381.mount: Deactivated successfully. Jul 6 23:56:37.690613 systemd[1]: Started sshd@8-10.0.0.101:22-10.0.0.1:41574.service - OpenSSH per-connection server daemon (10.0.0.1:41574). Jul 6 23:56:37.751177 containerd[1456]: time="2025-07-06T23:56:37.751121122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:37.752276 containerd[1456]: time="2025-07-06T23:56:37.752234523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 6 23:56:37.754014 containerd[1456]: time="2025-07-06T23:56:37.753984094Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:37.756509 containerd[1456]: time="2025-07-06T23:56:37.756482148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:37.757213 containerd[1456]: time="2025-07-06T23:56:37.757187238Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 9.728262406s" Jul 6 23:56:37.757256 containerd[1456]: time="2025-07-06T23:56:37.757220903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 6 23:56:37.767744 containerd[1456]: time="2025-07-06T23:56:37.767685785Z" level=info msg="CreateContainer within sandbox \"2284be865e3339dbd61747c76841538a3ea100266d9e3b73bdd3146c5cfb0e9c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 6 23:56:37.784114 sshd[3750]: Accepted publickey for core from 10.0.0.1 port 41574 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:56:37.786396 sshd[3750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:37.788724 containerd[1456]: time="2025-07-06T23:56:37.788678097Z" level=info msg="CreateContainer within sandbox \"2284be865e3339dbd61747c76841538a3ea100266d9e3b73bdd3146c5cfb0e9c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"75ba520ebabe471cffaa6a8961feb5b8877ba95910219f78a924754d853dd040\"" Jul 6 23:56:37.789342 containerd[1456]: time="2025-07-06T23:56:37.789244903Z" level=info msg="StartContainer for \"75ba520ebabe471cffaa6a8961feb5b8877ba95910219f78a924754d853dd040\"" Jul 6 23:56:37.793232 systemd-logind[1443]: New session 9 of user core. 
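Taking the Pulled entry above at face value, the effective transfer rate is easy to check: 158500025 bytes in 9.728262406s works out to roughly 16.3 MB/s.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures from the Pulled entry above, as reported by containerd.
	const imageBytes = 158500025
	elapsed, _ := time.ParseDuration("9.728262406s") // literal from the log
	fmt.Printf("%.1f MB/s\n", float64(imageBytes)/elapsed.Seconds()/1e6) // prints "16.3 MB/s"
}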
Jul 6 23:56:37.801216 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:56:37.854226 systemd[1]: Started cri-containerd-75ba520ebabe471cffaa6a8961feb5b8877ba95910219f78a924754d853dd040.scope - libcontainer container 75ba520ebabe471cffaa6a8961feb5b8877ba95910219f78a924754d853dd040. Jul 6 23:56:38.050117 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 6 23:56:38.050971 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jul 6 23:56:38.158467 containerd[1456]: time="2025-07-06T23:56:38.158177345Z" level=info msg="StartContainer for \"75ba520ebabe471cffaa6a8961feb5b8877ba95910219f78a924754d853dd040\" returns successfully" Jul 6 23:56:38.167061 sshd[3750]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:38.174841 systemd[1]: sshd@8-10.0.0.101:22-10.0.0.1:41574.service: Deactivated successfully. Jul 6 23:56:38.174981 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:56:38.179695 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:56:38.184340 systemd-logind[1443]: Removed session 9. Jul 6 23:56:38.195966 kubelet[2500]: I0706 23:56:38.195872 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4r4g2" podStartSLOduration=1.294352956 podStartE2EDuration="24.195850792s" podCreationTimestamp="2025-07-06 23:56:14 +0000 UTC" firstStartedPulling="2025-07-06 23:56:14.856303068 +0000 UTC m=+18.130337116" lastFinishedPulling="2025-07-06 23:56:37.757800904 +0000 UTC m=+41.031834952" observedRunningTime="2025-07-06 23:56:38.195785156 +0000 UTC m=+41.469819204" watchObservedRunningTime="2025-07-06 23:56:38.195850792 +0000 UTC m=+41.469884830" Jul 6 23:56:38.238725 containerd[1456]: time="2025-07-06T23:56:38.238646366Z" level=info msg="StopPodSandbox for \"b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967\"" Jul 6 23:56:38.321898 systemd[1]: run-containerd-runc-k8s.io-75ba520ebabe471cffaa6a8961feb5b8877ba95910219f78a924754d853dd040-runc.Jip58c.mount: Deactivated successfully. Jul 6 23:56:38.432856 containerd[1456]: 2025-07-06 23:56:38.324 [INFO][3830] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" Jul 6 23:56:38.432856 containerd[1456]: 2025-07-06 23:56:38.326 [INFO][3830] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" iface="eth0" netns="/var/run/netns/cni-6bbdffc9-bbd7-4fcb-59cd-1eedb8a4af4b" Jul 6 23:56:38.432856 containerd[1456]: 2025-07-06 23:56:38.326 [INFO][3830] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" iface="eth0" netns="/var/run/netns/cni-6bbdffc9-bbd7-4fcb-59cd-1eedb8a4af4b" Jul 6 23:56:38.432856 containerd[1456]: 2025-07-06 23:56:38.327 [INFO][3830] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" iface="eth0" netns="/var/run/netns/cni-6bbdffc9-bbd7-4fcb-59cd-1eedb8a4af4b" Jul 6 23:56:38.432856 containerd[1456]: 2025-07-06 23:56:38.327 [INFO][3830] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" Jul 6 23:56:38.432856 containerd[1456]: 2025-07-06 23:56:38.327 [INFO][3830] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" Jul 6 23:56:38.432856 containerd[1456]: 2025-07-06 23:56:38.411 [INFO][3858] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" HandleID="k8s-pod-network.b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" Workload="localhost-k8s-whisker--8666bcbf65--nclb4-eth0" Jul 6 23:56:38.432856 containerd[1456]: 2025-07-06 23:56:38.412 [INFO][3858] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:38.432856 containerd[1456]: 2025-07-06 23:56:38.412 [INFO][3858] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:38.432856 containerd[1456]: 2025-07-06 23:56:38.421 [WARNING][3858] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" HandleID="k8s-pod-network.b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" Workload="localhost-k8s-whisker--8666bcbf65--nclb4-eth0" Jul 6 23:56:38.432856 containerd[1456]: 2025-07-06 23:56:38.421 [INFO][3858] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" HandleID="k8s-pod-network.b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" Workload="localhost-k8s-whisker--8666bcbf65--nclb4-eth0" Jul 6 23:56:38.432856 containerd[1456]: 2025-07-06 23:56:38.425 [INFO][3858] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:38.432856 containerd[1456]: 2025-07-06 23:56:38.429 [INFO][3830] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" Jul 6 23:56:38.433482 containerd[1456]: time="2025-07-06T23:56:38.433060619Z" level=info msg="TearDown network for sandbox \"b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967\" successfully" Jul 6 23:56:38.433482 containerd[1456]: time="2025-07-06T23:56:38.433091890Z" level=info msg="StopPodSandbox for \"b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967\" returns successfully" Jul 6 23:56:38.436587 systemd[1]: run-netns-cni\x2d6bbdffc9\x2dbbd7\x2d4fcb\x2d59cd\x2d1eedb8a4af4b.mount: Deactivated successfully. 
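The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Replaying the arithmetic with the logged timestamps:

package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func ts(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the pod_startup_latency_tracker entry above.
	created := ts("2025-07-06 23:56:14 +0000 UTC")
	pullStart := ts("2025-07-06 23:56:14.856303068 +0000 UTC")
	pullEnd := ts("2025-07-06 23:56:37.757800904 +0000 UTC")
	watched := ts("2025-07-06 23:56:38.195850792 +0000 UTC")

	e2e := watched.Sub(created)         // podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // SLO duration excludes the pull window
	fmt.Println(e2e, slo)               // 24.195850792s 1.294352956s
}

Both printed values match the entry exactly.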
Jul 6 23:56:38.505349 kubelet[2500]: I0706 23:56:38.505272 2500 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6a268ee2-aef1-470f-80a4-afd51b82bfec-whisker-backend-key-pair\") pod \"6a268ee2-aef1-470f-80a4-afd51b82bfec\" (UID: \"6a268ee2-aef1-470f-80a4-afd51b82bfec\") " Jul 6 23:56:38.505349 kubelet[2500]: I0706 23:56:38.505342 2500 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a268ee2-aef1-470f-80a4-afd51b82bfec-whisker-ca-bundle\") pod \"6a268ee2-aef1-470f-80a4-afd51b82bfec\" (UID: \"6a268ee2-aef1-470f-80a4-afd51b82bfec\") " Jul 6 23:56:38.505616 kubelet[2500]: I0706 23:56:38.505378 2500 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btmj2\" (UniqueName: \"kubernetes.io/projected/6a268ee2-aef1-470f-80a4-afd51b82bfec-kube-api-access-btmj2\") pod \"6a268ee2-aef1-470f-80a4-afd51b82bfec\" (UID: \"6a268ee2-aef1-470f-80a4-afd51b82bfec\") " Jul 6 23:56:38.510951 kubelet[2500]: I0706 23:56:38.510897 2500 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a268ee2-aef1-470f-80a4-afd51b82bfec-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6a268ee2-aef1-470f-80a4-afd51b82bfec" (UID: "6a268ee2-aef1-470f-80a4-afd51b82bfec"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 6 23:56:38.517740 kubelet[2500]: I0706 23:56:38.515838 2500 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a268ee2-aef1-470f-80a4-afd51b82bfec-kube-api-access-btmj2" (OuterVolumeSpecName: "kube-api-access-btmj2") pod "6a268ee2-aef1-470f-80a4-afd51b82bfec" (UID: "6a268ee2-aef1-470f-80a4-afd51b82bfec"). InnerVolumeSpecName "kube-api-access-btmj2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 6 23:56:38.517740 kubelet[2500]: I0706 23:56:38.515838 2500 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a268ee2-aef1-470f-80a4-afd51b82bfec-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6a268ee2-aef1-470f-80a4-afd51b82bfec" (UID: "6a268ee2-aef1-470f-80a4-afd51b82bfec"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 6 23:56:38.517660 systemd[1]: var-lib-kubelet-pods-6a268ee2\x2daef1\x2d470f\x2d80a4\x2dafd51b82bfec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbtmj2.mount: Deactivated successfully. Jul 6 23:56:38.518003 systemd[1]: var-lib-kubelet-pods-6a268ee2\x2daef1\x2d470f\x2d80a4\x2dafd51b82bfec-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
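The two .mount unit names above are systemd's escaping of the kubelet volume paths: "/" becomes "-", and bytes such as "-" and "~" that fall outside [a-zA-Z0-9:_.] become \xXX. A simplified re-implementation — escapeMountUnit is an illustrative name; systemd-escape(1) with --path --suffix=mount is the canonical tool — reproduces the projected-volume unit seen above:

package main

import (
	"fmt"
	"strings"
)

// escapeMountUnit applies systemd's path escaping, enough to reproduce
// the .mount unit names in the entries above.
func escapeMountUnit(path string) string {
	p := strings.Trim(path, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_',
			c == '.' && i > 0:
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // e.g. "-" -> \x2d, "~" -> \x7e
		}
	}
	return b.String() + ".mount"
}

func main() {
	fmt.Println(escapeMountUnit(
		"/var/lib/kubelet/pods/6a268ee2-aef1-470f-80a4-afd51b82bfec/volumes/kubernetes.io~projected/kube-api-access-btmj2"))
}

The output matches the unit name logged when the projected volume was unmounted.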
Jul 6 23:56:38.606110 kubelet[2500]: I0706 23:56:38.605932 2500 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a268ee2-aef1-470f-80a4-afd51b82bfec-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 6 23:56:38.606110 kubelet[2500]: I0706 23:56:38.605974 2500 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-btmj2\" (UniqueName: \"kubernetes.io/projected/6a268ee2-aef1-470f-80a4-afd51b82bfec-kube-api-access-btmj2\") on node \"localhost\" DevicePath \"\"" Jul 6 23:56:38.606110 kubelet[2500]: I0706 23:56:38.605990 2500 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6a268ee2-aef1-470f-80a4-afd51b82bfec-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 6 23:56:38.837457 systemd[1]: Removed slice kubepods-besteffort-pod6a268ee2_aef1_470f_80a4_afd51b82bfec.slice - libcontainer container kubepods-besteffort-pod6a268ee2_aef1_470f_80a4_afd51b82bfec.slice. Jul 6 23:56:39.233223 systemd[1]: Created slice kubepods-besteffort-podfb02e9ec_63a2_46c5_9f81_e4c9b5192890.slice - libcontainer container kubepods-besteffort-podfb02e9ec_63a2_46c5_9f81_e4c9b5192890.slice. Jul 6 23:56:39.311374 kubelet[2500]: I0706 23:56:39.311304 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fb02e9ec-63a2-46c5-9f81-e4c9b5192890-whisker-backend-key-pair\") pod \"whisker-5489958cf9-jnrjz\" (UID: \"fb02e9ec-63a2-46c5-9f81-e4c9b5192890\") " pod="calico-system/whisker-5489958cf9-jnrjz" Jul 6 23:56:39.311374 kubelet[2500]: I0706 23:56:39.311365 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hknx2\" (UniqueName: \"kubernetes.io/projected/fb02e9ec-63a2-46c5-9f81-e4c9b5192890-kube-api-access-hknx2\") pod \"whisker-5489958cf9-jnrjz\" (UID: \"fb02e9ec-63a2-46c5-9f81-e4c9b5192890\") " pod="calico-system/whisker-5489958cf9-jnrjz" Jul 6 23:56:39.311374 kubelet[2500]: I0706 23:56:39.311388 2500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb02e9ec-63a2-46c5-9f81-e4c9b5192890-whisker-ca-bundle\") pod \"whisker-5489958cf9-jnrjz\" (UID: \"fb02e9ec-63a2-46c5-9f81-e4c9b5192890\") " pod="calico-system/whisker-5489958cf9-jnrjz" Jul 6 23:56:39.536705 containerd[1456]: time="2025-07-06T23:56:39.536641947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5489958cf9-jnrjz,Uid:fb02e9ec-63a2-46c5-9f81-e4c9b5192890,Namespace:calico-system,Attempt:0,}" Jul 6 23:56:39.731818 kernel: bpftool[4050]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 6 23:56:39.824646 systemd-networkd[1380]: cali3ba910cb2d5: Link UP Jul 6 23:56:39.824975 systemd-networkd[1380]: cali3ba910cb2d5: Gained carrier Jul 6 23:56:39.830135 containerd[1456]: time="2025-07-06T23:56:39.830011570Z" level=info msg="StopPodSandbox for \"a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d\"" Jul 6 23:56:39.844769 containerd[1456]: 2025-07-06 23:56:39.748 [INFO][4026] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5489958cf9--jnrjz-eth0 whisker-5489958cf9- calico-system fb02e9ec-63a2-46c5-9f81-e4c9b5192890 982 0 2025-07-06 23:56:39 +0000 UTC map[app.kubernetes.io/name:whisker 
k8s-app:whisker pod-template-hash:5489958cf9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5489958cf9-jnrjz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3ba910cb2d5 [] [] }} ContainerID="06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f" Namespace="calico-system" Pod="whisker-5489958cf9-jnrjz" WorkloadEndpoint="localhost-k8s-whisker--5489958cf9--jnrjz-" Jul 6 23:56:39.844769 containerd[1456]: 2025-07-06 23:56:39.748 [INFO][4026] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f" Namespace="calico-system" Pod="whisker-5489958cf9-jnrjz" WorkloadEndpoint="localhost-k8s-whisker--5489958cf9--jnrjz-eth0" Jul 6 23:56:39.844769 containerd[1456]: 2025-07-06 23:56:39.781 [INFO][4054] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f" HandleID="k8s-pod-network.06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f" Workload="localhost-k8s-whisker--5489958cf9--jnrjz-eth0" Jul 6 23:56:39.844769 containerd[1456]: 2025-07-06 23:56:39.781 [INFO][4054] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f" HandleID="k8s-pod-network.06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f" Workload="localhost-k8s-whisker--5489958cf9--jnrjz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5d40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5489958cf9-jnrjz", "timestamp":"2025-07-06 23:56:39.781435727 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:39.844769 containerd[1456]: 2025-07-06 23:56:39.781 [INFO][4054] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:39.844769 containerd[1456]: 2025-07-06 23:56:39.782 [INFO][4054] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:56:39.844769 containerd[1456]: 2025-07-06 23:56:39.782 [INFO][4054] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:56:39.844769 containerd[1456]: 2025-07-06 23:56:39.788 [INFO][4054] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f" host="localhost" Jul 6 23:56:39.844769 containerd[1456]: 2025-07-06 23:56:39.795 [INFO][4054] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:56:39.844769 containerd[1456]: 2025-07-06 23:56:39.799 [INFO][4054] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:56:39.844769 containerd[1456]: 2025-07-06 23:56:39.801 [INFO][4054] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:56:39.844769 containerd[1456]: 2025-07-06 23:56:39.802 [INFO][4054] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:56:39.844769 containerd[1456]: 2025-07-06 23:56:39.802 [INFO][4054] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f" host="localhost" Jul 6 23:56:39.844769 containerd[1456]: 2025-07-06 23:56:39.804 [INFO][4054] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f Jul 6 23:56:39.844769 containerd[1456]: 2025-07-06 23:56:39.807 [INFO][4054] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f" host="localhost" Jul 6 23:56:39.844769 containerd[1456]: 2025-07-06 23:56:39.813 [INFO][4054] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f" host="localhost" Jul 6 23:56:39.844769 containerd[1456]: 2025-07-06 23:56:39.813 [INFO][4054] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f" host="localhost" Jul 6 23:56:39.844769 containerd[1456]: 2025-07-06 23:56:39.813 [INFO][4054] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
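The ipam walk above — look up host affinity, load block 192.168.88.128/26, claim the first free address — hands this pod 192.168.88.129, and the two sandboxes set up below receive .130 and .131 in turn. A toy allocator sketch (not Calico's implementation, which persists claims and block affinities in its datastore) shows the same first-free ordering:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	used := map[netip.Addr]bool{block.Addr(): true} // skip the block's own address

	// claim hands out the first free address in the affine block.
	claim := func() netip.Addr {
		for a := block.Addr(); block.Contains(a); a = a.Next() {
			if !used[a] {
				used[a] = true
				return a
			}
		}
		panic("block exhausted")
	}

	fmt.Println(claim(), claim(), claim()) // 192.168.88.129 192.168.88.130 192.168.88.131
}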
Jul 6 23:56:39.844769 containerd[1456]: 2025-07-06 23:56:39.813 [INFO][4054] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f" HandleID="k8s-pod-network.06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f" Workload="localhost-k8s-whisker--5489958cf9--jnrjz-eth0" Jul 6 23:56:39.845635 containerd[1456]: 2025-07-06 23:56:39.816 [INFO][4026] cni-plugin/k8s.go 418: Populated endpoint ContainerID="06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f" Namespace="calico-system" Pod="whisker-5489958cf9-jnrjz" WorkloadEndpoint="localhost-k8s-whisker--5489958cf9--jnrjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5489958cf9--jnrjz-eth0", GenerateName:"whisker-5489958cf9-", Namespace:"calico-system", SelfLink:"", UID:"fb02e9ec-63a2-46c5-9f81-e4c9b5192890", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5489958cf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5489958cf9-jnrjz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3ba910cb2d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:39.845635 containerd[1456]: 2025-07-06 23:56:39.816 [INFO][4026] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f" Namespace="calico-system" Pod="whisker-5489958cf9-jnrjz" WorkloadEndpoint="localhost-k8s-whisker--5489958cf9--jnrjz-eth0" Jul 6 23:56:39.845635 containerd[1456]: 2025-07-06 23:56:39.816 [INFO][4026] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3ba910cb2d5 ContainerID="06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f" Namespace="calico-system" Pod="whisker-5489958cf9-jnrjz" WorkloadEndpoint="localhost-k8s-whisker--5489958cf9--jnrjz-eth0" Jul 6 23:56:39.845635 containerd[1456]: 2025-07-06 23:56:39.825 [INFO][4026] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f" Namespace="calico-system" Pod="whisker-5489958cf9-jnrjz" WorkloadEndpoint="localhost-k8s-whisker--5489958cf9--jnrjz-eth0" Jul 6 23:56:39.845635 containerd[1456]: 2025-07-06 23:56:39.825 [INFO][4026] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f" Namespace="calico-system" Pod="whisker-5489958cf9-jnrjz" WorkloadEndpoint="localhost-k8s-whisker--5489958cf9--jnrjz-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5489958cf9--jnrjz-eth0", GenerateName:"whisker-5489958cf9-", Namespace:"calico-system", SelfLink:"", UID:"fb02e9ec-63a2-46c5-9f81-e4c9b5192890", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5489958cf9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f", Pod:"whisker-5489958cf9-jnrjz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3ba910cb2d5", MAC:"22:e8:0a:43:46:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:39.845635 containerd[1456]: 2025-07-06 23:56:39.838 [INFO][4026] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f" Namespace="calico-system" Pod="whisker-5489958cf9-jnrjz" WorkloadEndpoint="localhost-k8s-whisker--5489958cf9--jnrjz-eth0" Jul 6 23:56:39.874981 containerd[1456]: time="2025-07-06T23:56:39.873175003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:39.874981 containerd[1456]: time="2025-07-06T23:56:39.873288822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:39.874981 containerd[1456]: time="2025-07-06T23:56:39.873303339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:39.874981 containerd[1456]: time="2025-07-06T23:56:39.873480298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:39.896942 systemd[1]: Started cri-containerd-06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f.scope - libcontainer container 06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f. Jul 6 23:56:39.913088 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:56:39.944176 containerd[1456]: 2025-07-06 23:56:39.898 [INFO][4075] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" Jul 6 23:56:39.944176 containerd[1456]: 2025-07-06 23:56:39.898 [INFO][4075] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" iface="eth0" netns="/var/run/netns/cni-c5210359-3a83-e2e6-512b-4aa9ebddf33b" Jul 6 23:56:39.944176 containerd[1456]: 2025-07-06 23:56:39.898 [INFO][4075] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" iface="eth0" netns="/var/run/netns/cni-c5210359-3a83-e2e6-512b-4aa9ebddf33b" Jul 6 23:56:39.944176 containerd[1456]: 2025-07-06 23:56:39.898 [INFO][4075] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" iface="eth0" netns="/var/run/netns/cni-c5210359-3a83-e2e6-512b-4aa9ebddf33b" Jul 6 23:56:39.944176 containerd[1456]: 2025-07-06 23:56:39.901 [INFO][4075] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" Jul 6 23:56:39.944176 containerd[1456]: 2025-07-06 23:56:39.901 [INFO][4075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" Jul 6 23:56:39.944176 containerd[1456]: 2025-07-06 23:56:39.930 [INFO][4120] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" HandleID="k8s-pod-network.a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" Workload="localhost-k8s-calico--kube--controllers--5b86b4658f--c279f-eth0" Jul 6 23:56:39.944176 containerd[1456]: 2025-07-06 23:56:39.930 [INFO][4120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:39.944176 containerd[1456]: 2025-07-06 23:56:39.930 [INFO][4120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:39.944176 containerd[1456]: 2025-07-06 23:56:39.935 [WARNING][4120] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" HandleID="k8s-pod-network.a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" Workload="localhost-k8s-calico--kube--controllers--5b86b4658f--c279f-eth0" Jul 6 23:56:39.944176 containerd[1456]: 2025-07-06 23:56:39.936 [INFO][4120] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" HandleID="k8s-pod-network.a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" Workload="localhost-k8s-calico--kube--controllers--5b86b4658f--c279f-eth0" Jul 6 23:56:39.944176 containerd[1456]: 2025-07-06 23:56:39.937 [INFO][4120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:39.944176 containerd[1456]: 2025-07-06 23:56:39.940 [INFO][4075] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" Jul 6 23:56:39.944737 containerd[1456]: time="2025-07-06T23:56:39.944347216Z" level=info msg="TearDown network for sandbox \"a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d\" successfully" Jul 6 23:56:39.944737 containerd[1456]: time="2025-07-06T23:56:39.944401119Z" level=info msg="StopPodSandbox for \"a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d\" returns successfully" Jul 6 23:56:39.945821 containerd[1456]: time="2025-07-06T23:56:39.945771602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b86b4658f-c279f,Uid:4d8809a0-b8bb-4f42-8da7-d29046d2f152,Namespace:calico-system,Attempt:1,}" Jul 6 23:56:39.954955 containerd[1456]: time="2025-07-06T23:56:39.954915563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5489958cf9-jnrjz,Uid:fb02e9ec-63a2-46c5-9f81-e4c9b5192890,Namespace:calico-system,Attempt:0,} returns sandbox id \"06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f\"" Jul 6 23:56:39.956938 containerd[1456]: time="2025-07-06T23:56:39.956907325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 6 23:56:40.047881 systemd-networkd[1380]: vxlan.calico: Link UP Jul 6 23:56:40.047894 systemd-networkd[1380]: vxlan.calico: Gained carrier Jul 6 23:56:40.079867 systemd-networkd[1380]: cali5aab3ab07c9: Link UP Jul 6 23:56:40.080058 systemd-networkd[1380]: cali5aab3ab07c9: Gained carrier Jul 6 23:56:40.097372 containerd[1456]: 2025-07-06 23:56:39.996 [INFO][4138] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5b86b4658f--c279f-eth0 calico-kube-controllers-5b86b4658f- calico-system 4d8809a0-b8bb-4f42-8da7-d29046d2f152 990 0 2025-07-06 23:56:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5b86b4658f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5b86b4658f-c279f eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5aab3ab07c9 [] [] }} ContainerID="82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88" Namespace="calico-system" Pod="calico-kube-controllers-5b86b4658f-c279f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b86b4658f--c279f-" Jul 6 23:56:40.097372 containerd[1456]: 2025-07-06 23:56:39.996 [INFO][4138] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88" Namespace="calico-system" Pod="calico-kube-controllers-5b86b4658f-c279f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b86b4658f--c279f-eth0" Jul 6 23:56:40.097372 containerd[1456]: 2025-07-06 23:56:40.033 [INFO][4167] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88" HandleID="k8s-pod-network.82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88" Workload="localhost-k8s-calico--kube--controllers--5b86b4658f--c279f-eth0" Jul 6 23:56:40.097372 containerd[1456]: 2025-07-06 23:56:40.034 [INFO][4167] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88" 
HandleID="k8s-pod-network.82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88" Workload="localhost-k8s-calico--kube--controllers--5b86b4658f--c279f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001394f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5b86b4658f-c279f", "timestamp":"2025-07-06 23:56:40.033693155 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:40.097372 containerd[1456]: 2025-07-06 23:56:40.034 [INFO][4167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:40.097372 containerd[1456]: 2025-07-06 23:56:40.034 [INFO][4167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:40.097372 containerd[1456]: 2025-07-06 23:56:40.034 [INFO][4167] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:56:40.097372 containerd[1456]: 2025-07-06 23:56:40.041 [INFO][4167] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88" host="localhost" Jul 6 23:56:40.097372 containerd[1456]: 2025-07-06 23:56:40.045 [INFO][4167] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:56:40.097372 containerd[1456]: 2025-07-06 23:56:40.052 [INFO][4167] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:56:40.097372 containerd[1456]: 2025-07-06 23:56:40.054 [INFO][4167] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:56:40.097372 containerd[1456]: 2025-07-06 23:56:40.057 [INFO][4167] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:56:40.097372 containerd[1456]: 2025-07-06 23:56:40.058 [INFO][4167] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88" host="localhost" Jul 6 23:56:40.097372 containerd[1456]: 2025-07-06 23:56:40.059 [INFO][4167] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88 Jul 6 23:56:40.097372 containerd[1456]: 2025-07-06 23:56:40.064 [INFO][4167] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88" host="localhost" Jul 6 23:56:40.097372 containerd[1456]: 2025-07-06 23:56:40.071 [INFO][4167] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88" host="localhost" Jul 6 23:56:40.097372 containerd[1456]: 2025-07-06 23:56:40.071 [INFO][4167] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88" host="localhost" Jul 6 23:56:40.097372 containerd[1456]: 2025-07-06 23:56:40.071 [INFO][4167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:56:40.097372 containerd[1456]: 2025-07-06 23:56:40.072 [INFO][4167] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88" HandleID="k8s-pod-network.82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88" Workload="localhost-k8s-calico--kube--controllers--5b86b4658f--c279f-eth0" Jul 6 23:56:40.098150 containerd[1456]: 2025-07-06 23:56:40.075 [INFO][4138] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88" Namespace="calico-system" Pod="calico-kube-controllers-5b86b4658f-c279f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b86b4658f--c279f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b86b4658f--c279f-eth0", GenerateName:"calico-kube-controllers-5b86b4658f-", Namespace:"calico-system", SelfLink:"", UID:"4d8809a0-b8bb-4f42-8da7-d29046d2f152", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b86b4658f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5b86b4658f-c279f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5aab3ab07c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:40.098150 containerd[1456]: 2025-07-06 23:56:40.076 [INFO][4138] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88" Namespace="calico-system" Pod="calico-kube-controllers-5b86b4658f-c279f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b86b4658f--c279f-eth0" Jul 6 23:56:40.098150 containerd[1456]: 2025-07-06 23:56:40.076 [INFO][4138] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5aab3ab07c9 ContainerID="82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88" Namespace="calico-system" Pod="calico-kube-controllers-5b86b4658f-c279f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b86b4658f--c279f-eth0" Jul 6 23:56:40.098150 containerd[1456]: 2025-07-06 23:56:40.080 [INFO][4138] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88" Namespace="calico-system" Pod="calico-kube-controllers-5b86b4658f-c279f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b86b4658f--c279f-eth0" Jul 6 23:56:40.098150 containerd[1456]: 2025-07-06 23:56:40.080 [INFO][4138] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88" Namespace="calico-system" Pod="calico-kube-controllers-5b86b4658f-c279f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b86b4658f--c279f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b86b4658f--c279f-eth0", GenerateName:"calico-kube-controllers-5b86b4658f-", Namespace:"calico-system", SelfLink:"", UID:"4d8809a0-b8bb-4f42-8da7-d29046d2f152", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b86b4658f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88", Pod:"calico-kube-controllers-5b86b4658f-c279f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5aab3ab07c9", MAC:"52:d0:cb:c2:a5:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:40.098150 containerd[1456]: 2025-07-06 23:56:40.093 [INFO][4138] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88" Namespace="calico-system" Pod="calico-kube-controllers-5b86b4658f-c279f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5b86b4658f--c279f-eth0" Jul 6 23:56:40.119730 containerd[1456]: time="2025-07-06T23:56:40.119538896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:40.119730 containerd[1456]: time="2025-07-06T23:56:40.119615663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:40.119730 containerd[1456]: time="2025-07-06T23:56:40.119630892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:40.119945 containerd[1456]: time="2025-07-06T23:56:40.119736114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:40.137879 systemd[1]: Started cri-containerd-82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88.scope - libcontainer container 82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88. 
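Each successful sandbox setup above leaves a host-side veth (cali3ba910cb2d5, cali5aab3ab07c9) alongside the shared vxlan.calico device, and systemd-networkd reports carrier for each. A quick way to enumerate them from Go's standard library, run on the node itself:

package main

import (
	"fmt"
	"net"
	"strings"
)

// Lists the Calico-managed links whose carrier events appear above:
// vxlan.calico plus the per-pod caliXXXX veths.
func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		if ifc.Name == "vxlan.calico" || strings.HasPrefix(ifc.Name, "cali") {
			fmt.Println(ifc.Name, ifc.Flags) // e.g. "cali3ba910cb2d5 up|broadcast|multicast"
		}
	}
}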
Jul 6 23:56:40.151691 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:56:40.180059 containerd[1456]: time="2025-07-06T23:56:40.180008923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b86b4658f-c279f,Uid:4d8809a0-b8bb-4f42-8da7-d29046d2f152,Namespace:calico-system,Attempt:1,} returns sandbox id \"82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88\"" Jul 6 23:56:40.192544 systemd[1]: run-netns-cni\x2dc5210359\x2d3a83\x2de2e6\x2d512b\x2d4aa9ebddf33b.mount: Deactivated successfully. Jul 6 23:56:40.830306 containerd[1456]: time="2025-07-06T23:56:40.829503458Z" level=info msg="StopPodSandbox for \"51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3\"" Jul 6 23:56:40.832390 kubelet[2500]: I0706 23:56:40.832330 2500 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a268ee2-aef1-470f-80a4-afd51b82bfec" path="/var/lib/kubelet/pods/6a268ee2-aef1-470f-80a4-afd51b82bfec/volumes" Jul 6 23:56:40.902389 containerd[1456]: 2025-07-06 23:56:40.870 [INFO][4321] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" Jul 6 23:56:40.902389 containerd[1456]: 2025-07-06 23:56:40.870 [INFO][4321] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" iface="eth0" netns="/var/run/netns/cni-3232a92b-d9dd-4964-7b26-dbfa6688dfd9" Jul 6 23:56:40.902389 containerd[1456]: 2025-07-06 23:56:40.871 [INFO][4321] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" iface="eth0" netns="/var/run/netns/cni-3232a92b-d9dd-4964-7b26-dbfa6688dfd9" Jul 6 23:56:40.902389 containerd[1456]: 2025-07-06 23:56:40.871 [INFO][4321] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" iface="eth0" netns="/var/run/netns/cni-3232a92b-d9dd-4964-7b26-dbfa6688dfd9" Jul 6 23:56:40.902389 containerd[1456]: 2025-07-06 23:56:40.871 [INFO][4321] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" Jul 6 23:56:40.902389 containerd[1456]: 2025-07-06 23:56:40.871 [INFO][4321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" Jul 6 23:56:40.902389 containerd[1456]: 2025-07-06 23:56:40.889 [INFO][4330] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" HandleID="k8s-pod-network.51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" Workload="localhost-k8s-calico--apiserver--5cf7666946--t2wmg-eth0" Jul 6 23:56:40.902389 containerd[1456]: 2025-07-06 23:56:40.889 [INFO][4330] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:40.902389 containerd[1456]: 2025-07-06 23:56:40.889 [INFO][4330] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:40.902389 containerd[1456]: 2025-07-06 23:56:40.895 [WARNING][4330] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" HandleID="k8s-pod-network.51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" Workload="localhost-k8s-calico--apiserver--5cf7666946--t2wmg-eth0" Jul 6 23:56:40.902389 containerd[1456]: 2025-07-06 23:56:40.895 [INFO][4330] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" HandleID="k8s-pod-network.51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" Workload="localhost-k8s-calico--apiserver--5cf7666946--t2wmg-eth0" Jul 6 23:56:40.902389 containerd[1456]: 2025-07-06 23:56:40.896 [INFO][4330] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:40.902389 containerd[1456]: 2025-07-06 23:56:40.899 [INFO][4321] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" Jul 6 23:56:40.902929 containerd[1456]: time="2025-07-06T23:56:40.902578403Z" level=info msg="TearDown network for sandbox \"51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3\" successfully" Jul 6 23:56:40.902929 containerd[1456]: time="2025-07-06T23:56:40.902619371Z" level=info msg="StopPodSandbox for \"51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3\" returns successfully" Jul 6 23:56:40.904342 containerd[1456]: time="2025-07-06T23:56:40.904291992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf7666946-t2wmg,Uid:0aebe8d7-c736-40fd-a06c-2169cc2c7e1f,Namespace:calico-apiserver,Attempt:1,}" Jul 6 23:56:40.905980 systemd[1]: run-netns-cni\x2d3232a92b\x2dd9dd\x2d4964\x2d7b26\x2ddbfa6688dfd9.mount: Deactivated successfully. Jul 6 23:56:41.013362 systemd-networkd[1380]: cali4c05b9f21fb: Link UP Jul 6 23:56:41.014675 systemd-networkd[1380]: cali4c05b9f21fb: Gained carrier Jul 6 23:56:41.031276 containerd[1456]: 2025-07-06 23:56:40.953 [INFO][4339] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5cf7666946--t2wmg-eth0 calico-apiserver-5cf7666946- calico-apiserver 0aebe8d7-c736-40fd-a06c-2169cc2c7e1f 1000 0 2025-07-06 23:56:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cf7666946 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5cf7666946-t2wmg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4c05b9f21fb [] [] }} ContainerID="f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9" Namespace="calico-apiserver" Pod="calico-apiserver-5cf7666946-t2wmg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cf7666946--t2wmg-" Jul 6 23:56:41.031276 containerd[1456]: 2025-07-06 23:56:40.954 [INFO][4339] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9" Namespace="calico-apiserver" Pod="calico-apiserver-5cf7666946-t2wmg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cf7666946--t2wmg-eth0" Jul 6 23:56:41.031276 containerd[1456]: 2025-07-06 23:56:40.980 [INFO][4352] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9" 
HandleID="k8s-pod-network.f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9" Workload="localhost-k8s-calico--apiserver--5cf7666946--t2wmg-eth0" Jul 6 23:56:41.031276 containerd[1456]: 2025-07-06 23:56:40.980 [INFO][4352] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9" HandleID="k8s-pod-network.f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9" Workload="localhost-k8s-calico--apiserver--5cf7666946--t2wmg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139720), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5cf7666946-t2wmg", "timestamp":"2025-07-06 23:56:40.980154531 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:41.031276 containerd[1456]: 2025-07-06 23:56:40.980 [INFO][4352] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:41.031276 containerd[1456]: 2025-07-06 23:56:40.980 [INFO][4352] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:41.031276 containerd[1456]: 2025-07-06 23:56:40.980 [INFO][4352] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:56:41.031276 containerd[1456]: 2025-07-06 23:56:40.986 [INFO][4352] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9" host="localhost" Jul 6 23:56:41.031276 containerd[1456]: 2025-07-06 23:56:40.990 [INFO][4352] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:56:41.031276 containerd[1456]: 2025-07-06 23:56:40.994 [INFO][4352] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:56:41.031276 containerd[1456]: 2025-07-06 23:56:40.995 [INFO][4352] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:56:41.031276 containerd[1456]: 2025-07-06 23:56:40.997 [INFO][4352] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:56:41.031276 containerd[1456]: 2025-07-06 23:56:40.997 [INFO][4352] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9" host="localhost" Jul 6 23:56:41.031276 containerd[1456]: 2025-07-06 23:56:40.998 [INFO][4352] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9 Jul 6 23:56:41.031276 containerd[1456]: 2025-07-06 23:56:41.002 [INFO][4352] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9" host="localhost" Jul 6 23:56:41.031276 containerd[1456]: 2025-07-06 23:56:41.007 [INFO][4352] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9" host="localhost" Jul 6 23:56:41.031276 containerd[1456]: 2025-07-06 23:56:41.007 [INFO][4352] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9" 
host="localhost" Jul 6 23:56:41.031276 containerd[1456]: 2025-07-06 23:56:41.007 [INFO][4352] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:41.031276 containerd[1456]: 2025-07-06 23:56:41.007 [INFO][4352] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9" HandleID="k8s-pod-network.f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9" Workload="localhost-k8s-calico--apiserver--5cf7666946--t2wmg-eth0" Jul 6 23:56:41.032053 containerd[1456]: 2025-07-06 23:56:41.010 [INFO][4339] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9" Namespace="calico-apiserver" Pod="calico-apiserver-5cf7666946-t2wmg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cf7666946--t2wmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cf7666946--t2wmg-eth0", GenerateName:"calico-apiserver-5cf7666946-", Namespace:"calico-apiserver", SelfLink:"", UID:"0aebe8d7-c736-40fd-a06c-2169cc2c7e1f", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cf7666946", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5cf7666946-t2wmg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c05b9f21fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:41.032053 containerd[1456]: 2025-07-06 23:56:41.010 [INFO][4339] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9" Namespace="calico-apiserver" Pod="calico-apiserver-5cf7666946-t2wmg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cf7666946--t2wmg-eth0" Jul 6 23:56:41.032053 containerd[1456]: 2025-07-06 23:56:41.010 [INFO][4339] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c05b9f21fb ContainerID="f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9" Namespace="calico-apiserver" Pod="calico-apiserver-5cf7666946-t2wmg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cf7666946--t2wmg-eth0" Jul 6 23:56:41.032053 containerd[1456]: 2025-07-06 23:56:41.015 [INFO][4339] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9" Namespace="calico-apiserver" Pod="calico-apiserver-5cf7666946-t2wmg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cf7666946--t2wmg-eth0" Jul 6 23:56:41.032053 containerd[1456]: 2025-07-06 
23:56:41.015 [INFO][4339] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9" Namespace="calico-apiserver" Pod="calico-apiserver-5cf7666946-t2wmg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cf7666946--t2wmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cf7666946--t2wmg-eth0", GenerateName:"calico-apiserver-5cf7666946-", Namespace:"calico-apiserver", SelfLink:"", UID:"0aebe8d7-c736-40fd-a06c-2169cc2c7e1f", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cf7666946", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9", Pod:"calico-apiserver-5cf7666946-t2wmg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c05b9f21fb", MAC:"8a:c1:4b:e8:09:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:41.032053 containerd[1456]: 2025-07-06 23:56:41.026 [INFO][4339] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9" Namespace="calico-apiserver" Pod="calico-apiserver-5cf7666946-t2wmg" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cf7666946--t2wmg-eth0" Jul 6 23:56:41.053235 containerd[1456]: time="2025-07-06T23:56:41.052935758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:41.053235 containerd[1456]: time="2025-07-06T23:56:41.053008708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:41.053235 containerd[1456]: time="2025-07-06T23:56:41.053027324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:41.053235 containerd[1456]: time="2025-07-06T23:56:41.053117176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:41.075885 systemd[1]: Started cri-containerd-f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9.scope - libcontainer container f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9. 
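
The [4352] IPAM entries above trace Calico's allocation protocol end to end: acquire the host-wide lock, confirm the host's affinity for block 192.168.88.128/26, claim the next free address (here 192.168.88.131), write the block back, and release the lock. A minimal Python model of that lock-then-claim pattern follows; it is an illustrative sketch, not Calico's actual code, and the class and handle names are made up.

    import ipaddress
    import threading

    class BlockAllocator:
        """Toy model of Calico-style block assignment under a host-wide lock."""
        def __init__(self, cidr):
            self.block = ipaddress.ip_network(cidr)  # the host's affine block
            self.allocated = set()
            self.lock = threading.Lock()             # stands in for the host-wide IPAM lock

        def auto_assign(self, handle):
            with self.lock:                          # "Acquired host-wide IPAM lock."
                for ip in self.block.hosts():        # scan the block for a free slot
                    if ip not in self.allocated:
                        self.allocated.add(ip)       # "Writing block in order to claim IPs"
                        return str(ip)               # "Successfully claimed IPs"
                raise RuntimeError(f"block {self.block} exhausted (handle={handle})")
            # leaving the with-block: "Released host-wide IPAM lock."

    alloc = BlockAllocator("192.168.88.128/26")
    print(alloc.auto_assign("k8s-pod-network.deadbeef"))  # hypothetical handle -> 192.168.88.129
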
Jul 6 23:56:41.090078 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:56:41.119076 containerd[1456]: time="2025-07-06T23:56:41.118881510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf7666946-t2wmg,Uid:0aebe8d7-c736-40fd-a06c-2169cc2c7e1f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9\"" Jul 6 23:56:41.177066 systemd-networkd[1380]: cali5aab3ab07c9: Gained IPv6LL Jul 6 23:56:41.304863 systemd-networkd[1380]: cali3ba910cb2d5: Gained IPv6LL Jul 6 23:56:41.434268 systemd-networkd[1380]: vxlan.calico: Gained IPv6LL Jul 6 23:56:41.491904 containerd[1456]: time="2025-07-06T23:56:41.491837628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:41.492584 containerd[1456]: time="2025-07-06T23:56:41.492542016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 6 23:56:41.493720 containerd[1456]: time="2025-07-06T23:56:41.493691705Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:41.496059 containerd[1456]: time="2025-07-06T23:56:41.496024217Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:41.496907 containerd[1456]: time="2025-07-06T23:56:41.496870576Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.539928475s" Jul 6 23:56:41.496944 containerd[1456]: time="2025-07-06T23:56:41.496909201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 6 23:56:41.497985 containerd[1456]: time="2025-07-06T23:56:41.497938118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 6 23:56:41.498856 containerd[1456]: time="2025-07-06T23:56:41.498816388Z" level=info msg="CreateContainer within sandbox \"06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 6 23:56:41.513708 containerd[1456]: time="2025-07-06T23:56:41.513667038Z" level=info msg="CreateContainer within sandbox \"06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"79783d5793584674bf08e207e8c245542ea1464eee6229a5467095db19b405dc\"" Jul 6 23:56:41.514444 containerd[1456]: time="2025-07-06T23:56:41.514420589Z" level=info msg="StartContainer for \"79783d5793584674bf08e207e8c245542ea1464eee6229a5467095db19b405dc\"" Jul 6 23:56:41.548862 systemd[1]: Started cri-containerd-79783d5793584674bf08e207e8c245542ea1464eee6229a5467095db19b405dc.scope - libcontainer container 79783d5793584674bf08e207e8c245542ea1464eee6229a5467095db19b405dc. 
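
For scale: the whisker pull above moved 4,661,207 bytes in the reported 1.539928475 s, an effective transfer rate of roughly 3 MB/s.

    transferred = 4_661_207        # "bytes read" reported at 23:56:41.492
    duration = 1.539928475         # pull duration reported at 23:56:41.496
    print(f"{transferred / duration / 1e6:.2f} MB/s")   # -> 3.03 MB/s
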
Jul 6 23:56:41.587957 containerd[1456]: time="2025-07-06T23:56:41.587851679Z" level=info msg="StartContainer for \"79783d5793584674bf08e207e8c245542ea1464eee6229a5467095db19b405dc\" returns successfully" Jul 6 23:56:42.648918 systemd-networkd[1380]: cali4c05b9f21fb: Gained IPv6LL Jul 6 23:56:42.829534 containerd[1456]: time="2025-07-06T23:56:42.829475429Z" level=info msg="StopPodSandbox for \"36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb\"" Jul 6 23:56:42.829534 containerd[1456]: time="2025-07-06T23:56:42.829512189Z" level=info msg="StopPodSandbox for \"ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3\"" Jul 6 23:56:42.929074 containerd[1456]: 2025-07-06 23:56:42.882 [INFO][4475] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" Jul 6 23:56:42.929074 containerd[1456]: 2025-07-06 23:56:42.883 [INFO][4475] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" iface="eth0" netns="/var/run/netns/cni-1eaf47e0-096b-8caf-5dac-c4f6ce9cf061" Jul 6 23:56:42.929074 containerd[1456]: 2025-07-06 23:56:42.883 [INFO][4475] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" iface="eth0" netns="/var/run/netns/cni-1eaf47e0-096b-8caf-5dac-c4f6ce9cf061" Jul 6 23:56:42.929074 containerd[1456]: 2025-07-06 23:56:42.883 [INFO][4475] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" iface="eth0" netns="/var/run/netns/cni-1eaf47e0-096b-8caf-5dac-c4f6ce9cf061" Jul 6 23:56:42.929074 containerd[1456]: 2025-07-06 23:56:42.883 [INFO][4475] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" Jul 6 23:56:42.929074 containerd[1456]: 2025-07-06 23:56:42.883 [INFO][4475] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" Jul 6 23:56:42.929074 containerd[1456]: 2025-07-06 23:56:42.914 [INFO][4491] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" HandleID="k8s-pod-network.ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" Workload="localhost-k8s-coredns--668d6bf9bc--29zf2-eth0" Jul 6 23:56:42.929074 containerd[1456]: 2025-07-06 23:56:42.914 [INFO][4491] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:42.929074 containerd[1456]: 2025-07-06 23:56:42.914 [INFO][4491] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:42.929074 containerd[1456]: 2025-07-06 23:56:42.921 [WARNING][4491] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" HandleID="k8s-pod-network.ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" Workload="localhost-k8s-coredns--668d6bf9bc--29zf2-eth0" Jul 6 23:56:42.929074 containerd[1456]: 2025-07-06 23:56:42.921 [INFO][4491] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" HandleID="k8s-pod-network.ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" Workload="localhost-k8s-coredns--668d6bf9bc--29zf2-eth0" Jul 6 23:56:42.929074 containerd[1456]: 2025-07-06 23:56:42.923 [INFO][4491] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:42.929074 containerd[1456]: 2025-07-06 23:56:42.926 [INFO][4475] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" Jul 6 23:56:42.931651 containerd[1456]: time="2025-07-06T23:56:42.931596955Z" level=info msg="TearDown network for sandbox \"ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3\" successfully" Jul 6 23:56:42.931651 containerd[1456]: time="2025-07-06T23:56:42.931650197Z" level=info msg="StopPodSandbox for \"ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3\" returns successfully" Jul 6 23:56:42.932366 kubelet[2500]: E0706 23:56:42.932091 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:42.932818 containerd[1456]: time="2025-07-06T23:56:42.932607999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-29zf2,Uid:dd93b232-dbe2-459a-a97f-dd73be2c49bc,Namespace:kube-system,Attempt:1,}" Jul 6 23:56:42.935304 systemd[1]: run-netns-cni\x2d1eaf47e0\x2d096b\x2d8caf\x2d5dac\x2dc4f6ce9cf061.mount: Deactivated successfully. Jul 6 23:56:42.944652 containerd[1456]: 2025-07-06 23:56:42.889 [INFO][4474] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" Jul 6 23:56:42.944652 containerd[1456]: 2025-07-06 23:56:42.889 [INFO][4474] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" iface="eth0" netns="/var/run/netns/cni-6870802b-9c4e-5224-293e-aba9f295e9c1" Jul 6 23:56:42.944652 containerd[1456]: 2025-07-06 23:56:42.890 [INFO][4474] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" iface="eth0" netns="/var/run/netns/cni-6870802b-9c4e-5224-293e-aba9f295e9c1" Jul 6 23:56:42.944652 containerd[1456]: 2025-07-06 23:56:42.890 [INFO][4474] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" iface="eth0" netns="/var/run/netns/cni-6870802b-9c4e-5224-293e-aba9f295e9c1" Jul 6 23:56:42.944652 containerd[1456]: 2025-07-06 23:56:42.890 [INFO][4474] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" Jul 6 23:56:42.944652 containerd[1456]: 2025-07-06 23:56:42.890 [INFO][4474] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" Jul 6 23:56:42.944652 containerd[1456]: 2025-07-06 23:56:42.925 [INFO][4498] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" HandleID="k8s-pod-network.36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" Workload="localhost-k8s-csi--node--driver--8jzdf-eth0" Jul 6 23:56:42.944652 containerd[1456]: 2025-07-06 23:56:42.925 [INFO][4498] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:42.944652 containerd[1456]: 2025-07-06 23:56:42.925 [INFO][4498] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:42.944652 containerd[1456]: 2025-07-06 23:56:42.933 [WARNING][4498] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" HandleID="k8s-pod-network.36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" Workload="localhost-k8s-csi--node--driver--8jzdf-eth0" Jul 6 23:56:42.944652 containerd[1456]: 2025-07-06 23:56:42.933 [INFO][4498] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" HandleID="k8s-pod-network.36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" Workload="localhost-k8s-csi--node--driver--8jzdf-eth0" Jul 6 23:56:42.944652 containerd[1456]: 2025-07-06 23:56:42.936 [INFO][4498] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:42.944652 containerd[1456]: 2025-07-06 23:56:42.939 [INFO][4474] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" Jul 6 23:56:42.945211 containerd[1456]: time="2025-07-06T23:56:42.944852536Z" level=info msg="TearDown network for sandbox \"36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb\" successfully" Jul 6 23:56:42.945211 containerd[1456]: time="2025-07-06T23:56:42.944884498Z" level=info msg="StopPodSandbox for \"36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb\" returns successfully" Jul 6 23:56:42.946170 containerd[1456]: time="2025-07-06T23:56:42.946135069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8jzdf,Uid:573e83ed-8e01-4333-9a22-d115fe0e7655,Namespace:calico-system,Attempt:1,}" Jul 6 23:56:42.948980 systemd[1]: run-netns-cni\x2d6870802b\x2d9c4e\x2d5224\x2d293e\x2daba9f295e9c1.mount: Deactivated successfully. Jul 6 23:56:43.192995 systemd[1]: Started sshd@9-10.0.0.101:22-10.0.0.1:50224.service - OpenSSH per-connection server daemon (10.0.0.1:50224). Jul 6 23:56:43.253386 sshd[4556]: Accepted publickey for core from 10.0.0.1 port 50224 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:56:43.255423 sshd[4556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:43.260181 systemd-logind[1443]: New session 10 of user core. 
Jul 6 23:56:43.266852 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:56:43.444165 systemd-networkd[1380]: cali82bb0618dbd: Link UP Jul 6 23:56:43.445484 systemd-networkd[1380]: cali82bb0618dbd: Gained carrier Jul 6 23:56:43.448091 sshd[4556]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:43.453950 systemd[1]: sshd@9-10.0.0.101:22-10.0.0.1:50224.service: Deactivated successfully. Jul 6 23:56:43.459299 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:56:43.459653 containerd[1456]: 2025-07-06 23:56:42.994 [INFO][4509] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--29zf2-eth0 coredns-668d6bf9bc- kube-system dd93b232-dbe2-459a-a97f-dd73be2c49bc 1020 0 2025-07-06 23:56:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-29zf2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali82bb0618dbd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2" Namespace="kube-system" Pod="coredns-668d6bf9bc-29zf2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--29zf2-" Jul 6 23:56:43.459653 containerd[1456]: 2025-07-06 23:56:42.994 [INFO][4509] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2" Namespace="kube-system" Pod="coredns-668d6bf9bc-29zf2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--29zf2-eth0" Jul 6 23:56:43.459653 containerd[1456]: 2025-07-06 23:56:43.032 [INFO][4538] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2" HandleID="k8s-pod-network.b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2" Workload="localhost-k8s-coredns--668d6bf9bc--29zf2-eth0" Jul 6 23:56:43.459653 containerd[1456]: 2025-07-06 23:56:43.032 [INFO][4538] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2" HandleID="k8s-pod-network.b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2" Workload="localhost-k8s-coredns--668d6bf9bc--29zf2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003aae20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-29zf2", "timestamp":"2025-07-06 23:56:43.032673149 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:43.459653 containerd[1456]: 2025-07-06 23:56:43.032 [INFO][4538] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:43.459653 containerd[1456]: 2025-07-06 23:56:43.033 [INFO][4538] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:56:43.459653 containerd[1456]: 2025-07-06 23:56:43.033 [INFO][4538] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:56:43.459653 containerd[1456]: 2025-07-06 23:56:43.135 [INFO][4538] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2" host="localhost" Jul 6 23:56:43.459653 containerd[1456]: 2025-07-06 23:56:43.414 [INFO][4538] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:56:43.459653 containerd[1456]: 2025-07-06 23:56:43.419 [INFO][4538] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:56:43.459653 containerd[1456]: 2025-07-06 23:56:43.421 [INFO][4538] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:56:43.459653 containerd[1456]: 2025-07-06 23:56:43.424 [INFO][4538] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:56:43.459653 containerd[1456]: 2025-07-06 23:56:43.424 [INFO][4538] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2" host="localhost" Jul 6 23:56:43.459653 containerd[1456]: 2025-07-06 23:56:43.426 [INFO][4538] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2 Jul 6 23:56:43.459653 containerd[1456]: 2025-07-06 23:56:43.431 [INFO][4538] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2" host="localhost" Jul 6 23:56:43.459653 containerd[1456]: 2025-07-06 23:56:43.437 [INFO][4538] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2" host="localhost" Jul 6 23:56:43.459653 containerd[1456]: 2025-07-06 23:56:43.437 [INFO][4538] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2" host="localhost" Jul 6 23:56:43.459653 containerd[1456]: 2025-07-06 23:56:43.437 [INFO][4538] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:56:43.459653 containerd[1456]: 2025-07-06 23:56:43.437 [INFO][4538] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2" HandleID="k8s-pod-network.b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2" Workload="localhost-k8s-coredns--668d6bf9bc--29zf2-eth0" Jul 6 23:56:43.460623 containerd[1456]: 2025-07-06 23:56:43.440 [INFO][4509] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2" Namespace="kube-system" Pod="coredns-668d6bf9bc-29zf2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--29zf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--29zf2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dd93b232-dbe2-459a-a97f-dd73be2c49bc", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-29zf2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali82bb0618dbd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:43.460623 containerd[1456]: 2025-07-06 23:56:43.440 [INFO][4509] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2" Namespace="kube-system" Pod="coredns-668d6bf9bc-29zf2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--29zf2-eth0" Jul 6 23:56:43.460623 containerd[1456]: 2025-07-06 23:56:43.440 [INFO][4509] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali82bb0618dbd ContainerID="b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2" Namespace="kube-system" Pod="coredns-668d6bf9bc-29zf2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--29zf2-eth0" Jul 6 23:56:43.460623 containerd[1456]: 2025-07-06 23:56:43.446 [INFO][4509] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2" Namespace="kube-system" Pod="coredns-668d6bf9bc-29zf2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--29zf2-eth0" Jul 6 23:56:43.460623 
containerd[1456]: 2025-07-06 23:56:43.446 [INFO][4509] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2" Namespace="kube-system" Pod="coredns-668d6bf9bc-29zf2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--29zf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--29zf2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dd93b232-dbe2-459a-a97f-dd73be2c49bc", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2", Pod:"coredns-668d6bf9bc-29zf2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali82bb0618dbd", MAC:"da:6d:4a:19:0e:5f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:43.460623 containerd[1456]: 2025-07-06 23:56:43.455 [INFO][4509] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2" Namespace="kube-system" Pod="coredns-668d6bf9bc-29zf2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--29zf2-eth0" Jul 6 23:56:43.461389 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:56:43.463183 systemd-logind[1443]: Removed session 10. Jul 6 23:56:43.485583 containerd[1456]: time="2025-07-06T23:56:43.484877811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:43.485583 containerd[1456]: time="2025-07-06T23:56:43.485561208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:43.485583 containerd[1456]: time="2025-07-06T23:56:43.485574954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:43.485796 containerd[1456]: time="2025-07-06T23:56:43.485672671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:43.508893 systemd[1]: Started cri-containerd-b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2.scope - libcontainer container b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2. Jul 6 23:56:43.523076 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:56:43.550593 systemd-networkd[1380]: cali1a29a2145d8: Link UP Jul 6 23:56:43.551819 systemd-networkd[1380]: cali1a29a2145d8: Gained carrier Jul 6 23:56:43.564306 containerd[1456]: time="2025-07-06T23:56:43.564264203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-29zf2,Uid:dd93b232-dbe2-459a-a97f-dd73be2c49bc,Namespace:kube-system,Attempt:1,} returns sandbox id \"b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2\"" Jul 6 23:56:43.566935 kubelet[2500]: E0706 23:56:43.566579 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:43.569854 containerd[1456]: time="2025-07-06T23:56:43.569772845Z" level=info msg="CreateContainer within sandbox \"b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:56:43.571806 containerd[1456]: 2025-07-06 23:56:43.007 [INFO][4522] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8jzdf-eth0 csi-node-driver- calico-system 573e83ed-8e01-4333-9a22-d115fe0e7655 1021 0 2025-07-06 23:56:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-8jzdf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1a29a2145d8 [] [] }} ContainerID="a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f" Namespace="calico-system" Pod="csi-node-driver-8jzdf" WorkloadEndpoint="localhost-k8s-csi--node--driver--8jzdf-" Jul 6 23:56:43.571806 containerd[1456]: 2025-07-06 23:56:43.007 [INFO][4522] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f" Namespace="calico-system" Pod="csi-node-driver-8jzdf" WorkloadEndpoint="localhost-k8s-csi--node--driver--8jzdf-eth0" Jul 6 23:56:43.571806 containerd[1456]: 2025-07-06 23:56:43.042 [INFO][4547] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f" HandleID="k8s-pod-network.a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f" Workload="localhost-k8s-csi--node--driver--8jzdf-eth0" Jul 6 23:56:43.571806 containerd[1456]: 2025-07-06 23:56:43.042 [INFO][4547] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f" HandleID="k8s-pod-network.a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f" Workload="localhost-k8s-csi--node--driver--8jzdf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139420), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"csi-node-driver-8jzdf", "timestamp":"2025-07-06 23:56:43.042280078 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:43.571806 containerd[1456]: 2025-07-06 23:56:43.042 [INFO][4547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:43.571806 containerd[1456]: 2025-07-06 23:56:43.437 [INFO][4547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:43.571806 containerd[1456]: 2025-07-06 23:56:43.437 [INFO][4547] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:56:43.571806 containerd[1456]: 2025-07-06 23:56:43.444 [INFO][4547] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f" host="localhost" Jul 6 23:56:43.571806 containerd[1456]: 2025-07-06 23:56:43.517 [INFO][4547] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:56:43.571806 containerd[1456]: 2025-07-06 23:56:43.525 [INFO][4547] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:56:43.571806 containerd[1456]: 2025-07-06 23:56:43.527 [INFO][4547] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:56:43.571806 containerd[1456]: 2025-07-06 23:56:43.529 [INFO][4547] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:56:43.571806 containerd[1456]: 2025-07-06 23:56:43.529 [INFO][4547] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f" host="localhost" Jul 6 23:56:43.571806 containerd[1456]: 2025-07-06 23:56:43.531 [INFO][4547] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f Jul 6 23:56:43.571806 containerd[1456]: 2025-07-06 23:56:43.534 [INFO][4547] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f" host="localhost" Jul 6 23:56:43.571806 containerd[1456]: 2025-07-06 23:56:43.542 [INFO][4547] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f" host="localhost" Jul 6 23:56:43.571806 containerd[1456]: 2025-07-06 23:56:43.542 [INFO][4547] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f" host="localhost" Jul 6 23:56:43.571806 containerd[1456]: 2025-07-06 23:56:43.542 [INFO][4547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:56:43.571806 containerd[1456]: 2025-07-06 23:56:43.542 [INFO][4547] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f" HandleID="k8s-pod-network.a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f" Workload="localhost-k8s-csi--node--driver--8jzdf-eth0" Jul 6 23:56:43.572437 containerd[1456]: 2025-07-06 23:56:43.547 [INFO][4522] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f" Namespace="calico-system" Pod="csi-node-driver-8jzdf" WorkloadEndpoint="localhost-k8s-csi--node--driver--8jzdf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8jzdf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"573e83ed-8e01-4333-9a22-d115fe0e7655", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8jzdf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a29a2145d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:43.572437 containerd[1456]: 2025-07-06 23:56:43.547 [INFO][4522] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f" Namespace="calico-system" Pod="csi-node-driver-8jzdf" WorkloadEndpoint="localhost-k8s-csi--node--driver--8jzdf-eth0" Jul 6 23:56:43.572437 containerd[1456]: 2025-07-06 23:56:43.547 [INFO][4522] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a29a2145d8 ContainerID="a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f" Namespace="calico-system" Pod="csi-node-driver-8jzdf" WorkloadEndpoint="localhost-k8s-csi--node--driver--8jzdf-eth0" Jul 6 23:56:43.572437 containerd[1456]: 2025-07-06 23:56:43.552 [INFO][4522] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f" Namespace="calico-system" Pod="csi-node-driver-8jzdf" WorkloadEndpoint="localhost-k8s-csi--node--driver--8jzdf-eth0" Jul 6 23:56:43.572437 containerd[1456]: 2025-07-06 23:56:43.553 [INFO][4522] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f" Namespace="calico-system" Pod="csi-node-driver-8jzdf" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--8jzdf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8jzdf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"573e83ed-8e01-4333-9a22-d115fe0e7655", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f", Pod:"csi-node-driver-8jzdf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a29a2145d8", MAC:"96:60:ce:1b:52:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:43.572437 containerd[1456]: 2025-07-06 23:56:43.565 [INFO][4522] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f" Namespace="calico-system" Pod="csi-node-driver-8jzdf" WorkloadEndpoint="localhost-k8s-csi--node--driver--8jzdf-eth0" Jul 6 23:56:43.771417 containerd[1456]: time="2025-07-06T23:56:43.771172073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:43.771417 containerd[1456]: time="2025-07-06T23:56:43.771240995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:43.771417 containerd[1456]: time="2025-07-06T23:56:43.771255122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:43.771417 containerd[1456]: time="2025-07-06T23:56:43.771342318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:43.784841 containerd[1456]: time="2025-07-06T23:56:43.784573087Z" level=info msg="CreateContainer within sandbox \"b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f5eeea4c7eda78f2c3bbcfa7017e136237e16928107e7554b53cd6c9cde78873\"" Jul 6 23:56:43.786922 containerd[1456]: time="2025-07-06T23:56:43.786771691Z" level=info msg="StartContainer for \"f5eeea4c7eda78f2c3bbcfa7017e136237e16928107e7554b53cd6c9cde78873\"" Jul 6 23:56:43.793839 systemd[1]: Started cri-containerd-a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f.scope - libcontainer container a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f. 
Jul 6 23:56:43.817628 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:56:43.830280 containerd[1456]: time="2025-07-06T23:56:43.830230957Z" level=info msg="StopPodSandbox for \"c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f\"" Jul 6 23:56:43.832666 containerd[1456]: time="2025-07-06T23:56:43.830657934Z" level=info msg="StopPodSandbox for \"354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6\"" Jul 6 23:56:43.831188 systemd[1]: Started cri-containerd-f5eeea4c7eda78f2c3bbcfa7017e136237e16928107e7554b53cd6c9cde78873.scope - libcontainer container f5eeea4c7eda78f2c3bbcfa7017e136237e16928107e7554b53cd6c9cde78873. Jul 6 23:56:43.846130 containerd[1456]: time="2025-07-06T23:56:43.846077477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8jzdf,Uid:573e83ed-8e01-4333-9a22-d115fe0e7655,Namespace:calico-system,Attempt:1,} returns sandbox id \"a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f\"" Jul 6 23:56:44.099811 containerd[1456]: 2025-07-06 23:56:44.060 [INFO][4719] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" Jul 6 23:56:44.099811 containerd[1456]: 2025-07-06 23:56:44.061 [INFO][4719] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" iface="eth0" netns="/var/run/netns/cni-4ae3f80b-e368-2132-31f9-928148896c8c" Jul 6 23:56:44.099811 containerd[1456]: 2025-07-06 23:56:44.062 [INFO][4719] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" iface="eth0" netns="/var/run/netns/cni-4ae3f80b-e368-2132-31f9-928148896c8c" Jul 6 23:56:44.099811 containerd[1456]: 2025-07-06 23:56:44.062 [INFO][4719] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" iface="eth0" netns="/var/run/netns/cni-4ae3f80b-e368-2132-31f9-928148896c8c" Jul 6 23:56:44.099811 containerd[1456]: 2025-07-06 23:56:44.062 [INFO][4719] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" Jul 6 23:56:44.099811 containerd[1456]: 2025-07-06 23:56:44.062 [INFO][4719] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" Jul 6 23:56:44.099811 containerd[1456]: 2025-07-06 23:56:44.085 [INFO][4747] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" HandleID="k8s-pod-network.c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" Workload="localhost-k8s-calico--apiserver--5cf7666946--9ld94-eth0" Jul 6 23:56:44.099811 containerd[1456]: 2025-07-06 23:56:44.086 [INFO][4747] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:44.099811 containerd[1456]: 2025-07-06 23:56:44.086 [INFO][4747] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:44.099811 containerd[1456]: 2025-07-06 23:56:44.091 [WARNING][4747] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" HandleID="k8s-pod-network.c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" Workload="localhost-k8s-calico--apiserver--5cf7666946--9ld94-eth0" Jul 6 23:56:44.099811 containerd[1456]: 2025-07-06 23:56:44.091 [INFO][4747] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" HandleID="k8s-pod-network.c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" Workload="localhost-k8s-calico--apiserver--5cf7666946--9ld94-eth0" Jul 6 23:56:44.099811 containerd[1456]: 2025-07-06 23:56:44.092 [INFO][4747] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:44.099811 containerd[1456]: 2025-07-06 23:56:44.096 [INFO][4719] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" Jul 6 23:56:44.103875 containerd[1456]: time="2025-07-06T23:56:44.103838059Z" level=info msg="TearDown network for sandbox \"c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f\" successfully" Jul 6 23:56:44.103968 containerd[1456]: time="2025-07-06T23:56:44.103951515Z" level=info msg="StopPodSandbox for \"c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f\" returns successfully" Jul 6 23:56:44.104324 systemd[1]: run-netns-cni\x2d4ae3f80b\x2de368\x2d2132\x2d31f9\x2d928148896c8c.mount: Deactivated successfully. Jul 6 23:56:44.105465 containerd[1456]: time="2025-07-06T23:56:44.104855574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf7666946-9ld94,Uid:0911fce1-3f9f-4337-b200-a55b72bf320f,Namespace:calico-apiserver,Attempt:1,}" Jul 6 23:56:44.120914 containerd[1456]: 2025-07-06 23:56:44.066 [INFO][4729] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" Jul 6 23:56:44.120914 containerd[1456]: 2025-07-06 23:56:44.066 [INFO][4729] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" iface="eth0" netns="/var/run/netns/cni-7598a34c-3662-c520-6ef4-46cf4de551bf" Jul 6 23:56:44.120914 containerd[1456]: 2025-07-06 23:56:44.067 [INFO][4729] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" iface="eth0" netns="/var/run/netns/cni-7598a34c-3662-c520-6ef4-46cf4de551bf" Jul 6 23:56:44.120914 containerd[1456]: 2025-07-06 23:56:44.067 [INFO][4729] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" iface="eth0" netns="/var/run/netns/cni-7598a34c-3662-c520-6ef4-46cf4de551bf" Jul 6 23:56:44.120914 containerd[1456]: 2025-07-06 23:56:44.067 [INFO][4729] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" Jul 6 23:56:44.120914 containerd[1456]: 2025-07-06 23:56:44.067 [INFO][4729] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" Jul 6 23:56:44.120914 containerd[1456]: 2025-07-06 23:56:44.101 [INFO][4753] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" HandleID="k8s-pod-network.354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" Workload="localhost-k8s-coredns--668d6bf9bc--4rlpb-eth0" Jul 6 23:56:44.120914 containerd[1456]: 2025-07-06 23:56:44.101 [INFO][4753] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:44.120914 containerd[1456]: 2025-07-06 23:56:44.103 [INFO][4753] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:44.120914 containerd[1456]: 2025-07-06 23:56:44.110 [WARNING][4753] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" HandleID="k8s-pod-network.354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" Workload="localhost-k8s-coredns--668d6bf9bc--4rlpb-eth0" Jul 6 23:56:44.120914 containerd[1456]: 2025-07-06 23:56:44.111 [INFO][4753] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" HandleID="k8s-pod-network.354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" Workload="localhost-k8s-coredns--668d6bf9bc--4rlpb-eth0" Jul 6 23:56:44.120914 containerd[1456]: 2025-07-06 23:56:44.114 [INFO][4753] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:44.120914 containerd[1456]: 2025-07-06 23:56:44.118 [INFO][4729] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" Jul 6 23:56:44.121673 containerd[1456]: time="2025-07-06T23:56:44.121629780Z" level=info msg="TearDown network for sandbox \"354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6\" successfully" Jul 6 23:56:44.122249 containerd[1456]: time="2025-07-06T23:56:44.122218395Z" level=info msg="StopPodSandbox for \"354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6\" returns successfully" Jul 6 23:56:44.123654 kubelet[2500]: E0706 23:56:44.123326 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:44.124647 containerd[1456]: time="2025-07-06T23:56:44.123925248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4rlpb,Uid:580e7206-665f-4270-aab2-39eaf9dc4990,Namespace:kube-system,Attempt:1,}" Jul 6 23:56:44.125042 systemd[1]: run-netns-cni\x2d7598a34c\x2d3662\x2dc520\x2d6ef4\x2d46cf4de551bf.mount: Deactivated successfully. 
Jul 6 23:56:44.317227 containerd[1456]: time="2025-07-06T23:56:44.317157461Z" level=info msg="StartContainer for \"f5eeea4c7eda78f2c3bbcfa7017e136237e16928107e7554b53cd6c9cde78873\" returns successfully" Jul 6 23:56:44.320607 kubelet[2500]: E0706 23:56:44.320555 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:44.554987 kubelet[2500]: I0706 23:56:44.554833 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-29zf2" podStartSLOduration=41.554808217 podStartE2EDuration="41.554808217s" podCreationTimestamp="2025-07-06 23:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:44.554596762 +0000 UTC m=+47.828630810" watchObservedRunningTime="2025-07-06 23:56:44.554808217 +0000 UTC m=+47.828842265" Jul 6 23:56:44.620318 containerd[1456]: time="2025-07-06T23:56:44.620083761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:44.634050 containerd[1456]: time="2025-07-06T23:56:44.633602154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 6 23:56:44.696906 systemd-networkd[1380]: cali1a29a2145d8: Gained IPv6LL Jul 6 23:56:44.749244 containerd[1456]: time="2025-07-06T23:56:44.749196311Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:44.754563 containerd[1456]: time="2025-07-06T23:56:44.754504458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:44.755383 containerd[1456]: time="2025-07-06T23:56:44.755326709Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.257349056s" Jul 6 23:56:44.755446 containerd[1456]: time="2025-07-06T23:56:44.755387155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 6 23:56:44.759208 containerd[1456]: time="2025-07-06T23:56:44.759186949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:56:44.768243 containerd[1456]: time="2025-07-06T23:56:44.768075120Z" level=info msg="CreateContainer within sandbox \"82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 6 23:56:44.797020 containerd[1456]: time="2025-07-06T23:56:44.796966940Z" level=info msg="CreateContainer within sandbox \"82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"06c9c0cac1f75528787b4dd9a27434a52b62de7be575aa524f2af4abd09fe4c7\"" Jul 6 23:56:44.799032 containerd[1456]: time="2025-07-06T23:56:44.798927229Z" level=info msg="StartContainer for \"06c9c0cac1f75528787b4dd9a27434a52b62de7be575aa524f2af4abd09fe4c7\"" Jul 6 23:56:44.842460 containerd[1456]: time="2025-07-06T23:56:44.841001198Z" level=info msg="StopPodSandbox for \"7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee\"" Jul 6 23:56:44.871932 systemd[1]: Started cri-containerd-06c9c0cac1f75528787b4dd9a27434a52b62de7be575aa524f2af4abd09fe4c7.scope - libcontainer container 06c9c0cac1f75528787b4dd9a27434a52b62de7be575aa524f2af4abd09fe4c7. Jul 6 23:56:44.900595 systemd-networkd[1380]: cali24164d1aa95: Link UP Jul 6 23:56:44.901676 systemd-networkd[1380]: cali24164d1aa95: Gained carrier Jul 6 23:56:44.920571 containerd[1456]: 2025-07-06 23:56:44.788 [INFO][4770] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5cf7666946--9ld94-eth0 calico-apiserver-5cf7666946- calico-apiserver 0911fce1-3f9f-4337-b200-a55b72bf320f 1040 0 2025-07-06 23:56:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cf7666946 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5cf7666946-9ld94 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali24164d1aa95 [] [] }} ContainerID="37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38" Namespace="calico-apiserver" Pod="calico-apiserver-5cf7666946-9ld94" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cf7666946--9ld94-" Jul 6 23:56:44.920571 containerd[1456]: 2025-07-06 23:56:44.789 [INFO][4770] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38" Namespace="calico-apiserver" Pod="calico-apiserver-5cf7666946-9ld94" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cf7666946--9ld94-eth0" Jul 6 23:56:44.920571 containerd[1456]: 2025-07-06 23:56:44.839 [INFO][4806] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38" HandleID="k8s-pod-network.37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38" Workload="localhost-k8s-calico--apiserver--5cf7666946--9ld94-eth0" Jul 6 23:56:44.920571 containerd[1456]: 2025-07-06 23:56:44.841 [INFO][4806] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38" HandleID="k8s-pod-network.37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38" Workload="localhost-k8s-calico--apiserver--5cf7666946--9ld94-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ce830), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5cf7666946-9ld94", "timestamp":"2025-07-06 23:56:44.839629195 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:44.920571 containerd[1456]: 2025-07-06 23:56:44.841 [INFO][4806] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 6 23:56:44.920571 containerd[1456]: 2025-07-06 23:56:44.841 [INFO][4806] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:44.920571 containerd[1456]: 2025-07-06 23:56:44.841 [INFO][4806] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:56:44.920571 containerd[1456]: 2025-07-06 23:56:44.848 [INFO][4806] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38" host="localhost" Jul 6 23:56:44.920571 containerd[1456]: 2025-07-06 23:56:44.861 [INFO][4806] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:56:44.920571 containerd[1456]: 2025-07-06 23:56:44.868 [INFO][4806] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:56:44.920571 containerd[1456]: 2025-07-06 23:56:44.870 [INFO][4806] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:56:44.920571 containerd[1456]: 2025-07-06 23:56:44.873 [INFO][4806] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:56:44.920571 containerd[1456]: 2025-07-06 23:56:44.873 [INFO][4806] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38" host="localhost" Jul 6 23:56:44.920571 containerd[1456]: 2025-07-06 23:56:44.874 [INFO][4806] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38 Jul 6 23:56:44.920571 containerd[1456]: 2025-07-06 23:56:44.880 [INFO][4806] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38" host="localhost" Jul 6 23:56:44.920571 containerd[1456]: 2025-07-06 23:56:44.889 [INFO][4806] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38" host="localhost" Jul 6 23:56:44.920571 containerd[1456]: 2025-07-06 23:56:44.889 [INFO][4806] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38" host="localhost" Jul 6 23:56:44.920571 containerd[1456]: 2025-07-06 23:56:44.889 [INFO][4806] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:56:44.920571 containerd[1456]: 2025-07-06 23:56:44.889 [INFO][4806] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38" HandleID="k8s-pod-network.37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38" Workload="localhost-k8s-calico--apiserver--5cf7666946--9ld94-eth0" Jul 6 23:56:44.921381 containerd[1456]: 2025-07-06 23:56:44.897 [INFO][4770] cni-plugin/k8s.go 418: Populated endpoint ContainerID="37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38" Namespace="calico-apiserver" Pod="calico-apiserver-5cf7666946-9ld94" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cf7666946--9ld94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cf7666946--9ld94-eth0", GenerateName:"calico-apiserver-5cf7666946-", Namespace:"calico-apiserver", SelfLink:"", UID:"0911fce1-3f9f-4337-b200-a55b72bf320f", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cf7666946", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5cf7666946-9ld94", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali24164d1aa95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:44.921381 containerd[1456]: 2025-07-06 23:56:44.897 [INFO][4770] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38" Namespace="calico-apiserver" Pod="calico-apiserver-5cf7666946-9ld94" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cf7666946--9ld94-eth0" Jul 6 23:56:44.921381 containerd[1456]: 2025-07-06 23:56:44.898 [INFO][4770] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali24164d1aa95 ContainerID="37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38" Namespace="calico-apiserver" Pod="calico-apiserver-5cf7666946-9ld94" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cf7666946--9ld94-eth0" Jul 6 23:56:44.921381 containerd[1456]: 2025-07-06 23:56:44.902 [INFO][4770] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38" Namespace="calico-apiserver" Pod="calico-apiserver-5cf7666946-9ld94" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cf7666946--9ld94-eth0" Jul 6 23:56:44.921381 containerd[1456]: 2025-07-06 23:56:44.902 [INFO][4770] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38" Namespace="calico-apiserver" Pod="calico-apiserver-5cf7666946-9ld94" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cf7666946--9ld94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cf7666946--9ld94-eth0", GenerateName:"calico-apiserver-5cf7666946-", Namespace:"calico-apiserver", SelfLink:"", UID:"0911fce1-3f9f-4337-b200-a55b72bf320f", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cf7666946", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38", Pod:"calico-apiserver-5cf7666946-9ld94", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali24164d1aa95", MAC:"fe:27:7c:d6:12:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:44.921381 containerd[1456]: 2025-07-06 23:56:44.916 [INFO][4770] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38" Namespace="calico-apiserver" Pod="calico-apiserver-5cf7666946-9ld94" WorkloadEndpoint="localhost-k8s-calico--apiserver--5cf7666946--9ld94-eth0" Jul 6 23:56:45.016955 systemd-networkd[1380]: cali82bb0618dbd: Gained IPv6LL Jul 6 23:56:45.323143 containerd[1456]: time="2025-07-06T23:56:45.322789227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:45.323143 containerd[1456]: time="2025-07-06T23:56:45.322854543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:45.323143 containerd[1456]: time="2025-07-06T23:56:45.322866575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:45.323143 containerd[1456]: time="2025-07-06T23:56:45.322954944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:45.344137 containerd[1456]: time="2025-07-06T23:56:45.344081422Z" level=info msg="StartContainer for \"06c9c0cac1f75528787b4dd9a27434a52b62de7be575aa524f2af4abd09fe4c7\" returns successfully" Jul 6 23:56:45.349758 systemd-networkd[1380]: calibba0bcb807b: Link UP Jul 6 23:56:45.352867 kubelet[2500]: E0706 23:56:45.352251 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:45.352416 systemd-networkd[1380]: calibba0bcb807b: Gained carrier Jul 6 23:56:45.373977 systemd[1]: Started cri-containerd-37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38.scope - libcontainer container 37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38. Jul 6 23:56:45.390010 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:56:45.439119 containerd[1456]: time="2025-07-06T23:56:45.439056887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf7666946-9ld94,Uid:0911fce1-3f9f-4337-b200-a55b72bf320f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38\"" Jul 6 23:56:45.478334 containerd[1456]: 2025-07-06 23:56:44.804 [INFO][4782] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--4rlpb-eth0 coredns-668d6bf9bc- kube-system 580e7206-665f-4270-aab2-39eaf9dc4990 1041 0 2025-07-06 23:56:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-4rlpb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibba0bcb807b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0" Namespace="kube-system" Pod="coredns-668d6bf9bc-4rlpb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4rlpb-" Jul 6 23:56:45.478334 containerd[1456]: 2025-07-06 23:56:44.804 [INFO][4782] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0" Namespace="kube-system" Pod="coredns-668d6bf9bc-4rlpb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4rlpb-eth0" Jul 6 23:56:45.478334 containerd[1456]: 2025-07-06 23:56:44.846 [INFO][4816] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0" HandleID="k8s-pod-network.5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0" Workload="localhost-k8s-coredns--668d6bf9bc--4rlpb-eth0" Jul 6 23:56:45.478334 containerd[1456]: 2025-07-06 23:56:44.846 [INFO][4816] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0" HandleID="k8s-pod-network.5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0" Workload="localhost-k8s-coredns--668d6bf9bc--4rlpb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138e30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-4rlpb", "timestamp":"2025-07-06 23:56:44.846042206 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:45.478334 containerd[1456]: 2025-07-06 23:56:44.846 [INFO][4816] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:45.478334 containerd[1456]: 2025-07-06 23:56:44.889 [INFO][4816] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:45.478334 containerd[1456]: 2025-07-06 23:56:44.889 [INFO][4816] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:56:45.478334 containerd[1456]: 2025-07-06 23:56:44.949 [INFO][4816] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0" host="localhost" Jul 6 23:56:45.478334 containerd[1456]: 2025-07-06 23:56:45.121 [INFO][4816] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:56:45.478334 containerd[1456]: 2025-07-06 23:56:45.155 [INFO][4816] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:56:45.478334 containerd[1456]: 2025-07-06 23:56:45.187 [INFO][4816] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:56:45.478334 containerd[1456]: 2025-07-06 23:56:45.190 [INFO][4816] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:56:45.478334 containerd[1456]: 2025-07-06 23:56:45.190 [INFO][4816] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0" host="localhost" Jul 6 23:56:45.478334 containerd[1456]: 2025-07-06 23:56:45.191 [INFO][4816] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0 Jul 6 23:56:45.478334 containerd[1456]: 2025-07-06 23:56:45.202 [INFO][4816] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0" host="localhost" Jul 6 23:56:45.478334 containerd[1456]: 2025-07-06 23:56:45.329 [INFO][4816] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0" host="localhost" Jul 6 23:56:45.478334 containerd[1456]: 2025-07-06 23:56:45.329 [INFO][4816] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0" host="localhost" Jul 6 23:56:45.478334 containerd[1456]: 2025-07-06 23:56:45.329 [INFO][4816] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:56:45.478334 containerd[1456]: 2025-07-06 23:56:45.329 [INFO][4816] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0" HandleID="k8s-pod-network.5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0" Workload="localhost-k8s-coredns--668d6bf9bc--4rlpb-eth0" Jul 6 23:56:45.479524 containerd[1456]: 2025-07-06 23:56:45.341 [INFO][4782] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0" Namespace="kube-system" Pod="coredns-668d6bf9bc-4rlpb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4rlpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4rlpb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"580e7206-665f-4270-aab2-39eaf9dc4990", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-4rlpb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibba0bcb807b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:45.479524 containerd[1456]: 2025-07-06 23:56:45.343 [INFO][4782] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0" Namespace="kube-system" Pod="coredns-668d6bf9bc-4rlpb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4rlpb-eth0" Jul 6 23:56:45.479524 containerd[1456]: 2025-07-06 23:56:45.344 [INFO][4782] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibba0bcb807b ContainerID="5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0" Namespace="kube-system" Pod="coredns-668d6bf9bc-4rlpb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4rlpb-eth0" Jul 6 23:56:45.479524 containerd[1456]: 2025-07-06 23:56:45.350 [INFO][4782] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0" Namespace="kube-system" Pod="coredns-668d6bf9bc-4rlpb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4rlpb-eth0" Jul 6 23:56:45.479524 
containerd[1456]: 2025-07-06 23:56:45.351 [INFO][4782] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0" Namespace="kube-system" Pod="coredns-668d6bf9bc-4rlpb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4rlpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4rlpb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"580e7206-665f-4270-aab2-39eaf9dc4990", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0", Pod:"coredns-668d6bf9bc-4rlpb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibba0bcb807b", MAC:"7e:ab:73:08:b8:49", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:45.479524 containerd[1456]: 2025-07-06 23:56:45.472 [INFO][4782] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0" Namespace="kube-system" Pod="coredns-668d6bf9bc-4rlpb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4rlpb-eth0" Jul 6 23:56:45.481386 containerd[1456]: 2025-07-06 23:56:44.913 [INFO][4851] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" Jul 6 23:56:45.481386 containerd[1456]: 2025-07-06 23:56:44.914 [INFO][4851] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" iface="eth0" netns="/var/run/netns/cni-00e0af2b-4439-267e-d75b-2163f5f821db" Jul 6 23:56:45.481386 containerd[1456]: 2025-07-06 23:56:44.917 [INFO][4851] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" iface="eth0" netns="/var/run/netns/cni-00e0af2b-4439-267e-d75b-2163f5f821db" Jul 6 23:56:45.481386 containerd[1456]: 2025-07-06 23:56:44.917 [INFO][4851] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" iface="eth0" netns="/var/run/netns/cni-00e0af2b-4439-267e-d75b-2163f5f821db" Jul 6 23:56:45.481386 containerd[1456]: 2025-07-06 23:56:44.917 [INFO][4851] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" Jul 6 23:56:45.481386 containerd[1456]: 2025-07-06 23:56:44.918 [INFO][4851] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" Jul 6 23:56:45.481386 containerd[1456]: 2025-07-06 23:56:45.200 [INFO][4868] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" HandleID="k8s-pod-network.7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" Workload="localhost-k8s-goldmane--768f4c5c69--zk8l4-eth0" Jul 6 23:56:45.481386 containerd[1456]: 2025-07-06 23:56:45.201 [INFO][4868] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:45.481386 containerd[1456]: 2025-07-06 23:56:45.329 [INFO][4868] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:45.481386 containerd[1456]: 2025-07-06 23:56:45.409 [WARNING][4868] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" HandleID="k8s-pod-network.7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" Workload="localhost-k8s-goldmane--768f4c5c69--zk8l4-eth0" Jul 6 23:56:45.481386 containerd[1456]: 2025-07-06 23:56:45.409 [INFO][4868] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" HandleID="k8s-pod-network.7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" Workload="localhost-k8s-goldmane--768f4c5c69--zk8l4-eth0" Jul 6 23:56:45.481386 containerd[1456]: 2025-07-06 23:56:45.469 [INFO][4868] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:45.481386 containerd[1456]: 2025-07-06 23:56:45.477 [INFO][4851] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" Jul 6 23:56:45.484129 containerd[1456]: time="2025-07-06T23:56:45.483986338Z" level=info msg="TearDown network for sandbox \"7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee\" successfully" Jul 6 23:56:45.484129 containerd[1456]: time="2025-07-06T23:56:45.484017850Z" level=info msg="StopPodSandbox for \"7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee\" returns successfully" Jul 6 23:56:45.486805 containerd[1456]: time="2025-07-06T23:56:45.486774829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-zk8l4,Uid:e80de9bd-3003-481f-b571-cacd2045a049,Namespace:calico-system,Attempt:1,}" Jul 6 23:56:45.488027 systemd[1]: run-netns-cni\x2d00e0af2b\x2d4439\x2d267e\x2dd75b\x2d2163f5f821db.mount: Deactivated successfully. 
Jul 6 23:56:45.567132 kubelet[2500]: I0706 23:56:45.567038 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5b86b4658f-c279f" podStartSLOduration=26.989821518 podStartE2EDuration="31.567014388s" podCreationTimestamp="2025-07-06 23:56:14 +0000 UTC" firstStartedPulling="2025-07-06 23:56:40.18130931 +0000 UTC m=+43.455343358" lastFinishedPulling="2025-07-06 23:56:44.75850218 +0000 UTC m=+48.032536228" observedRunningTime="2025-07-06 23:56:45.50794984 +0000 UTC m=+48.781983888" watchObservedRunningTime="2025-07-06 23:56:45.567014388 +0000 UTC m=+48.841048436" Jul 6 23:56:45.568956 containerd[1456]: time="2025-07-06T23:56:45.568240121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:45.568956 containerd[1456]: time="2025-07-06T23:56:45.568747500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:45.568956 containerd[1456]: time="2025-07-06T23:56:45.568763070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:45.568956 containerd[1456]: time="2025-07-06T23:56:45.568868482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:45.595862 systemd[1]: Started cri-containerd-5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0.scope - libcontainer container 5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0. Jul 6 23:56:45.609220 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:56:45.634892 containerd[1456]: time="2025-07-06T23:56:45.634845883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4rlpb,Uid:580e7206-665f-4270-aab2-39eaf9dc4990,Namespace:kube-system,Attempt:1,} returns sandbox id \"5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0\"" Jul 6 23:56:45.635966 kubelet[2500]: E0706 23:56:45.635932 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:45.638392 containerd[1456]: time="2025-07-06T23:56:45.638326405Z" level=info msg="CreateContainer within sandbox \"5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:56:46.083041 systemd-networkd[1380]: calia03d9c0bbf7: Link UP Jul 6 23:56:46.084675 systemd-networkd[1380]: calia03d9c0bbf7: Gained carrier Jul 6 23:56:46.232906 systemd-networkd[1380]: cali24164d1aa95: Gained IPv6LL Jul 6 23:56:46.356882 kubelet[2500]: E0706 23:56:46.356776 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:46.413762 containerd[1456]: 2025-07-06 23:56:45.900 [INFO][5028] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--zk8l4-eth0 goldmane-768f4c5c69- calico-system e80de9bd-3003-481f-b571-cacd2045a049 1059 0 2025-07-06 23:56:13 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-zk8l4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia03d9c0bbf7 [] [] }} ContainerID="e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d" Namespace="calico-system" Pod="goldmane-768f4c5c69-zk8l4" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--zk8l4-" Jul 6 23:56:46.413762 containerd[1456]: 2025-07-06 23:56:45.900 [INFO][5028] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d" Namespace="calico-system" Pod="goldmane-768f4c5c69-zk8l4" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--zk8l4-eth0" Jul 6 23:56:46.413762 containerd[1456]: 2025-07-06 23:56:45.924 [INFO][5044] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d" HandleID="k8s-pod-network.e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d" Workload="localhost-k8s-goldmane--768f4c5c69--zk8l4-eth0" Jul 6 23:56:46.413762 containerd[1456]: 2025-07-06 23:56:45.925 [INFO][5044] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d" HandleID="k8s-pod-network.e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d" Workload="localhost-k8s-goldmane--768f4c5c69--zk8l4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001396a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-zk8l4", "timestamp":"2025-07-06 23:56:45.924951033 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:56:46.413762 containerd[1456]: 2025-07-06 23:56:45.925 [INFO][5044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:46.413762 containerd[1456]: 2025-07-06 23:56:45.925 [INFO][5044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:56:46.413762 containerd[1456]: 2025-07-06 23:56:45.925 [INFO][5044] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:56:46.413762 containerd[1456]: 2025-07-06 23:56:45.931 [INFO][5044] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d" host="localhost" Jul 6 23:56:46.413762 containerd[1456]: 2025-07-06 23:56:45.937 [INFO][5044] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:56:46.413762 containerd[1456]: 2025-07-06 23:56:45.942 [INFO][5044] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:56:46.413762 containerd[1456]: 2025-07-06 23:56:46.019 [INFO][5044] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:56:46.413762 containerd[1456]: 2025-07-06 23:56:46.021 [INFO][5044] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:56:46.413762 containerd[1456]: 2025-07-06 23:56:46.021 [INFO][5044] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d" host="localhost" Jul 6 23:56:46.413762 containerd[1456]: 2025-07-06 23:56:46.022 [INFO][5044] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d Jul 6 23:56:46.413762 containerd[1456]: 2025-07-06 23:56:46.028 [INFO][5044] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d" host="localhost" Jul 6 23:56:46.413762 containerd[1456]: 2025-07-06 23:56:46.076 [INFO][5044] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d" host="localhost" Jul 6 23:56:46.413762 containerd[1456]: 2025-07-06 23:56:46.076 [INFO][5044] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d" host="localhost" Jul 6 23:56:46.413762 containerd[1456]: 2025-07-06 23:56:46.076 [INFO][5044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:56:46.413762 containerd[1456]: 2025-07-06 23:56:46.076 [INFO][5044] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d" HandleID="k8s-pod-network.e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d" Workload="localhost-k8s-goldmane--768f4c5c69--zk8l4-eth0" Jul 6 23:56:46.414799 containerd[1456]: 2025-07-06 23:56:46.080 [INFO][5028] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d" Namespace="calico-system" Pod="goldmane-768f4c5c69-zk8l4" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--zk8l4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--zk8l4-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"e80de9bd-3003-481f-b571-cacd2045a049", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-zk8l4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia03d9c0bbf7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:46.414799 containerd[1456]: 2025-07-06 23:56:46.080 [INFO][5028] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d" Namespace="calico-system" Pod="goldmane-768f4c5c69-zk8l4" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--zk8l4-eth0" Jul 6 23:56:46.414799 containerd[1456]: 2025-07-06 23:56:46.080 [INFO][5028] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia03d9c0bbf7 ContainerID="e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d" Namespace="calico-system" Pod="goldmane-768f4c5c69-zk8l4" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--zk8l4-eth0" Jul 6 23:56:46.414799 containerd[1456]: 2025-07-06 23:56:46.082 [INFO][5028] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d" Namespace="calico-system" Pod="goldmane-768f4c5c69-zk8l4" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--zk8l4-eth0" Jul 6 23:56:46.414799 containerd[1456]: 2025-07-06 23:56:46.083 [INFO][5028] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d" Namespace="calico-system" Pod="goldmane-768f4c5c69-zk8l4" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--zk8l4-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--zk8l4-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"e80de9bd-3003-481f-b571-cacd2045a049", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d", Pod:"goldmane-768f4c5c69-zk8l4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia03d9c0bbf7", MAC:"1a:c0:73:fb:57:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:46.414799 containerd[1456]: 2025-07-06 23:56:46.410 [INFO][5028] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d" Namespace="calico-system" Pod="goldmane-768f4c5c69-zk8l4" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--zk8l4-eth0" Jul 6 23:56:46.556080 containerd[1456]: time="2025-07-06T23:56:46.556030459Z" level=info msg="CreateContainer within sandbox \"5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"208038a47954cef5fd5ae0d39bcb18eefe240c6c5116c9f58a314bfc27a56d85\"" Jul 6 23:56:46.556757 containerd[1456]: time="2025-07-06T23:56:46.556610937Z" level=info msg="StartContainer for \"208038a47954cef5fd5ae0d39bcb18eefe240c6c5116c9f58a314bfc27a56d85\"" Jul 6 23:56:46.586870 systemd[1]: Started cri-containerd-208038a47954cef5fd5ae0d39bcb18eefe240c6c5116c9f58a314bfc27a56d85.scope - libcontainer container 208038a47954cef5fd5ae0d39bcb18eefe240c6c5116c9f58a314bfc27a56d85. Jul 6 23:56:46.627593 containerd[1456]: time="2025-07-06T23:56:46.626963847Z" level=info msg="StartContainer for \"208038a47954cef5fd5ae0d39bcb18eefe240c6c5116c9f58a314bfc27a56d85\" returns successfully" Jul 6 23:56:46.631464 containerd[1456]: time="2025-07-06T23:56:46.631309460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:46.631821 containerd[1456]: time="2025-07-06T23:56:46.631507248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:46.631821 containerd[1456]: time="2025-07-06T23:56:46.631531344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:46.631821 containerd[1456]: time="2025-07-06T23:56:46.631671321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:46.661079 systemd[1]: Started cri-containerd-e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d.scope - libcontainer container e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d. Jul 6 23:56:46.678455 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:56:46.708989 containerd[1456]: time="2025-07-06T23:56:46.708937561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-zk8l4,Uid:e80de9bd-3003-481f-b571-cacd2045a049,Namespace:calico-system,Attempt:1,} returns sandbox id \"e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d\"" Jul 6 23:56:47.000903 systemd-networkd[1380]: calibba0bcb807b: Gained IPv6LL Jul 6 23:56:47.361171 kubelet[2500]: E0706 23:56:47.361126 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:47.361609 kubelet[2500]: E0706 23:56:47.361210 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:47.372740 kubelet[2500]: I0706 23:56:47.372403 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4rlpb" podStartSLOduration=44.372384687 podStartE2EDuration="44.372384687s" podCreationTimestamp="2025-07-06 23:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:47.372117006 +0000 UTC m=+50.646151054" watchObservedRunningTime="2025-07-06 23:56:47.372384687 +0000 UTC m=+50.646418735" Jul 6 23:56:47.704979 systemd-networkd[1380]: calia03d9c0bbf7: Gained IPv6LL Jul 6 23:56:48.364422 kubelet[2500]: E0706 23:56:48.364366 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:48.466823 systemd[1]: Started sshd@10-10.0.0.101:22-10.0.0.1:50232.service - OpenSSH per-connection server daemon (10.0.0.1:50232). 
Jul 6 23:56:48.586487 containerd[1456]: time="2025-07-06T23:56:48.586420936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:48.587198 containerd[1456]: time="2025-07-06T23:56:48.587159737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 6 23:56:48.594891 containerd[1456]: time="2025-07-06T23:56:48.594812065Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:48.597087 containerd[1456]: time="2025-07-06T23:56:48.597040102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:48.598218 containerd[1456]: time="2025-07-06T23:56:48.598013190Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.838549452s" Jul 6 23:56:48.598218 containerd[1456]: time="2025-07-06T23:56:48.598054719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 6 23:56:48.600147 containerd[1456]: time="2025-07-06T23:56:48.600113181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 6 23:56:48.600881 containerd[1456]: time="2025-07-06T23:56:48.600841122Z" level=info msg="CreateContainer within sandbox \"f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:56:48.635680 sshd[5178]: Accepted publickey for core from 10.0.0.1 port 50232 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:56:48.637692 sshd[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:48.641945 systemd-logind[1443]: New session 11 of user core. Jul 6 23:56:48.651844 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 6 23:56:48.706926 containerd[1456]: time="2025-07-06T23:56:48.706860948Z" level=info msg="CreateContainer within sandbox \"f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f82cb70ecbdfb54efee75a20062899b9fa09c7fc8b4ad6e8124952d441ab510a\"" Jul 6 23:56:48.707983 containerd[1456]: time="2025-07-06T23:56:48.707950389Z" level=info msg="StartContainer for \"f82cb70ecbdfb54efee75a20062899b9fa09c7fc8b4ad6e8124952d441ab510a\"" Jul 6 23:56:48.744986 systemd[1]: Started cri-containerd-f82cb70ecbdfb54efee75a20062899b9fa09c7fc8b4ad6e8124952d441ab510a.scope - libcontainer container f82cb70ecbdfb54efee75a20062899b9fa09c7fc8b4ad6e8124952d441ab510a. 
Jul 6 23:56:48.902881 containerd[1456]: time="2025-07-06T23:56:48.901572659Z" level=info msg="StartContainer for \"f82cb70ecbdfb54efee75a20062899b9fa09c7fc8b4ad6e8124952d441ab510a\" returns successfully" Jul 6 23:56:48.908033 sshd[5178]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:48.915636 systemd[1]: sshd@10-10.0.0.101:22-10.0.0.1:50232.service: Deactivated successfully. Jul 6 23:56:48.917424 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:56:48.919640 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:56:48.926296 systemd[1]: Started sshd@11-10.0.0.101:22-10.0.0.1:50238.service - OpenSSH per-connection server daemon (10.0.0.1:50238). Jul 6 23:56:48.927448 systemd-logind[1443]: Removed session 11. Jul 6 23:56:48.966677 sshd[5238]: Accepted publickey for core from 10.0.0.1 port 50238 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:56:48.966970 sshd[5238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:48.972110 systemd-logind[1443]: New session 12 of user core. Jul 6 23:56:48.978927 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:56:49.318997 sshd[5238]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:49.337821 systemd[1]: sshd@11-10.0.0.101:22-10.0.0.1:50238.service: Deactivated successfully. Jul 6 23:56:49.345408 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:56:49.351413 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:56:49.366511 systemd[1]: Started sshd@12-10.0.0.101:22-10.0.0.1:50246.service - OpenSSH per-connection server daemon (10.0.0.1:50246). Jul 6 23:56:49.367817 systemd-logind[1443]: Removed session 12. Jul 6 23:56:49.376290 kubelet[2500]: E0706 23:56:49.376227 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:49.420374 sshd[5250]: Accepted publickey for core from 10.0.0.1 port 50246 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:56:49.422256 sshd[5250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:49.427275 systemd-logind[1443]: New session 13 of user core. Jul 6 23:56:49.434869 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 6 23:56:49.681212 sshd[5250]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:49.687102 systemd[1]: sshd@12-10.0.0.101:22-10.0.0.1:50246.service: Deactivated successfully. Jul 6 23:56:49.689678 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:56:49.690528 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:56:49.692099 systemd-logind[1443]: Removed session 13. 
Jul 6 23:56:50.378904 kubelet[2500]: E0706 23:56:50.378861 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:52.761600 kubelet[2500]: I0706 23:56:52.760593 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5cf7666946-t2wmg" podStartSLOduration=34.281517686 podStartE2EDuration="41.760567987s" podCreationTimestamp="2025-07-06 23:56:11 +0000 UTC" firstStartedPulling="2025-07-06 23:56:41.120238345 +0000 UTC m=+44.394272393" lastFinishedPulling="2025-07-06 23:56:48.599288646 +0000 UTC m=+51.873322694" observedRunningTime="2025-07-06 23:56:49.57000334 +0000 UTC m=+52.844037398" watchObservedRunningTime="2025-07-06 23:56:52.760567987 +0000 UTC m=+56.034602035" Jul 6 23:56:53.076353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1228445716.mount: Deactivated successfully. Jul 6 23:56:53.459190 containerd[1456]: time="2025-07-06T23:56:53.459026790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:53.459925 containerd[1456]: time="2025-07-06T23:56:53.459876475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 6 23:56:53.461137 containerd[1456]: time="2025-07-06T23:56:53.461077911Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:53.463787 containerd[1456]: time="2025-07-06T23:56:53.463750673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:53.464497 containerd[1456]: time="2025-07-06T23:56:53.464441107Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 4.864287537s" Jul 6 23:56:53.464545 containerd[1456]: time="2025-07-06T23:56:53.464483778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 6 23:56:53.465614 containerd[1456]: time="2025-07-06T23:56:53.465589741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 6 23:56:53.467111 containerd[1456]: time="2025-07-06T23:56:53.467064716Z" level=info msg="CreateContainer within sandbox \"06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 6 23:56:53.484388 containerd[1456]: time="2025-07-06T23:56:53.484326561Z" level=info msg="CreateContainer within sandbox \"06c5ad2080d3d1e8c37b4ce531e64389783e8d53262fd87fa9abc0d57baf2e9f\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"68a1f4cbe33cf90ca57dc132c76cd923a95e93e5544740ed2528b67f55b7f72a\"" Jul 6 23:56:53.485014 containerd[1456]: time="2025-07-06T23:56:53.484971057Z" level=info msg="StartContainer for 
\"68a1f4cbe33cf90ca57dc132c76cd923a95e93e5544740ed2528b67f55b7f72a\"" Jul 6 23:56:53.514902 systemd[1]: Started cri-containerd-68a1f4cbe33cf90ca57dc132c76cd923a95e93e5544740ed2528b67f55b7f72a.scope - libcontainer container 68a1f4cbe33cf90ca57dc132c76cd923a95e93e5544740ed2528b67f55b7f72a. Jul 6 23:56:53.559963 containerd[1456]: time="2025-07-06T23:56:53.559919259Z" level=info msg="StartContainer for \"68a1f4cbe33cf90ca57dc132c76cd923a95e93e5544740ed2528b67f55b7f72a\" returns successfully" Jul 6 23:56:54.405662 kubelet[2500]: I0706 23:56:54.405599 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5489958cf9-jnrjz" podStartSLOduration=1.8968702450000001 podStartE2EDuration="15.405582108s" podCreationTimestamp="2025-07-06 23:56:39 +0000 UTC" firstStartedPulling="2025-07-06 23:56:39.956670071 +0000 UTC m=+43.230704120" lastFinishedPulling="2025-07-06 23:56:53.465381925 +0000 UTC m=+56.739415983" observedRunningTime="2025-07-06 23:56:54.405015893 +0000 UTC m=+57.679049941" watchObservedRunningTime="2025-07-06 23:56:54.405582108 +0000 UTC m=+57.679616156" Jul 6 23:56:54.705028 systemd[1]: Started sshd@13-10.0.0.101:22-10.0.0.1:33254.service - OpenSSH per-connection server daemon (10.0.0.1:33254). Jul 6 23:56:54.750963 sshd[5328]: Accepted publickey for core from 10.0.0.1 port 33254 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:56:54.753283 sshd[5328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:54.758341 systemd-logind[1443]: New session 14 of user core. Jul 6 23:56:54.767897 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 6 23:56:54.902682 sshd[5328]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:54.907128 systemd[1]: sshd@13-10.0.0.101:22-10.0.0.1:33254.service: Deactivated successfully. Jul 6 23:56:54.909546 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:56:54.910188 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit. Jul 6 23:56:54.911328 systemd-logind[1443]: Removed session 14. 
Jul 6 23:56:56.130164 containerd[1456]: time="2025-07-06T23:56:56.130078368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:56.140365 containerd[1456]: time="2025-07-06T23:56:56.140303856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 6 23:56:56.153591 containerd[1456]: time="2025-07-06T23:56:56.153528117Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:56.176264 containerd[1456]: time="2025-07-06T23:56:56.176223312Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:56.177182 containerd[1456]: time="2025-07-06T23:56:56.177148041Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.711526791s" Jul 6 23:56:56.177229 containerd[1456]: time="2025-07-06T23:56:56.177191343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 6 23:56:56.178791 containerd[1456]: time="2025-07-06T23:56:56.178747413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:56:56.179865 containerd[1456]: time="2025-07-06T23:56:56.179821676Z" level=info msg="CreateContainer within sandbox \"a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 6 23:56:56.433025 containerd[1456]: time="2025-07-06T23:56:56.432864847Z" level=info msg="CreateContainer within sandbox \"a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"26fccbba6f0eaa415fd6c35fbdf6cf126c087c03d4720f14e2047c618c03fd4f\"" Jul 6 23:56:56.433311 containerd[1456]: time="2025-07-06T23:56:56.433276631Z" level=info msg="StartContainer for \"26fccbba6f0eaa415fd6c35fbdf6cf126c087c03d4720f14e2047c618c03fd4f\"" Jul 6 23:56:56.475910 systemd[1]: Started cri-containerd-26fccbba6f0eaa415fd6c35fbdf6cf126c087c03d4720f14e2047c618c03fd4f.scope - libcontainer container 26fccbba6f0eaa415fd6c35fbdf6cf126c087c03d4720f14e2047c618c03fd4f. 
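[annotation] The "Pulled image ... in 2.711526791s" records are containerd's own wall-clock measurement of the pull, reported once the content fetch and unpack complete. Roughly the same measurement can be reproduced against a running containerd with its Go client; a hedged sketch, assuming the conventional socket path and the "k8s.io" namespace that CRI-managed images live in:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd instance kubelet talks to.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images are stored under the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ref := "ghcr.io/flatcar/calico/csi:v3.30.2"
	start := time.Now()
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s in %s\n", img.Name(), time.Since(start))
}
```

Timings will differ from the log's, of course; a warm content store (as in the apiserver pull above, which read only 77 bytes and finished in 566ms) mostly measures manifest resolution rather than layer download.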
Jul 6 23:56:56.547208 containerd[1456]: time="2025-07-06T23:56:56.547144480Z" level=info msg="StartContainer for \"26fccbba6f0eaa415fd6c35fbdf6cf126c087c03d4720f14e2047c618c03fd4f\" returns successfully" Jul 6 23:56:56.716605 containerd[1456]: time="2025-07-06T23:56:56.716435509Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:56.743435 containerd[1456]: time="2025-07-06T23:56:56.743346803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 6 23:56:56.745371 containerd[1456]: time="2025-07-06T23:56:56.745331227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 566.542436ms" Jul 6 23:56:56.745371 containerd[1456]: time="2025-07-06T23:56:56.745361384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 6 23:56:56.746217 containerd[1456]: time="2025-07-06T23:56:56.746170604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 6 23:56:56.747383 containerd[1456]: time="2025-07-06T23:56:56.747355457Z" level=info msg="CreateContainer within sandbox \"37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:56:56.816465 containerd[1456]: time="2025-07-06T23:56:56.816419192Z" level=info msg="StopPodSandbox for \"7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee\"" Jul 6 23:56:57.004052 containerd[1456]: 2025-07-06 23:56:56.968 [WARNING][5390] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--zk8l4-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"e80de9bd-3003-481f-b571-cacd2045a049", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d", Pod:"goldmane-768f4c5c69-zk8l4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia03d9c0bbf7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:57.004052 containerd[1456]: 2025-07-06 23:56:56.969 [INFO][5390] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" Jul 6 23:56:57.004052 containerd[1456]: 2025-07-06 23:56:56.969 [INFO][5390] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" iface="eth0" netns="" Jul 6 23:56:57.004052 containerd[1456]: 2025-07-06 23:56:56.969 [INFO][5390] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" Jul 6 23:56:57.004052 containerd[1456]: 2025-07-06 23:56:56.969 [INFO][5390] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" Jul 6 23:56:57.004052 containerd[1456]: 2025-07-06 23:56:56.990 [INFO][5404] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" HandleID="k8s-pod-network.7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" Workload="localhost-k8s-goldmane--768f4c5c69--zk8l4-eth0" Jul 6 23:56:57.004052 containerd[1456]: 2025-07-06 23:56:56.990 [INFO][5404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:57.004052 containerd[1456]: 2025-07-06 23:56:56.990 [INFO][5404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:57.004052 containerd[1456]: 2025-07-06 23:56:56.996 [WARNING][5404] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" HandleID="k8s-pod-network.7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" Workload="localhost-k8s-goldmane--768f4c5c69--zk8l4-eth0" Jul 6 23:56:57.004052 containerd[1456]: 2025-07-06 23:56:56.996 [INFO][5404] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" HandleID="k8s-pod-network.7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" Workload="localhost-k8s-goldmane--768f4c5c69--zk8l4-eth0" Jul 6 23:56:57.004052 containerd[1456]: 2025-07-06 23:56:56.998 [INFO][5404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:57.004052 containerd[1456]: 2025-07-06 23:56:57.001 [INFO][5390] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" Jul 6 23:56:57.004052 containerd[1456]: time="2025-07-06T23:56:57.004011910Z" level=info msg="TearDown network for sandbox \"7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee\" successfully" Jul 6 23:56:57.004052 containerd[1456]: time="2025-07-06T23:56:57.004042439Z" level=info msg="StopPodSandbox for \"7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee\" returns successfully" Jul 6 23:56:57.004861 containerd[1456]: time="2025-07-06T23:56:57.004823514Z" level=info msg="RemovePodSandbox for \"7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee\"" Jul 6 23:56:57.007131 containerd[1456]: time="2025-07-06T23:56:57.007098782Z" level=info msg="Forcibly stopping sandbox \"7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee\"" Jul 6 23:56:57.078323 containerd[1456]: 2025-07-06 23:56:57.039 [WARNING][5422] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--zk8l4-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"e80de9bd-3003-481f-b571-cacd2045a049", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d", Pod:"goldmane-768f4c5c69-zk8l4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia03d9c0bbf7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:57.078323 containerd[1456]: 2025-07-06 23:56:57.039 [INFO][5422] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" Jul 6 23:56:57.078323 containerd[1456]: 2025-07-06 23:56:57.039 [INFO][5422] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" iface="eth0" netns="" Jul 6 23:56:57.078323 containerd[1456]: 2025-07-06 23:56:57.039 [INFO][5422] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" Jul 6 23:56:57.078323 containerd[1456]: 2025-07-06 23:56:57.039 [INFO][5422] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" Jul 6 23:56:57.078323 containerd[1456]: 2025-07-06 23:56:57.064 [INFO][5432] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" HandleID="k8s-pod-network.7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" Workload="localhost-k8s-goldmane--768f4c5c69--zk8l4-eth0" Jul 6 23:56:57.078323 containerd[1456]: 2025-07-06 23:56:57.064 [INFO][5432] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:57.078323 containerd[1456]: 2025-07-06 23:56:57.064 [INFO][5432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:57.078323 containerd[1456]: 2025-07-06 23:56:57.070 [WARNING][5432] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" HandleID="k8s-pod-network.7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" Workload="localhost-k8s-goldmane--768f4c5c69--zk8l4-eth0" Jul 6 23:56:57.078323 containerd[1456]: 2025-07-06 23:56:57.070 [INFO][5432] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" HandleID="k8s-pod-network.7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" Workload="localhost-k8s-goldmane--768f4c5c69--zk8l4-eth0" Jul 6 23:56:57.078323 containerd[1456]: 2025-07-06 23:56:57.072 [INFO][5432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:57.078323 containerd[1456]: 2025-07-06 23:56:57.075 [INFO][5422] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee" Jul 6 23:56:57.078797 containerd[1456]: time="2025-07-06T23:56:57.078381745Z" level=info msg="TearDown network for sandbox \"7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee\" successfully" Jul 6 23:56:57.331620 containerd[1456]: time="2025-07-06T23:56:57.331552423Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:56:57.332107 containerd[1456]: time="2025-07-06T23:56:57.331657213Z" level=info msg="RemovePodSandbox \"7284efede6d38b2086f89f461b5cc5d35cbf61b81244e60c5ce66728f01628ee\" returns successfully" Jul 6 23:56:57.332107 containerd[1456]: time="2025-07-06T23:56:57.331701417Z" level=info msg="CreateContainer within sandbox \"37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1d2c09bedae819e64be65abfdb01da045e9e5f9708c2bcab8ab8b60d025457d6\"" Jul 6 23:56:57.332345 containerd[1456]: time="2025-07-06T23:56:57.332305446Z" level=info msg="StartContainer for \"1d2c09bedae819e64be65abfdb01da045e9e5f9708c2bcab8ab8b60d025457d6\"" Jul 6 23:56:57.332520 containerd[1456]: time="2025-07-06T23:56:57.332460019Z" level=info msg="StopPodSandbox for \"c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f\"" Jul 6 23:56:57.374087 systemd[1]: Started cri-containerd-1d2c09bedae819e64be65abfdb01da045e9e5f9708c2bcab8ab8b60d025457d6.scope - libcontainer container 1d2c09bedae819e64be65abfdb01da045e9e5f9708c2bcab8ab8b60d025457d6. Jul 6 23:56:57.413885 containerd[1456]: 2025-07-06 23:56:57.374 [WARNING][5454] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cf7666946--9ld94-eth0", GenerateName:"calico-apiserver-5cf7666946-", Namespace:"calico-apiserver", SelfLink:"", UID:"0911fce1-3f9f-4337-b200-a55b72bf320f", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cf7666946", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38", Pod:"calico-apiserver-5cf7666946-9ld94", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali24164d1aa95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:57.413885 containerd[1456]: 2025-07-06 23:56:57.375 [INFO][5454] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" Jul 6 23:56:57.413885 containerd[1456]: 2025-07-06 23:56:57.375 [INFO][5454] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" iface="eth0" netns="" Jul 6 23:56:57.413885 containerd[1456]: 2025-07-06 23:56:57.375 [INFO][5454] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" Jul 6 23:56:57.413885 containerd[1456]: 2025-07-06 23:56:57.375 [INFO][5454] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" Jul 6 23:56:57.413885 containerd[1456]: 2025-07-06 23:56:57.398 [INFO][5474] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" HandleID="k8s-pod-network.c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" Workload="localhost-k8s-calico--apiserver--5cf7666946--9ld94-eth0" Jul 6 23:56:57.413885 containerd[1456]: 2025-07-06 23:56:57.398 [INFO][5474] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:57.413885 containerd[1456]: 2025-07-06 23:56:57.398 [INFO][5474] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:57.413885 containerd[1456]: 2025-07-06 23:56:57.404 [WARNING][5474] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" HandleID="k8s-pod-network.c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" Workload="localhost-k8s-calico--apiserver--5cf7666946--9ld94-eth0" Jul 6 23:56:57.413885 containerd[1456]: 2025-07-06 23:56:57.404 [INFO][5474] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" HandleID="k8s-pod-network.c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" Workload="localhost-k8s-calico--apiserver--5cf7666946--9ld94-eth0" Jul 6 23:56:57.413885 containerd[1456]: 2025-07-06 23:56:57.406 [INFO][5474] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:57.413885 containerd[1456]: 2025-07-06 23:56:57.410 [INFO][5454] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" Jul 6 23:56:57.414328 containerd[1456]: time="2025-07-06T23:56:57.413907956Z" level=info msg="TearDown network for sandbox \"c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f\" successfully" Jul 6 23:56:57.414328 containerd[1456]: time="2025-07-06T23:56:57.413936290Z" level=info msg="StopPodSandbox for \"c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f\" returns successfully" Jul 6 23:56:57.414328 containerd[1456]: time="2025-07-06T23:56:57.414269915Z" level=info msg="RemovePodSandbox for \"c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f\"" Jul 6 23:56:57.414328 containerd[1456]: time="2025-07-06T23:56:57.414296846Z" level=info msg="Forcibly stopping sandbox \"c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f\"" Jul 6 23:56:57.425757 containerd[1456]: time="2025-07-06T23:56:57.425682321Z" level=info msg="StartContainer for \"1d2c09bedae819e64be65abfdb01da045e9e5f9708c2bcab8ab8b60d025457d6\" returns successfully" Jul 6 23:56:57.490865 containerd[1456]: 2025-07-06 23:56:57.453 [WARNING][5506] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cf7666946--9ld94-eth0", GenerateName:"calico-apiserver-5cf7666946-", Namespace:"calico-apiserver", SelfLink:"", UID:"0911fce1-3f9f-4337-b200-a55b72bf320f", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cf7666946", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37c6efd28b62edd64e08102445284d00562cf3cb92a5b8aebe49ae064469fa38", Pod:"calico-apiserver-5cf7666946-9ld94", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali24164d1aa95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:57.490865 containerd[1456]: 2025-07-06 23:56:57.453 [INFO][5506] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" Jul 6 23:56:57.490865 containerd[1456]: 2025-07-06 23:56:57.453 [INFO][5506] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" iface="eth0" netns="" Jul 6 23:56:57.490865 containerd[1456]: 2025-07-06 23:56:57.453 [INFO][5506] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" Jul 6 23:56:57.490865 containerd[1456]: 2025-07-06 23:56:57.453 [INFO][5506] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" Jul 6 23:56:57.490865 containerd[1456]: 2025-07-06 23:56:57.477 [INFO][5520] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" HandleID="k8s-pod-network.c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" Workload="localhost-k8s-calico--apiserver--5cf7666946--9ld94-eth0" Jul 6 23:56:57.490865 containerd[1456]: 2025-07-06 23:56:57.477 [INFO][5520] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:57.490865 containerd[1456]: 2025-07-06 23:56:57.477 [INFO][5520] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:57.490865 containerd[1456]: 2025-07-06 23:56:57.483 [WARNING][5520] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" HandleID="k8s-pod-network.c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" Workload="localhost-k8s-calico--apiserver--5cf7666946--9ld94-eth0" Jul 6 23:56:57.490865 containerd[1456]: 2025-07-06 23:56:57.483 [INFO][5520] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" HandleID="k8s-pod-network.c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" Workload="localhost-k8s-calico--apiserver--5cf7666946--9ld94-eth0" Jul 6 23:56:57.490865 containerd[1456]: 2025-07-06 23:56:57.485 [INFO][5520] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:57.490865 containerd[1456]: 2025-07-06 23:56:57.487 [INFO][5506] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f" Jul 6 23:56:57.491390 containerd[1456]: time="2025-07-06T23:56:57.490897011Z" level=info msg="TearDown network for sandbox \"c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f\" successfully" Jul 6 23:56:57.506807 containerd[1456]: time="2025-07-06T23:56:57.506753141Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:56:57.506903 containerd[1456]: time="2025-07-06T23:56:57.506828844Z" level=info msg="RemovePodSandbox \"c89473dd4a23810bfa8073d3a6aac967f2f3a828d2d6beb88869d1feb1f9a12f\" returns successfully" Jul 6 23:56:57.507420 containerd[1456]: time="2025-07-06T23:56:57.507396213Z" level=info msg="StopPodSandbox for \"36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb\"" Jul 6 23:56:57.587029 containerd[1456]: 2025-07-06 23:56:57.547 [WARNING][5542] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8jzdf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"573e83ed-8e01-4333-9a22-d115fe0e7655", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f", Pod:"csi-node-driver-8jzdf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a29a2145d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:57.587029 containerd[1456]: 2025-07-06 23:56:57.548 [INFO][5542] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" Jul 6 23:56:57.587029 containerd[1456]: 2025-07-06 23:56:57.548 [INFO][5542] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" iface="eth0" netns="" Jul 6 23:56:57.587029 containerd[1456]: 2025-07-06 23:56:57.548 [INFO][5542] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" Jul 6 23:56:57.587029 containerd[1456]: 2025-07-06 23:56:57.548 [INFO][5542] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" Jul 6 23:56:57.587029 containerd[1456]: 2025-07-06 23:56:57.572 [INFO][5551] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" HandleID="k8s-pod-network.36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" Workload="localhost-k8s-csi--node--driver--8jzdf-eth0" Jul 6 23:56:57.587029 containerd[1456]: 2025-07-06 23:56:57.572 [INFO][5551] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:57.587029 containerd[1456]: 2025-07-06 23:56:57.572 [INFO][5551] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:57.587029 containerd[1456]: 2025-07-06 23:56:57.578 [WARNING][5551] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" HandleID="k8s-pod-network.36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" Workload="localhost-k8s-csi--node--driver--8jzdf-eth0" Jul 6 23:56:57.587029 containerd[1456]: 2025-07-06 23:56:57.578 [INFO][5551] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" HandleID="k8s-pod-network.36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" Workload="localhost-k8s-csi--node--driver--8jzdf-eth0" Jul 6 23:56:57.587029 containerd[1456]: 2025-07-06 23:56:57.580 [INFO][5551] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:57.587029 containerd[1456]: 2025-07-06 23:56:57.583 [INFO][5542] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" Jul 6 23:56:57.587029 containerd[1456]: time="2025-07-06T23:56:57.586985074Z" level=info msg="TearDown network for sandbox \"36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb\" successfully" Jul 6 23:56:57.587029 containerd[1456]: time="2025-07-06T23:56:57.587012296Z" level=info msg="StopPodSandbox for \"36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb\" returns successfully" Jul 6 23:56:57.588421 containerd[1456]: time="2025-07-06T23:56:57.588273325Z" level=info msg="RemovePodSandbox for \"36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb\"" Jul 6 23:56:57.588421 containerd[1456]: time="2025-07-06T23:56:57.588307350Z" level=info msg="Forcibly stopping sandbox \"36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb\"" Jul 6 23:56:57.664012 containerd[1456]: 2025-07-06 23:56:57.624 [WARNING][5568] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8jzdf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"573e83ed-8e01-4333-9a22-d115fe0e7655", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f", Pod:"csi-node-driver-8jzdf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a29a2145d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:57.664012 containerd[1456]: 2025-07-06 23:56:57.625 [INFO][5568] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" Jul 6 23:56:57.664012 containerd[1456]: 2025-07-06 23:56:57.625 [INFO][5568] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" iface="eth0" netns="" Jul 6 23:56:57.664012 containerd[1456]: 2025-07-06 23:56:57.625 [INFO][5568] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" Jul 6 23:56:57.664012 containerd[1456]: 2025-07-06 23:56:57.625 [INFO][5568] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" Jul 6 23:56:57.664012 containerd[1456]: 2025-07-06 23:56:57.649 [INFO][5577] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" HandleID="k8s-pod-network.36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" Workload="localhost-k8s-csi--node--driver--8jzdf-eth0" Jul 6 23:56:57.664012 containerd[1456]: 2025-07-06 23:56:57.649 [INFO][5577] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:57.664012 containerd[1456]: 2025-07-06 23:56:57.649 [INFO][5577] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:57.664012 containerd[1456]: 2025-07-06 23:56:57.656 [WARNING][5577] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" HandleID="k8s-pod-network.36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" Workload="localhost-k8s-csi--node--driver--8jzdf-eth0" Jul 6 23:56:57.664012 containerd[1456]: 2025-07-06 23:56:57.656 [INFO][5577] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" HandleID="k8s-pod-network.36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" Workload="localhost-k8s-csi--node--driver--8jzdf-eth0" Jul 6 23:56:57.664012 containerd[1456]: 2025-07-06 23:56:57.657 [INFO][5577] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:57.664012 containerd[1456]: 2025-07-06 23:56:57.660 [INFO][5568] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb" Jul 6 23:56:57.664463 containerd[1456]: time="2025-07-06T23:56:57.664055795Z" level=info msg="TearDown network for sandbox \"36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb\" successfully" Jul 6 23:56:57.683521 containerd[1456]: time="2025-07-06T23:56:57.683457438Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:56:57.683592 containerd[1456]: time="2025-07-06T23:56:57.683565603Z" level=info msg="RemovePodSandbox \"36482637f18be9121edc3f92494e48b0e792f2009c614fc95d36d3e7c89499fb\" returns successfully" Jul 6 23:56:57.684265 containerd[1456]: time="2025-07-06T23:56:57.684210340Z" level=info msg="StopPodSandbox for \"51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3\"" Jul 6 23:56:57.759224 containerd[1456]: 2025-07-06 23:56:57.718 [WARNING][5595] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cf7666946--t2wmg-eth0", GenerateName:"calico-apiserver-5cf7666946-", Namespace:"calico-apiserver", SelfLink:"", UID:"0aebe8d7-c736-40fd-a06c-2169cc2c7e1f", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cf7666946", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9", Pod:"calico-apiserver-5cf7666946-t2wmg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c05b9f21fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:57.759224 containerd[1456]: 2025-07-06 23:56:57.719 [INFO][5595] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" Jul 6 23:56:57.759224 containerd[1456]: 2025-07-06 23:56:57.719 [INFO][5595] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" iface="eth0" netns="" Jul 6 23:56:57.759224 containerd[1456]: 2025-07-06 23:56:57.719 [INFO][5595] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" Jul 6 23:56:57.759224 containerd[1456]: 2025-07-06 23:56:57.719 [INFO][5595] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" Jul 6 23:56:57.759224 containerd[1456]: 2025-07-06 23:56:57.744 [INFO][5603] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" HandleID="k8s-pod-network.51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" Workload="localhost-k8s-calico--apiserver--5cf7666946--t2wmg-eth0" Jul 6 23:56:57.759224 containerd[1456]: 2025-07-06 23:56:57.744 [INFO][5603] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:57.759224 containerd[1456]: 2025-07-06 23:56:57.744 [INFO][5603] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:57.759224 containerd[1456]: 2025-07-06 23:56:57.751 [WARNING][5603] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" HandleID="k8s-pod-network.51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" Workload="localhost-k8s-calico--apiserver--5cf7666946--t2wmg-eth0" Jul 6 23:56:57.759224 containerd[1456]: 2025-07-06 23:56:57.751 [INFO][5603] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" HandleID="k8s-pod-network.51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" Workload="localhost-k8s-calico--apiserver--5cf7666946--t2wmg-eth0" Jul 6 23:56:57.759224 containerd[1456]: 2025-07-06 23:56:57.753 [INFO][5603] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:57.759224 containerd[1456]: 2025-07-06 23:56:57.756 [INFO][5595] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" Jul 6 23:56:57.759731 containerd[1456]: time="2025-07-06T23:56:57.759266770Z" level=info msg="TearDown network for sandbox \"51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3\" successfully" Jul 6 23:56:57.759731 containerd[1456]: time="2025-07-06T23:56:57.759294011Z" level=info msg="StopPodSandbox for \"51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3\" returns successfully" Jul 6 23:56:57.760001 containerd[1456]: time="2025-07-06T23:56:57.759967121Z" level=info msg="RemovePodSandbox for \"51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3\"" Jul 6 23:56:57.760039 containerd[1456]: time="2025-07-06T23:56:57.760015594Z" level=info msg="Forcibly stopping sandbox \"51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3\"" Jul 6 23:56:57.836353 containerd[1456]: 2025-07-06 23:56:57.793 [WARNING][5621] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5cf7666946--t2wmg-eth0", GenerateName:"calico-apiserver-5cf7666946-", Namespace:"calico-apiserver", SelfLink:"", UID:"0aebe8d7-c736-40fd-a06c-2169cc2c7e1f", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cf7666946", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f59040837d24ac9c068185c7ba88b88d79def198024b9a9f151fb75aa9efc1b9", Pod:"calico-apiserver-5cf7666946-t2wmg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c05b9f21fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:57.836353 containerd[1456]: 2025-07-06 23:56:57.794 [INFO][5621] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" Jul 6 23:56:57.836353 containerd[1456]: 2025-07-06 23:56:57.794 [INFO][5621] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" iface="eth0" netns="" Jul 6 23:56:57.836353 containerd[1456]: 2025-07-06 23:56:57.794 [INFO][5621] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" Jul 6 23:56:57.836353 containerd[1456]: 2025-07-06 23:56:57.794 [INFO][5621] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" Jul 6 23:56:57.836353 containerd[1456]: 2025-07-06 23:56:57.816 [INFO][5629] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" HandleID="k8s-pod-network.51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" Workload="localhost-k8s-calico--apiserver--5cf7666946--t2wmg-eth0" Jul 6 23:56:57.836353 containerd[1456]: 2025-07-06 23:56:57.817 [INFO][5629] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:57.836353 containerd[1456]: 2025-07-06 23:56:57.817 [INFO][5629] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:57.836353 containerd[1456]: 2025-07-06 23:56:57.824 [WARNING][5629] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" HandleID="k8s-pod-network.51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" Workload="localhost-k8s-calico--apiserver--5cf7666946--t2wmg-eth0" Jul 6 23:56:57.836353 containerd[1456]: 2025-07-06 23:56:57.824 [INFO][5629] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" HandleID="k8s-pod-network.51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" Workload="localhost-k8s-calico--apiserver--5cf7666946--t2wmg-eth0" Jul 6 23:56:57.836353 containerd[1456]: 2025-07-06 23:56:57.825 [INFO][5629] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:57.836353 containerd[1456]: 2025-07-06 23:56:57.833 [INFO][5621] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3" Jul 6 23:56:57.837832 containerd[1456]: time="2025-07-06T23:56:57.836370724Z" level=info msg="TearDown network for sandbox \"51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3\" successfully" Jul 6 23:56:57.841094 containerd[1456]: time="2025-07-06T23:56:57.841032681Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:56:57.841251 containerd[1456]: time="2025-07-06T23:56:57.841150425Z" level=info msg="RemovePodSandbox \"51d8f1936d8d8f41e471843b99529a27156e7ac44545e03434e74e10f9aedbb3\" returns successfully" Jul 6 23:56:57.842055 containerd[1456]: time="2025-07-06T23:56:57.842031892Z" level=info msg="StopPodSandbox for \"a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d\"" Jul 6 23:56:57.928273 containerd[1456]: 2025-07-06 23:56:57.884 [WARNING][5647] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b86b4658f--c279f-eth0", GenerateName:"calico-kube-controllers-5b86b4658f-", Namespace:"calico-system", SelfLink:"", UID:"4d8809a0-b8bb-4f42-8da7-d29046d2f152", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b86b4658f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88", Pod:"calico-kube-controllers-5b86b4658f-c279f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5aab3ab07c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:57.928273 containerd[1456]: 2025-07-06 23:56:57.885 [INFO][5647] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" Jul 6 23:56:57.928273 containerd[1456]: 2025-07-06 23:56:57.885 [INFO][5647] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" iface="eth0" netns="" Jul 6 23:56:57.928273 containerd[1456]: 2025-07-06 23:56:57.885 [INFO][5647] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" Jul 6 23:56:57.928273 containerd[1456]: 2025-07-06 23:56:57.885 [INFO][5647] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" Jul 6 23:56:57.928273 containerd[1456]: 2025-07-06 23:56:57.909 [INFO][5656] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" HandleID="k8s-pod-network.a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" Workload="localhost-k8s-calico--kube--controllers--5b86b4658f--c279f-eth0" Jul 6 23:56:57.928273 containerd[1456]: 2025-07-06 23:56:57.910 [INFO][5656] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:57.928273 containerd[1456]: 2025-07-06 23:56:57.910 [INFO][5656] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:57.928273 containerd[1456]: 2025-07-06 23:56:57.918 [WARNING][5656] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" HandleID="k8s-pod-network.a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" Workload="localhost-k8s-calico--kube--controllers--5b86b4658f--c279f-eth0" Jul 6 23:56:57.928273 containerd[1456]: 2025-07-06 23:56:57.918 [INFO][5656] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" HandleID="k8s-pod-network.a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" Workload="localhost-k8s-calico--kube--controllers--5b86b4658f--c279f-eth0" Jul 6 23:56:57.928273 containerd[1456]: 2025-07-06 23:56:57.921 [INFO][5656] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:57.928273 containerd[1456]: 2025-07-06 23:56:57.925 [INFO][5647] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" Jul 6 23:56:57.928273 containerd[1456]: time="2025-07-06T23:56:57.927996148Z" level=info msg="TearDown network for sandbox \"a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d\" successfully" Jul 6 23:56:57.928273 containerd[1456]: time="2025-07-06T23:56:57.928023230Z" level=info msg="StopPodSandbox for \"a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d\" returns successfully" Jul 6 23:56:57.929380 containerd[1456]: time="2025-07-06T23:56:57.928595338Z" level=info msg="RemovePodSandbox for \"a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d\"" Jul 6 23:56:57.929380 containerd[1456]: time="2025-07-06T23:56:57.928622871Z" level=info msg="Forcibly stopping sandbox \"a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d\"" Jul 6 23:56:58.008678 containerd[1456]: 2025-07-06 23:56:57.968 [WARNING][5673] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5b86b4658f--c279f-eth0", GenerateName:"calico-kube-controllers-5b86b4658f-", Namespace:"calico-system", SelfLink:"", UID:"4d8809a0-b8bb-4f42-8da7-d29046d2f152", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b86b4658f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82dcf02e86c0cb83a691ea9393fc92a43e67d58778685d21f35236f77ec55f88", Pod:"calico-kube-controllers-5b86b4658f-c279f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5aab3ab07c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:58.008678 containerd[1456]: 2025-07-06 23:56:57.968 [INFO][5673] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" Jul 6 23:56:58.008678 containerd[1456]: 2025-07-06 23:56:57.968 [INFO][5673] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" iface="eth0" netns="" Jul 6 23:56:58.008678 containerd[1456]: 2025-07-06 23:56:57.968 [INFO][5673] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" Jul 6 23:56:58.008678 containerd[1456]: 2025-07-06 23:56:57.968 [INFO][5673] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" Jul 6 23:56:58.008678 containerd[1456]: 2025-07-06 23:56:57.989 [INFO][5682] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" HandleID="k8s-pod-network.a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" Workload="localhost-k8s-calico--kube--controllers--5b86b4658f--c279f-eth0" Jul 6 23:56:58.008678 containerd[1456]: 2025-07-06 23:56:57.990 [INFO][5682] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:58.008678 containerd[1456]: 2025-07-06 23:56:57.990 [INFO][5682] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:58.008678 containerd[1456]: 2025-07-06 23:56:57.996 [WARNING][5682] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" HandleID="k8s-pod-network.a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" Workload="localhost-k8s-calico--kube--controllers--5b86b4658f--c279f-eth0" Jul 6 23:56:58.008678 containerd[1456]: 2025-07-06 23:56:57.996 [INFO][5682] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" HandleID="k8s-pod-network.a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" Workload="localhost-k8s-calico--kube--controllers--5b86b4658f--c279f-eth0" Jul 6 23:56:58.008678 containerd[1456]: 2025-07-06 23:56:57.999 [INFO][5682] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:58.008678 containerd[1456]: 2025-07-06 23:56:58.004 [INFO][5673] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d" Jul 6 23:56:58.009343 containerd[1456]: time="2025-07-06T23:56:58.008762479Z" level=info msg="TearDown network for sandbox \"a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d\" successfully" Jul 6 23:56:58.053640 containerd[1456]: time="2025-07-06T23:56:58.053575579Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:56:58.053798 containerd[1456]: time="2025-07-06T23:56:58.053659659Z" level=info msg="RemovePodSandbox \"a2637fb8843c5d23ba6743be9d165bf1710eb1416be0895743178bd9b9a2282d\" returns successfully" Jul 6 23:56:58.054378 containerd[1456]: time="2025-07-06T23:56:58.054231367Z" level=info msg="StopPodSandbox for \"354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6\"" Jul 6 23:56:58.133124 containerd[1456]: 2025-07-06 23:56:58.094 [WARNING][5699] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4rlpb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"580e7206-665f-4270-aab2-39eaf9dc4990", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0", Pod:"coredns-668d6bf9bc-4rlpb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibba0bcb807b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:58.133124 containerd[1456]: 2025-07-06 23:56:58.096 [INFO][5699] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" Jul 6 23:56:58.133124 containerd[1456]: 2025-07-06 23:56:58.096 [INFO][5699] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" iface="eth0" netns="" Jul 6 23:56:58.133124 containerd[1456]: 2025-07-06 23:56:58.096 [INFO][5699] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" Jul 6 23:56:58.133124 containerd[1456]: 2025-07-06 23:56:58.096 [INFO][5699] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" Jul 6 23:56:58.133124 containerd[1456]: 2025-07-06 23:56:58.118 [INFO][5709] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" HandleID="k8s-pod-network.354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" Workload="localhost-k8s-coredns--668d6bf9bc--4rlpb-eth0" Jul 6 23:56:58.133124 containerd[1456]: 2025-07-06 23:56:58.118 [INFO][5709] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:58.133124 containerd[1456]: 2025-07-06 23:56:58.119 [INFO][5709] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:56:58.133124 containerd[1456]: 2025-07-06 23:56:58.124 [WARNING][5709] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" HandleID="k8s-pod-network.354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" Workload="localhost-k8s-coredns--668d6bf9bc--4rlpb-eth0" Jul 6 23:56:58.133124 containerd[1456]: 2025-07-06 23:56:58.124 [INFO][5709] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" HandleID="k8s-pod-network.354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" Workload="localhost-k8s-coredns--668d6bf9bc--4rlpb-eth0" Jul 6 23:56:58.133124 containerd[1456]: 2025-07-06 23:56:58.126 [INFO][5709] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:58.133124 containerd[1456]: 2025-07-06 23:56:58.129 [INFO][5699] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" Jul 6 23:56:58.133124 containerd[1456]: time="2025-07-06T23:56:58.133065890Z" level=info msg="TearDown network for sandbox \"354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6\" successfully" Jul 6 23:56:58.133124 containerd[1456]: time="2025-07-06T23:56:58.133091279Z" level=info msg="StopPodSandbox for \"354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6\" returns successfully" Jul 6 23:56:58.134025 containerd[1456]: time="2025-07-06T23:56:58.133999025Z" level=info msg="RemovePodSandbox for \"354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6\"" Jul 6 23:56:58.134067 containerd[1456]: time="2025-07-06T23:56:58.134040915Z" level=info msg="Forcibly stopping sandbox \"354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6\"" Jul 6 23:56:58.240333 containerd[1456]: 2025-07-06 23:56:58.171 [WARNING][5727] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4rlpb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"580e7206-665f-4270-aab2-39eaf9dc4990", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5303f1771277437eb1bb6d6cfcc2f6fb2c5ee29ab2ef705d96119c0b0006a5a0", Pod:"coredns-668d6bf9bc-4rlpb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibba0bcb807b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:58.240333 containerd[1456]: 2025-07-06 23:56:58.171 [INFO][5727] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" Jul 6 23:56:58.240333 containerd[1456]: 2025-07-06 23:56:58.171 [INFO][5727] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" iface="eth0" netns="" Jul 6 23:56:58.240333 containerd[1456]: 2025-07-06 23:56:58.171 [INFO][5727] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" Jul 6 23:56:58.240333 containerd[1456]: 2025-07-06 23:56:58.171 [INFO][5727] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" Jul 6 23:56:58.240333 containerd[1456]: 2025-07-06 23:56:58.218 [INFO][5736] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" HandleID="k8s-pod-network.354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" Workload="localhost-k8s-coredns--668d6bf9bc--4rlpb-eth0" Jul 6 23:56:58.240333 containerd[1456]: 2025-07-06 23:56:58.219 [INFO][5736] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:58.240333 containerd[1456]: 2025-07-06 23:56:58.219 [INFO][5736] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:56:58.240333 containerd[1456]: 2025-07-06 23:56:58.226 [WARNING][5736] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" HandleID="k8s-pod-network.354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" Workload="localhost-k8s-coredns--668d6bf9bc--4rlpb-eth0" Jul 6 23:56:58.240333 containerd[1456]: 2025-07-06 23:56:58.226 [INFO][5736] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" HandleID="k8s-pod-network.354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" Workload="localhost-k8s-coredns--668d6bf9bc--4rlpb-eth0" Jul 6 23:56:58.240333 containerd[1456]: 2025-07-06 23:56:58.229 [INFO][5736] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:58.240333 containerd[1456]: 2025-07-06 23:56:58.236 [INFO][5727] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6" Jul 6 23:56:58.240910 containerd[1456]: time="2025-07-06T23:56:58.240389594Z" level=info msg="TearDown network for sandbox \"354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6\" successfully" Jul 6 23:56:58.247047 containerd[1456]: time="2025-07-06T23:56:58.245248486Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:56:58.247047 containerd[1456]: time="2025-07-06T23:56:58.245313620Z" level=info msg="RemovePodSandbox \"354aa8681413f38437a0469d9b2aaff9a6f520c68f0c4192d4ae6beccc29d6a6\" returns successfully" Jul 6 23:56:58.247047 containerd[1456]: time="2025-07-06T23:56:58.245963215Z" level=info msg="StopPodSandbox for \"b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967\"" Jul 6 23:56:58.321657 containerd[1456]: 2025-07-06 23:56:58.283 [WARNING][5753] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" WorkloadEndpoint="localhost-k8s-whisker--8666bcbf65--nclb4-eth0" Jul 6 23:56:58.321657 containerd[1456]: 2025-07-06 23:56:58.284 [INFO][5753] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" Jul 6 23:56:58.321657 containerd[1456]: 2025-07-06 23:56:58.284 [INFO][5753] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" iface="eth0" netns="" Jul 6 23:56:58.321657 containerd[1456]: 2025-07-06 23:56:58.284 [INFO][5753] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" Jul 6 23:56:58.321657 containerd[1456]: 2025-07-06 23:56:58.284 [INFO][5753] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" Jul 6 23:56:58.321657 containerd[1456]: 2025-07-06 23:56:58.308 [INFO][5762] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" HandleID="k8s-pod-network.b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" Workload="localhost-k8s-whisker--8666bcbf65--nclb4-eth0" Jul 6 23:56:58.321657 containerd[1456]: 2025-07-06 23:56:58.309 [INFO][5762] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:58.321657 containerd[1456]: 2025-07-06 23:56:58.309 [INFO][5762] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:58.321657 containerd[1456]: 2025-07-06 23:56:58.314 [WARNING][5762] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" HandleID="k8s-pod-network.b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" Workload="localhost-k8s-whisker--8666bcbf65--nclb4-eth0" Jul 6 23:56:58.321657 containerd[1456]: 2025-07-06 23:56:58.314 [INFO][5762] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" HandleID="k8s-pod-network.b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" Workload="localhost-k8s-whisker--8666bcbf65--nclb4-eth0" Jul 6 23:56:58.321657 containerd[1456]: 2025-07-06 23:56:58.315 [INFO][5762] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:58.321657 containerd[1456]: 2025-07-06 23:56:58.318 [INFO][5753] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" Jul 6 23:56:58.322110 containerd[1456]: time="2025-07-06T23:56:58.321686691Z" level=info msg="TearDown network for sandbox \"b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967\" successfully" Jul 6 23:56:58.322110 containerd[1456]: time="2025-07-06T23:56:58.321742558Z" level=info msg="StopPodSandbox for \"b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967\" returns successfully" Jul 6 23:56:58.322331 containerd[1456]: time="2025-07-06T23:56:58.322307282Z" level=info msg="RemovePodSandbox for \"b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967\"" Jul 6 23:56:58.322389 containerd[1456]: time="2025-07-06T23:56:58.322335164Z" level=info msg="Forcibly stopping sandbox \"b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967\"" Jul 6 23:56:58.407438 containerd[1456]: 2025-07-06 23:56:58.355 [WARNING][5779] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" WorkloadEndpoint="localhost-k8s-whisker--8666bcbf65--nclb4-eth0" Jul 6 23:56:58.407438 containerd[1456]: 2025-07-06 23:56:58.355 [INFO][5779] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" Jul 6 23:56:58.407438 containerd[1456]: 2025-07-06 23:56:58.355 [INFO][5779] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" iface="eth0" netns="" Jul 6 23:56:58.407438 containerd[1456]: 2025-07-06 23:56:58.355 [INFO][5779] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" Jul 6 23:56:58.407438 containerd[1456]: 2025-07-06 23:56:58.355 [INFO][5779] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" Jul 6 23:56:58.407438 containerd[1456]: 2025-07-06 23:56:58.385 [INFO][5788] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" HandleID="k8s-pod-network.b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" Workload="localhost-k8s-whisker--8666bcbf65--nclb4-eth0" Jul 6 23:56:58.407438 containerd[1456]: 2025-07-06 23:56:58.385 [INFO][5788] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:58.407438 containerd[1456]: 2025-07-06 23:56:58.385 [INFO][5788] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:56:58.407438 containerd[1456]: 2025-07-06 23:56:58.394 [WARNING][5788] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" HandleID="k8s-pod-network.b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" Workload="localhost-k8s-whisker--8666bcbf65--nclb4-eth0" Jul 6 23:56:58.407438 containerd[1456]: 2025-07-06 23:56:58.394 [INFO][5788] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" HandleID="k8s-pod-network.b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" Workload="localhost-k8s-whisker--8666bcbf65--nclb4-eth0" Jul 6 23:56:58.407438 containerd[1456]: 2025-07-06 23:56:58.396 [INFO][5788] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:58.407438 containerd[1456]: 2025-07-06 23:56:58.399 [INFO][5779] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967" Jul 6 23:56:58.407438 containerd[1456]: time="2025-07-06T23:56:58.407380942Z" level=info msg="TearDown network for sandbox \"b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967\" successfully" Jul 6 23:56:58.418505 containerd[1456]: time="2025-07-06T23:56:58.418411403Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:56:58.418505 containerd[1456]: time="2025-07-06T23:56:58.418508348Z" level=info msg="RemovePodSandbox \"b140c358f572357657e0ad131e9a49d0dc35cb7c6e8b7147e56f84c408e11967\" returns successfully" Jul 6 23:56:58.418971 containerd[1456]: time="2025-07-06T23:56:58.418936813Z" level=info msg="StopPodSandbox for \"ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3\"" Jul 6 23:56:58.485520 kubelet[2500]: I0706 23:56:58.485320 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5cf7666946-9ld94" podStartSLOduration=36.179181894 podStartE2EDuration="47.484692799s" podCreationTimestamp="2025-07-06 23:56:11 +0000 UTC" firstStartedPulling="2025-07-06 23:56:45.440575739 +0000 UTC m=+48.714609787" lastFinishedPulling="2025-07-06 23:56:56.746086644 +0000 UTC m=+60.020120692" observedRunningTime="2025-07-06 23:56:58.444436146 +0000 UTC m=+61.718470204" watchObservedRunningTime="2025-07-06 23:56:58.484692799 +0000 UTC m=+61.758726847" Jul 6 23:56:58.529074 containerd[1456]: 2025-07-06 23:56:58.474 [WARNING][5806] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--29zf2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dd93b232-dbe2-459a-a97f-dd73be2c49bc", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2", Pod:"coredns-668d6bf9bc-29zf2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali82bb0618dbd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:58.529074 containerd[1456]: 2025-07-06 23:56:58.475 [INFO][5806] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" Jul 6 23:56:58.529074 containerd[1456]: 2025-07-06 23:56:58.475 [INFO][5806] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" iface="eth0" netns="" Jul 6 23:56:58.529074 containerd[1456]: 2025-07-06 23:56:58.475 [INFO][5806] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" Jul 6 23:56:58.529074 containerd[1456]: 2025-07-06 23:56:58.475 [INFO][5806] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" Jul 6 23:56:58.529074 containerd[1456]: 2025-07-06 23:56:58.515 [INFO][5819] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" HandleID="k8s-pod-network.ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" Workload="localhost-k8s-coredns--668d6bf9bc--29zf2-eth0" Jul 6 23:56:58.529074 containerd[1456]: 2025-07-06 23:56:58.515 [INFO][5819] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:58.529074 containerd[1456]: 2025-07-06 23:56:58.515 [INFO][5819] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:56:58.529074 containerd[1456]: 2025-07-06 23:56:58.521 [WARNING][5819] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" HandleID="k8s-pod-network.ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" Workload="localhost-k8s-coredns--668d6bf9bc--29zf2-eth0" Jul 6 23:56:58.529074 containerd[1456]: 2025-07-06 23:56:58.521 [INFO][5819] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" HandleID="k8s-pod-network.ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" Workload="localhost-k8s-coredns--668d6bf9bc--29zf2-eth0" Jul 6 23:56:58.529074 containerd[1456]: 2025-07-06 23:56:58.522 [INFO][5819] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:58.529074 containerd[1456]: 2025-07-06 23:56:58.526 [INFO][5806] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" Jul 6 23:56:58.529597 containerd[1456]: time="2025-07-06T23:56:58.529121579Z" level=info msg="TearDown network for sandbox \"ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3\" successfully" Jul 6 23:56:58.529597 containerd[1456]: time="2025-07-06T23:56:58.529149481Z" level=info msg="StopPodSandbox for \"ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3\" returns successfully" Jul 6 23:56:58.529908 containerd[1456]: time="2025-07-06T23:56:58.529860794Z" level=info msg="RemovePodSandbox for \"ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3\"" Jul 6 23:56:58.529908 containerd[1456]: time="2025-07-06T23:56:58.529905390Z" level=info msg="Forcibly stopping sandbox \"ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3\"" Jul 6 23:56:58.613696 containerd[1456]: 2025-07-06 23:56:58.567 [WARNING][5839] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--29zf2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dd93b232-dbe2-459a-a97f-dd73be2c49bc", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 56, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3edd0c8c44dfa83ac1bb8932c750510dddd1a5f62eac64509a2ad95dd25b6d2", Pod:"coredns-668d6bf9bc-29zf2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali82bb0618dbd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:56:58.613696 containerd[1456]: 2025-07-06 23:56:58.567 [INFO][5839] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" Jul 6 23:56:58.613696 containerd[1456]: 2025-07-06 23:56:58.567 [INFO][5839] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" iface="eth0" netns="" Jul 6 23:56:58.613696 containerd[1456]: 2025-07-06 23:56:58.567 [INFO][5839] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" Jul 6 23:56:58.613696 containerd[1456]: 2025-07-06 23:56:58.567 [INFO][5839] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" Jul 6 23:56:58.613696 containerd[1456]: 2025-07-06 23:56:58.598 [INFO][5848] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" HandleID="k8s-pod-network.ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" Workload="localhost-k8s-coredns--668d6bf9bc--29zf2-eth0" Jul 6 23:56:58.613696 containerd[1456]: 2025-07-06 23:56:58.598 [INFO][5848] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:56:58.613696 containerd[1456]: 2025-07-06 23:56:58.598 [INFO][5848] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:56:58.613696 containerd[1456]: 2025-07-06 23:56:58.604 [WARNING][5848] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" HandleID="k8s-pod-network.ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" Workload="localhost-k8s-coredns--668d6bf9bc--29zf2-eth0" Jul 6 23:56:58.613696 containerd[1456]: 2025-07-06 23:56:58.604 [INFO][5848] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" HandleID="k8s-pod-network.ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" Workload="localhost-k8s-coredns--668d6bf9bc--29zf2-eth0" Jul 6 23:56:58.613696 containerd[1456]: 2025-07-06 23:56:58.606 [INFO][5848] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:56:58.613696 containerd[1456]: 2025-07-06 23:56:58.610 [INFO][5839] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3" Jul 6 23:56:58.615019 containerd[1456]: time="2025-07-06T23:56:58.613743410Z" level=info msg="TearDown network for sandbox \"ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3\" successfully" Jul 6 23:56:58.640731 containerd[1456]: time="2025-07-06T23:56:58.640681851Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:56:58.640813 containerd[1456]: time="2025-07-06T23:56:58.640752635Z" level=info msg="RemovePodSandbox \"ec8a6645ba2de5c7c157a76e4225a965eddb2ebb499d6f18cfbeb7898faaf0b3\" returns successfully" Jul 6 23:56:59.929052 systemd[1]: Started sshd@14-10.0.0.101:22-10.0.0.1:51906.service - OpenSSH per-connection server daemon (10.0.0.1:51906). Jul 6 23:56:59.977820 sshd[5860]: Accepted publickey for core from 10.0.0.1 port 51906 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:56:59.979258 sshd[5860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:59.984615 systemd-logind[1443]: New session 15 of user core. Jul 6 23:56:59.993969 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 6 23:57:00.113927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1647496460.mount: Deactivated successfully. Jul 6 23:57:00.274197 sshd[5860]: pam_unix(sshd:session): session closed for user core Jul 6 23:57:00.282943 systemd[1]: sshd@14-10.0.0.101:22-10.0.0.1:51906.service: Deactivated successfully. Jul 6 23:57:00.285465 systemd[1]: session-15.scope: Deactivated successfully. Jul 6 23:57:00.286286 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit. Jul 6 23:57:00.287578 systemd-logind[1443]: Removed session 15. 
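The recurring warning "Failed to get podSandbox status for container event … not found. Sending the event with nil podSandboxStatus." (seen here for ec8a6645… and earlier for a2637fb8… and b140c358…) is benign: by the time containerd emits the CRI event for the removal, the sandbox record has already been deleted, so the status lookup fails and the event is sent with a nil status rather than dropped. A rough sketch of that ordering, with hypothetical function names standing in for containerd internals:

```go
package main

import (
	"errors"
	"fmt"
)

type sandboxStatus struct{ ID string }

var errSandboxNotFound = errors.New("an error occurred when try to find sandbox: not found")

// Stand-in for containerd's sandbox store lookup; after RemovePodSandbox the
// record is gone, so this returns not-found.
func getPodSandboxStatus(id string) (*sandboxStatus, error) {
	return nil, errSandboxNotFound
}

func sendEvent(id string, s *sandboxStatus) {} // stub; tolerates a nil status

func emitContainerEvent(sandboxID string) {
	status, err := getPodSandboxStatus(sandboxID)
	if err != nil {
		fmt.Printf("warning: failed to get podSandbox status for %q: %v; sending event with nil podSandboxStatus\n",
			sandboxID, err)
	}
	sendEvent(sandboxID, status) // status may be nil here, by design
}

func main() { emitContainerEvent("ec8a6645…") }
```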
Jul 6 23:57:01.277584 containerd[1456]: time="2025-07-06T23:57:01.276836124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:01.336044 containerd[1456]: time="2025-07-06T23:57:01.335942527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 6 23:57:01.358825 containerd[1456]: time="2025-07-06T23:57:01.358764554Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:01.404840 containerd[1456]: time="2025-07-06T23:57:01.404776317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:01.405564 containerd[1456]: time="2025-07-06T23:57:01.405491467Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.659293572s" Jul 6 23:57:01.405753 containerd[1456]: time="2025-07-06T23:57:01.405672762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 6 23:57:01.407517 containerd[1456]: time="2025-07-06T23:57:01.407481403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 6 23:57:01.409175 containerd[1456]: time="2025-07-06T23:57:01.409109329Z" level=info msg="CreateContainer within sandbox \"e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 6 23:57:01.768195 containerd[1456]: time="2025-07-06T23:57:01.768144357Z" level=info msg="CreateContainer within sandbox \"e98b1633240804652b377b0440b28bc96e54bac5db4edc1dfc5ab38b4fe14a6d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"91b6cf41f7606861ea59aa50cd0624bb409bb9126db1fda565ed09cde2505161\"" Jul 6 23:57:01.769150 containerd[1456]: time="2025-07-06T23:57:01.769081810Z" level=info msg="StartContainer for \"91b6cf41f7606861ea59aa50cd0624bb409bb9126db1fda565ed09cde2505161\"" Jul 6 23:57:01.841959 systemd[1]: Started cri-containerd-91b6cf41f7606861ea59aa50cd0624bb409bb9126db1fda565ed09cde2505161.scope - libcontainer container 91b6cf41f7606861ea59aa50cd0624bb409bb9126db1fda565ed09cde2505161. 
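A quick consistency check on the goldmane pull above: the "stop pulling" entry reports 66,352,308 bytes read and the "Pulled image" entry reports a wall time of 4.659293572s, so the effective transfer rate was roughly 14 MB/s. This is only approximate, since the wall time covers the whole PullImage operation, not just the fetch:

```go
package main

import "fmt"

// Back-of-envelope throughput for the goldmane:v3.30.2 pull logged above.
func main() {
	const bytesRead = 66352308.0 // "active requests=0, bytes read=66352308"
	const seconds = 4.659293572  // "... in 4.659293572s"
	bps := bytesRead / seconds
	fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", bps/1e6, bps/(1024*1024))
	// ~14.2 MB/s (~13.6 MiB/s): consistent with fetching from a remote
	// registry rather than hitting a warm local content store.
}
```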
Jul 6 23:57:01.890965 containerd[1456]: time="2025-07-06T23:57:01.890915266Z" level=info msg="StartContainer for \"91b6cf41f7606861ea59aa50cd0624bb409bb9126db1fda565ed09cde2505161\" returns successfully" Jul 6 23:57:03.260947 containerd[1456]: time="2025-07-06T23:57:03.260895761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:03.261761 containerd[1456]: time="2025-07-06T23:57:03.261727463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 6 23:57:03.263174 containerd[1456]: time="2025-07-06T23:57:03.263101308Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:03.265516 containerd[1456]: time="2025-07-06T23:57:03.265433855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:03.266606 containerd[1456]: time="2025-07-06T23:57:03.266017195Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.85850825s" Jul 6 23:57:03.266606 containerd[1456]: time="2025-07-06T23:57:03.266057501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 6 23:57:03.268301 containerd[1456]: time="2025-07-06T23:57:03.268238512Z" level=info msg="CreateContainer within sandbox \"a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 6 23:57:03.287234 containerd[1456]: time="2025-07-06T23:57:03.287182093Z" level=info msg="CreateContainer within sandbox \"a415452966ce454219ccaeb47de0d984d9168211d360c8babfc5664140fb3e1f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"78d76e71b9d72f584259dfbf95825063bcc3d9771d0f062840f7e63f428d80ea\"" Jul 6 23:57:03.287987 containerd[1456]: time="2025-07-06T23:57:03.287955524Z" level=info msg="StartContainer for \"78d76e71b9d72f584259dfbf95825063bcc3d9771d0f062840f7e63f428d80ea\"" Jul 6 23:57:03.332059 systemd[1]: Started cri-containerd-78d76e71b9d72f584259dfbf95825063bcc3d9771d0f062840f7e63f428d80ea.scope - libcontainer container 78d76e71b9d72f584259dfbf95825063bcc3d9771d0f062840f7e63f428d80ea. 
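The CreateContainer/StartContainer pairs above, together with systemd's "Started cri-containerd-<id>.scope" lines, are the CRI flow: kubelet asks containerd to create a container inside an existing sandbox, then start it, and with the systemd cgroup driver each task lands in its own cri-containerd scope. The CRI server drives this internally; a minimal standalone sketch of the equivalent sequence with containerd's public Go client (image ref and IDs here are illustrative) looks like:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed resources live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/goldmane:v3.30.2", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "goldmane-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("goldmane-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// Creating and starting the task is the step that produces the
	// "Started cri-containerd-<id>.scope" unit under the systemd cgroup driver.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("task started; StartContainer would now return successfully")
}
```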
Jul 6 23:57:03.367364 containerd[1456]: time="2025-07-06T23:57:03.367312185Z" level=info msg="StartContainer for \"78d76e71b9d72f584259dfbf95825063bcc3d9771d0f062840f7e63f428d80ea\" returns successfully" Jul 6 23:57:03.458243 kubelet[2500]: I0706 23:57:03.457678 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-zk8l4" podStartSLOduration=35.761584857 podStartE2EDuration="50.457653385s" podCreationTimestamp="2025-07-06 23:56:13 +0000 UTC" firstStartedPulling="2025-07-06 23:56:46.710504586 +0000 UTC m=+49.984538634" lastFinishedPulling="2025-07-06 23:57:01.406573114 +0000 UTC m=+64.680607162" observedRunningTime="2025-07-06 23:57:02.449426814 +0000 UTC m=+65.723460862" watchObservedRunningTime="2025-07-06 23:57:03.457653385 +0000 UTC m=+66.731687433" Jul 6 23:57:03.978696 kubelet[2500]: I0706 23:57:03.978652 2500 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 6 23:57:03.978881 kubelet[2500]: I0706 23:57:03.978729 2500 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 6 23:57:05.286412 systemd[1]: Started sshd@15-10.0.0.101:22-10.0.0.1:51922.service - OpenSSH per-connection server daemon (10.0.0.1:51922). Jul 6 23:57:05.343594 sshd[6022]: Accepted publickey for core from 10.0.0.1 port 51922 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:57:05.345663 sshd[6022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:57:05.349911 systemd-logind[1443]: New session 16 of user core. Jul 6 23:57:05.359911 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 6 23:57:05.552634 sshd[6022]: pam_unix(sshd:session): session closed for user core Jul 6 23:57:05.556827 systemd[1]: sshd@15-10.0.0.101:22-10.0.0.1:51922.service: Deactivated successfully. Jul 6 23:57:05.558916 systemd[1]: session-16.scope: Deactivated successfully. Jul 6 23:57:05.559536 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit. Jul 6 23:57:05.560523 systemd-logind[1443]: Removed session 16. Jul 6 23:57:10.297224 kubelet[2500]: I0706 23:57:10.296978 2500 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8jzdf" podStartSLOduration=36.877689115 podStartE2EDuration="56.296961821s" podCreationTimestamp="2025-07-06 23:56:14 +0000 UTC" firstStartedPulling="2025-07-06 23:56:43.847500688 +0000 UTC m=+47.121534736" lastFinishedPulling="2025-07-06 23:57:03.266773384 +0000 UTC m=+66.540807442" observedRunningTime="2025-07-06 23:57:03.460755697 +0000 UTC m=+66.734789745" watchObservedRunningTime="2025-07-06 23:57:10.296961821 +0000 UTC m=+73.570995869" Jul 6 23:57:10.570060 systemd[1]: Started sshd@16-10.0.0.101:22-10.0.0.1:52398.service - OpenSSH per-connection server daemon (10.0.0.1:52398). Jul 6 23:57:10.612898 sshd[6062]: Accepted publickey for core from 10.0.0.1 port 52398 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:57:10.615016 sshd[6062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:57:10.619836 systemd-logind[1443]: New session 17 of user core. Jul 6 23:57:10.627920 systemd[1]: Started session-17.scope - Session 17 of User core. 
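The pod_startup_latency_tracker entries above are internally consistent: podStartSLOduration is podStartE2EDuration with the image-pull window subtracted, since the startup SLI excludes pull time. Checking the goldmane-768f4c5c69-zk8l4 numbers using the monotonic m=+ offsets from the log:

```go
package main

import "fmt"

// SLO duration = end-to-end duration minus the image-pull window,
// using the goldmane numbers logged above.
func main() {
	const (
		e2e          = 50.457653385 // podStartE2EDuration, seconds
		pullStart    = 49.984538634 // firstStartedPulling, m=+ offset
		pullFinished = 64.680607162 // lastFinishedPulling, m=+ offset
	)
	slo := e2e - (pullFinished - pullStart)
	fmt.Printf("podStartSLOduration = %.9fs\n", slo) // 35.761584857s, exactly as logged
}
```

The same identity holds for the calico-apiserver and csi-node-driver entries elsewhere in this log.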
Jul 6 23:57:10.835942 sshd[6062]: pam_unix(sshd:session): session closed for user core Jul 6 23:57:10.840286 systemd[1]: sshd@16-10.0.0.101:22-10.0.0.1:52398.service: Deactivated successfully. Jul 6 23:57:10.842695 systemd[1]: session-17.scope: Deactivated successfully. Jul 6 23:57:10.843445 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit. Jul 6 23:57:10.844511 systemd-logind[1443]: Removed session 17. Jul 6 23:57:11.829365 kubelet[2500]: E0706 23:57:11.829317 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:57:15.852569 systemd[1]: Started sshd@17-10.0.0.101:22-10.0.0.1:52408.service - OpenSSH per-connection server daemon (10.0.0.1:52408). Jul 6 23:57:15.916209 sshd[6099]: Accepted publickey for core from 10.0.0.1 port 52408 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:57:15.918341 sshd[6099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:57:15.923088 systemd-logind[1443]: New session 18 of user core. Jul 6 23:57:15.928928 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 6 23:57:16.141585 sshd[6099]: pam_unix(sshd:session): session closed for user core Jul 6 23:57:16.152148 systemd[1]: sshd@17-10.0.0.101:22-10.0.0.1:52408.service: Deactivated successfully. Jul 6 23:57:16.154426 systemd[1]: session-18.scope: Deactivated successfully. Jul 6 23:57:16.156659 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit. Jul 6 23:57:16.169089 systemd[1]: Started sshd@18-10.0.0.101:22-10.0.0.1:52410.service - OpenSSH per-connection server daemon (10.0.0.1:52410). Jul 6 23:57:16.169834 systemd-logind[1443]: Removed session 18. Jul 6 23:57:16.202777 sshd[6113]: Accepted publickey for core from 10.0.0.1 port 52410 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:57:16.205048 sshd[6113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:57:16.210349 systemd-logind[1443]: New session 19 of user core. Jul 6 23:57:16.220907 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 6 23:57:16.415484 systemd[1]: run-containerd-runc-k8s.io-06c9c0cac1f75528787b4dd9a27434a52b62de7be575aa524f2af4abd09fe4c7-runc.GuNvi7.mount: Deactivated successfully. Jul 6 23:57:16.528639 sshd[6113]: pam_unix(sshd:session): session closed for user core Jul 6 23:57:16.537338 systemd[1]: sshd@18-10.0.0.101:22-10.0.0.1:52410.service: Deactivated successfully. Jul 6 23:57:16.539310 systemd[1]: session-19.scope: Deactivated successfully. Jul 6 23:57:16.540820 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit. Jul 6 23:57:16.548995 systemd[1]: Started sshd@19-10.0.0.101:22-10.0.0.1:52416.service - OpenSSH per-connection server daemon (10.0.0.1:52416). Jul 6 23:57:16.549955 systemd-logind[1443]: Removed session 19. Jul 6 23:57:16.583153 sshd[6144]: Accepted publickey for core from 10.0.0.1 port 52416 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:57:16.584837 sshd[6144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:57:16.589244 systemd-logind[1443]: New session 20 of user core. Jul 6 23:57:16.596854 systemd[1]: Started session-20.scope - Session 20 of User core. 
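The repeating kubelet error at dns.go:153 ("Nameserver limits were exceeded…") reflects a hard Linux limit: glibc's resolver only honors the first three nameserver lines in resolv.conf (MAXNS=3), so kubelet trims the upstream list when composing a pod's resolv.conf and logs which servers survived, here 1.1.1.1, 1.0.0.1, and 8.8.8.8. A sketch of that truncation; the fourth upstream server in the example is an assumption, since the log only shows the three survivors:

```go
package main

import (
	"fmt"
	"strings"
)

// glibc honors at most three nameservers, so anything past the third entry
// in a pod's resolv.conf would be silently ignored by the resolver anyway.
const maxNameservers = 3

func trimNameservers(upstream []string) []string {
	if len(upstream) <= maxNameservers {
		return upstream
	}
	kept := upstream[:maxNameservers]
	fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
		strings.Join(kept, " "))
	return kept
}

func main() {
	// The 9.9.9.9 entry is hypothetical, standing in for whatever extra
	// upstream server pushed this node over the limit.
	trimNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
}
```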
Jul 6 23:57:17.337640 sshd[6144]: pam_unix(sshd:session): session closed for user core Jul 6 23:57:17.349705 systemd[1]: sshd@19-10.0.0.101:22-10.0.0.1:52416.service: Deactivated successfully. Jul 6 23:57:17.353250 systemd[1]: session-20.scope: Deactivated successfully. Jul 6 23:57:17.354816 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit. Jul 6 23:57:17.363570 systemd[1]: Started sshd@20-10.0.0.101:22-10.0.0.1:52422.service - OpenSSH per-connection server daemon (10.0.0.1:52422). Jul 6 23:57:17.367405 systemd-logind[1443]: Removed session 20. Jul 6 23:57:17.414254 sshd[6166]: Accepted publickey for core from 10.0.0.1 port 52422 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:57:17.416175 sshd[6166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:57:17.420537 systemd-logind[1443]: New session 21 of user core. Jul 6 23:57:17.431909 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 6 23:57:17.799123 sshd[6166]: pam_unix(sshd:session): session closed for user core Jul 6 23:57:17.809747 systemd[1]: sshd@20-10.0.0.101:22-10.0.0.1:52422.service: Deactivated successfully. Jul 6 23:57:17.812227 systemd[1]: session-21.scope: Deactivated successfully. Jul 6 23:57:17.814367 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit. Jul 6 23:57:17.823314 systemd[1]: Started sshd@21-10.0.0.101:22-10.0.0.1:52428.service - OpenSSH per-connection server daemon (10.0.0.1:52428). Jul 6 23:57:17.824424 systemd-logind[1443]: Removed session 21. Jul 6 23:57:17.861407 sshd[6179]: Accepted publickey for core from 10.0.0.1 port 52428 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:57:17.863439 sshd[6179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:57:17.868250 systemd-logind[1443]: New session 22 of user core. Jul 6 23:57:17.879849 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 6 23:57:18.013074 sshd[6179]: pam_unix(sshd:session): session closed for user core Jul 6 23:57:18.018395 systemd[1]: sshd@21-10.0.0.101:22-10.0.0.1:52428.service: Deactivated successfully. Jul 6 23:57:18.021580 systemd[1]: session-22.scope: Deactivated successfully. Jul 6 23:57:18.023052 systemd-logind[1443]: Session 22 logged out. Waiting for processes to exit. Jul 6 23:57:18.024279 systemd-logind[1443]: Removed session 22. Jul 6 23:57:21.829521 kubelet[2500]: E0706 23:57:21.829464 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:57:23.025738 systemd[1]: Started sshd@22-10.0.0.101:22-10.0.0.1:48556.service - OpenSSH per-connection server daemon (10.0.0.1:48556). Jul 6 23:57:23.076354 sshd[6204]: Accepted publickey for core from 10.0.0.1 port 48556 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:57:23.078110 sshd[6204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:57:23.082642 systemd-logind[1443]: New session 23 of user core. Jul 6 23:57:23.091874 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 6 23:57:23.233854 sshd[6204]: pam_unix(sshd:session): session closed for user core Jul 6 23:57:23.237787 systemd[1]: sshd@22-10.0.0.101:22-10.0.0.1:48556.service: Deactivated successfully. Jul 6 23:57:23.240037 systemd[1]: session-23.scope: Deactivated successfully. 
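Each accepted connection above runs as its own transient per-connection unit, and the unit name encodes a connection counter plus the local and remote endpoints, e.g. "sshd@21-10.0.0.101:22-10.0.0.1:52428.service"; the Started/Deactivated pairs bracket each session's lifetime. A small sketch that recovers those fields from a unit name (the naming scheme here is inferred from the log lines, not taken from sshd documentation):

```go
package main

import (
	"fmt"
	"strings"
)

// parseUnit splits a per-connection sshd unit name into its connection
// counter, local endpoint, and peer endpoint.
func parseUnit(unit string) (seq, local, peer string) {
	inst := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
	parts := strings.SplitN(inst, "-", 3) // counter, local addr:port, peer addr:port
	return parts[0], parts[1], parts[2]
}

func main() {
	seq, local, peer := parseUnit("sshd@21-10.0.0.101:22-10.0.0.1:52428.service")
	fmt.Printf("connection #%s: %s <- %s\n", seq, local, peer)
}
```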
Jul 6 23:57:23.240781 systemd-logind[1443]: Session 23 logged out. Waiting for processes to exit. Jul 6 23:57:23.241634 systemd-logind[1443]: Removed session 23. Jul 6 23:57:23.829384 kubelet[2500]: E0706 23:57:23.829339 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:57:28.247530 systemd[1]: Started sshd@23-10.0.0.101:22-10.0.0.1:48572.service - OpenSSH per-connection server daemon (10.0.0.1:48572). Jul 6 23:57:28.290561 sshd[6222]: Accepted publickey for core from 10.0.0.1 port 48572 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:57:28.292351 sshd[6222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:57:28.296551 systemd-logind[1443]: New session 24 of user core. Jul 6 23:57:28.303857 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 6 23:57:28.539480 sshd[6222]: pam_unix(sshd:session): session closed for user core Jul 6 23:57:28.544189 systemd[1]: sshd@23-10.0.0.101:22-10.0.0.1:48572.service: Deactivated successfully. Jul 6 23:57:28.546471 systemd[1]: session-24.scope: Deactivated successfully. Jul 6 23:57:28.547161 systemd-logind[1443]: Session 24 logged out. Waiting for processes to exit. Jul 6 23:57:28.548237 systemd-logind[1443]: Removed session 24. Jul 6 23:57:29.829427 kubelet[2500]: E0706 23:57:29.829374 2500 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:57:33.561368 systemd[1]: Started sshd@24-10.0.0.101:22-10.0.0.1:35422.service - OpenSSH per-connection server daemon (10.0.0.1:35422). Jul 6 23:57:33.616214 sshd[6257]: Accepted publickey for core from 10.0.0.1 port 35422 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:57:33.618061 sshd[6257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:57:33.625340 systemd-logind[1443]: New session 25 of user core. Jul 6 23:57:33.636946 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 6 23:57:33.756769 sshd[6257]: pam_unix(sshd:session): session closed for user core Jul 6 23:57:33.761445 systemd[1]: sshd@24-10.0.0.101:22-10.0.0.1:35422.service: Deactivated successfully. Jul 6 23:57:33.763605 systemd[1]: session-25.scope: Deactivated successfully. Jul 6 23:57:33.764364 systemd-logind[1443]: Session 25 logged out. Waiting for processes to exit. Jul 6 23:57:33.765398 systemd-logind[1443]: Removed session 25.