May 17 00:19:45.862968 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 22:44:56 -00 2025
May 17 00:19:45.862988 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:19:45.862999 kernel: BIOS-provided physical RAM map:
May 17 00:19:45.863006 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 17 00:19:45.863012 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 17 00:19:45.863018 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 17 00:19:45.863025 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 17 00:19:45.863031 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 17 00:19:45.863037 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 17 00:19:45.863045 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 17 00:19:45.863052 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 17 00:19:45.863064 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 17 00:19:45.863070 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 17 00:19:45.863112 kernel: NX (Execute Disable) protection: active
May 17 00:19:45.863120 kernel: APIC: Static calls initialized
May 17 00:19:45.863130 kernel: SMBIOS 2.8 present.
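The e820 entries above have a fixed textual shape, so the usable RAM they advertise can be tallied straight from a captured log. A minimal Python sketch (the regex and function name are illustrative, not part of any Flatcar tooling):

```python
import re

# Matches lines such as:
# BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(log_lines):
    """Sum the sizes of all e820 ranges marked 'usable' (ranges are inclusive)."""
    total = 0
    for line in log_lines:
        m = E820_RE.search(line)
        if m and m.group(3) == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1
    return total
```

Applied to the two usable ranges above this gives about 2.45 GiB, consistent with the 2571752K total the kernel reports further down.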
May 17 00:19:45.863137 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 17 00:19:45.863143 kernel: Hypervisor detected: KVM
May 17 00:19:45.863150 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 17 00:19:45.863157 kernel: kvm-clock: using sched offset of 2222188360 cycles
May 17 00:19:45.863164 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 17 00:19:45.863171 kernel: tsc: Detected 2794.748 MHz processor
May 17 00:19:45.863178 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:19:45.863185 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:19:45.863192 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 17 00:19:45.863202 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 17 00:19:45.863209 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:19:45.863215 kernel: Using GB pages for direct mapping
May 17 00:19:45.863222 kernel: ACPI: Early table checksum verification disabled
May 17 00:19:45.863229 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 17 00:19:45.863236 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:19:45.863243 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:19:45.863250 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:19:45.863259 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 17 00:19:45.863266 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:19:45.863273 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:19:45.863280 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:19:45.863287 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:19:45.863294 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 17 00:19:45.863301 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 17 00:19:45.863311 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 17 00:19:45.863320 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 17 00:19:45.863327 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 17 00:19:45.863335 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 17 00:19:45.863342 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 17 00:19:45.863348 kernel: No NUMA configuration found
May 17 00:19:45.863355 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 17 00:19:45.863362 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
May 17 00:19:45.863372 kernel: Zone ranges:
May 17 00:19:45.863379 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:19:45.863386 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 17 00:19:45.863393 kernel: Normal empty
May 17 00:19:45.863400 kernel: Movable zone start for each node
May 17 00:19:45.863407 kernel: Early memory node ranges
May 17 00:19:45.863414 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 17 00:19:45.863421 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 17 00:19:45.863428 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 17 00:19:45.863437 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:19:45.863444 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 17 00:19:45.863451 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 17 00:19:45.863458 kernel: ACPI: PM-Timer IO Port: 0x608
May 17 00:19:45.863465 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 17 00:19:45.863472 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 17 00:19:45.863479 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 17 00:19:45.863486 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 17 00:19:45.863493 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:19:45.863503 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 17 00:19:45.863510 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 17 00:19:45.863517 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:19:45.863524 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 17 00:19:45.863531 kernel: TSC deadline timer available
May 17 00:19:45.863538 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 17 00:19:45.863545 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 17 00:19:45.863552 kernel: kvm-guest: KVM setup pv remote TLB flush
May 17 00:19:45.863559 kernel: kvm-guest: setup PV sched yield
May 17 00:19:45.863566 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 17 00:19:45.863576 kernel: Booting paravirtualized kernel on KVM
May 17 00:19:45.863583 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:19:45.863590 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 17 00:19:45.863597 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 17 00:19:45.863604 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 17 00:19:45.863611 kernel: pcpu-alloc: [0] 0 1 2 3
May 17 00:19:45.863618 kernel: kvm-guest: PV spinlocks enabled
May 17 00:19:45.863625 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 17 00:19:45.863633 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:19:45.863643 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:19:45.863650 kernel: random: crng init done
May 17 00:19:45.863658 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 17 00:19:45.863665 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:19:45.863672 kernel: Fallback order for Node 0: 0
May 17 00:19:45.863679 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
May 17 00:19:45.863686 kernel: Policy zone: DMA32
May 17 00:19:45.863693 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:19:45.863703 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 136900K reserved, 0K cma-reserved)
May 17 00:19:45.863710 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 17 00:19:45.863717 kernel: ftrace: allocating 37948 entries in 149 pages
May 17 00:19:45.863724 kernel: ftrace: allocated 149 pages with 4 groups
May 17 00:19:45.863731 kernel: Dynamic Preempt: voluntary
May 17 00:19:45.863738 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 00:19:45.863746 kernel: rcu: RCU event tracing is enabled.
May 17 00:19:45.863753 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 17 00:19:45.863761 kernel: Trampoline variant of Tasks RCU enabled.
May 17 00:19:45.863770 kernel: Rude variant of Tasks RCU enabled.
May 17 00:19:45.863778 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:19:45.863785 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:19:45.863792 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 17 00:19:45.863799 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 17 00:19:45.863806 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 00:19:45.863813 kernel: Console: colour VGA+ 80x25
May 17 00:19:45.863820 kernel: printk: console [ttyS0] enabled
May 17 00:19:45.863827 kernel: ACPI: Core revision 20230628
May 17 00:19:45.863836 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 17 00:19:45.863844 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:19:45.863851 kernel: x2apic enabled
May 17 00:19:45.863858 kernel: APIC: Switched APIC routing to: physical x2apic
May 17 00:19:45.863865 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 17 00:19:45.863872 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 17 00:19:45.863879 kernel: kvm-guest: setup PV IPIs
May 17 00:19:45.863896 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 17 00:19:45.863903 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 17 00:19:45.863910 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 17 00:19:45.863918 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 17 00:19:45.863925 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 17 00:19:45.863935 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 17 00:19:45.863943 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:19:45.863950 kernel: Spectre V2 : Mitigation: Retpolines
May 17 00:19:45.863958 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 17 00:19:45.863965 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 17 00:19:45.863975 kernel: RETBleed: Mitigation: untrained return thunk
May 17 00:19:45.863982 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 17 00:19:45.863990 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 17 00:19:45.863997 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
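The mitigation lines above (Spectre V1/V2, RETBleed, SRSO) are also exported at runtime under /sys/devices/system/cpu/vulnerabilities, one file per issue, holding the same status strings the kernel logs at boot. A small sketch to dump them on a live system:

```python
from pathlib import Path

# Each file (spectre_v2, retbleed, spec_store_bypass, ...) contains the
# same status string the kernel prints during boot.
vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name}: {entry.read_text().strip()}")
```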
May 17 00:19:45.864005 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 17 00:19:45.864013 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 17 00:19:45.864021 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:19:45.864028 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:19:45.864038 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:19:45.864045 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:19:45.864053 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 17 00:19:45.864067 kernel: Freeing SMP alternatives memory: 32K
May 17 00:19:45.864091 kernel: pid_max: default: 32768 minimum: 301
May 17 00:19:45.864099 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 17 00:19:45.864107 kernel: landlock: Up and running.
May 17 00:19:45.864114 kernel: SELinux: Initializing.
May 17 00:19:45.864122 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:19:45.864132 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:19:45.864139 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 17 00:19:45.864147 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 17 00:19:45.864154 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 17 00:19:45.864162 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 17 00:19:45.864169 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 17 00:19:45.864176 kernel: ... version: 0
May 17 00:19:45.864184 kernel: ... bit width: 48
May 17 00:19:45.864191 kernel: ... generic registers: 6
May 17 00:19:45.864201 kernel: ... value mask: 0000ffffffffffff
May 17 00:19:45.864209 kernel: ... max period: 00007fffffffffff
May 17 00:19:45.864216 kernel: ... fixed-purpose events: 0
May 17 00:19:45.864223 kernel: ... event mask: 000000000000003f
May 17 00:19:45.864231 kernel: signal: max sigframe size: 1776
May 17 00:19:45.864238 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:19:45.864246 kernel: rcu: Max phase no-delay instances is 400.
May 17 00:19:45.864253 kernel: smp: Bringing up secondary CPUs ...
May 17 00:19:45.864260 kernel: smpboot: x86: Booting SMP configuration:
May 17 00:19:45.864270 kernel: .... node #0, CPUs: #1 #2 #3
May 17 00:19:45.864277 kernel: smp: Brought up 1 node, 4 CPUs
May 17 00:19:45.864284 kernel: smpboot: Max logical packages: 1
May 17 00:19:45.864292 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 17 00:19:45.864299 kernel: devtmpfs: initialized
May 17 00:19:45.864306 kernel: x86/mm: Memory block size: 128MB
May 17 00:19:45.864314 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:19:45.864321 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 17 00:19:45.864328 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:19:45.864338 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:19:45.864346 kernel: audit: initializing netlink subsys (disabled)
May 17 00:19:45.864353 kernel: audit: type=2000 audit(1747441185.857:1): state=initialized audit_enabled=0 res=1
May 17 00:19:45.864360 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:19:45.864368 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:19:45.864375 kernel: cpuidle: using governor menu
May 17 00:19:45.864382 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:19:45.864390 kernel: dca service started, version 1.12.1
May 17 00:19:45.864397 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 17 00:19:45.864407 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 17 00:19:45.864415 kernel: PCI: Using configuration type 1 for base access
May 17 00:19:45.864422 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:19:45.864430 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:19:45.864437 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 17 00:19:45.864445 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:19:45.864452 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 17 00:19:45.864459 kernel: ACPI: Added _OSI(Module Device)
May 17 00:19:45.864467 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:19:45.864477 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:19:45.864484 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:19:45.864492 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:19:45.864499 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 17 00:19:45.864506 kernel: ACPI: Interpreter enabled
May 17 00:19:45.864514 kernel: ACPI: PM: (supports S0 S3 S5)
May 17 00:19:45.864521 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:19:45.864529 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:19:45.864536 kernel: PCI: Using E820 reservations for host bridge windows
May 17 00:19:45.864546 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 17 00:19:45.864553 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 17 00:19:45.864762 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:19:45.864937 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 17 00:19:45.865178 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 17 00:19:45.865192 kernel: PCI host bridge to bus 0000:00
May 17 00:19:45.865321 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
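In the PCI enumeration that follows, the bracketed pairs such as [1af4:1001] are vendor:device IDs (0x1af4 is the virtio vendor ID; 1001 is virtio-blk, the disk that shows up as /dev/vda later in the log). The same IDs can be read back from sysfs on the booted system; a minimal sketch:

```python
from pathlib import Path

# Reproduce the vendor:device pairs the kernel logged during enumeration.
for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    vendor = (dev / "vendor").read_text().strip()    # e.g. "0x1af4"
    device = (dev / "device").read_text().strip()    # e.g. "0x1001"
    pci_class = (dev / "class").read_text().strip()  # e.g. "0x010000"
    print(dev.name, f"{vendor[2:]}:{device[2:]}", pci_class)
```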
May 17 00:19:45.865436 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 17 00:19:45.865547 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 17 00:19:45.865656 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 17 00:19:45.865767 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 17 00:19:45.865875 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 17 00:19:45.865985 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 00:19:45.866155 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 17 00:19:45.866294 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 17 00:19:45.866416 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 17 00:19:45.866536 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 17 00:19:45.866657 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 17 00:19:45.866776 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 17 00:19:45.866906 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 17 00:19:45.867032 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
May 17 00:19:45.867175 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 17 00:19:45.867355 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 17 00:19:45.867499 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 17 00:19:45.867837 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
May 17 00:19:45.867978 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 17 00:19:45.868124 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 17 00:19:45.868265 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 17 00:19:45.868387 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
May 17 00:19:45.868507 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
May 17 00:19:45.868627 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
May 17 00:19:45.868749 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 17 00:19:45.868876 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 17 00:19:45.869036 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 17 00:19:45.869267 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 17 00:19:45.869390 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
May 17 00:19:45.869509 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
May 17 00:19:45.869638 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 17 00:19:45.869759 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 17 00:19:45.869769 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 17 00:19:45.869777 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 17 00:19:45.869789 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 17 00:19:45.869796 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 17 00:19:45.869804 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 17 00:19:45.869811 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 17 00:19:45.869819 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 17 00:19:45.869826 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 17 00:19:45.869834 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 17 00:19:45.869842 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 17 00:19:45.869849 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 17 00:19:45.869859 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 17 00:19:45.869867 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 17 00:19:45.869874 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 17 00:19:45.869882 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 17 00:19:45.869889 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 17 00:19:45.869897 kernel: iommu: Default domain type: Translated
May 17 00:19:45.869904 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:19:45.869912 kernel: PCI: Using ACPI for IRQ routing
May 17 00:19:45.869920 kernel: PCI: pci_cache_line_size set to 64 bytes
May 17 00:19:45.869945 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 17 00:19:45.869959 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 17 00:19:45.870139 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 17 00:19:45.870261 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 17 00:19:45.870379 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 17 00:19:45.870390 kernel: vgaarb: loaded
May 17 00:19:45.870397 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 17 00:19:45.870405 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 17 00:19:45.870416 kernel: clocksource: Switched to clocksource kvm-clock
May 17 00:19:45.870424 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:19:45.870432 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:19:45.870439 kernel: pnp: PnP ACPI init
May 17 00:19:45.870567 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 17 00:19:45.870578 kernel: pnp: PnP ACPI: found 6 devices
May 17 00:19:45.870586 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:19:45.870593 kernel: NET: Registered PF_INET protocol family
May 17 00:19:45.870604 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:19:45.870612 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 17 00:19:45.870619 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:19:45.870627 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 00:19:45.870634 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 17 00:19:45.870642 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 17 00:19:45.870649 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:19:45.870657 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:19:45.870664 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:19:45.870674 kernel: NET: Registered PF_XDP protocol family
May 17 00:19:45.870786 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 17 00:19:45.870896 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 17 00:19:45.871010 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 17 00:19:45.871152 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 17 00:19:45.871264 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 17 00:19:45.871374 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 17 00:19:45.871384 kernel: PCI: CLS 0 bytes, default 64
May 17 00:19:45.871395 kernel: Initialise system trusted keyrings
May 17 00:19:45.871402 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 17 00:19:45.871410 kernel: Key type asymmetric registered
May 17 00:19:45.871417 kernel: Asymmetric key parser 'x509' registered
May 17 00:19:45.871425 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 17 00:19:45.871432 kernel: io scheduler mq-deadline registered
May 17 00:19:45.871440 kernel: io scheduler kyber registered
May 17 00:19:45.871447 kernel: io scheduler bfq registered
May 17 00:19:45.871455 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 00:19:45.871465 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 17 00:19:45.871473 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 17 00:19:45.871480 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 17 00:19:45.871488 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:19:45.871495 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 00:19:45.871503 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 17 00:19:45.871510 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 17 00:19:45.871518 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 17 00:19:45.871641 kernel: rtc_cmos 00:04: RTC can wake from S4
May 17 00:19:45.871655 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 17 00:19:45.871767 kernel: rtc_cmos 00:04: registered as rtc0
May 17 00:19:45.871882 kernel: rtc_cmos 00:04: setting system clock to 2025-05-17T00:19:45 UTC (1747441185)
May 17 00:19:45.871995 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 17 00:19:45.872005 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 17 00:19:45.872012 kernel: NET: Registered PF_INET6 protocol family
May 17 00:19:45.872020 kernel: Segment Routing with IPv6
May 17 00:19:45.872027 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:19:45.872038 kernel: NET: Registered PF_PACKET protocol family
May 17 00:19:45.872046 kernel: Key type dns_resolver registered
May 17 00:19:45.872053 kernel: IPI shorthand broadcast: enabled
May 17 00:19:45.872069 kernel: sched_clock: Marking stable (579003486, 104260978)->(695649560, -12385096)
May 17 00:19:45.872094 kernel: registered taskstats version 1
May 17 00:19:45.872102 kernel: Loading compiled-in X.509 certificates
May 17 00:19:45.872110 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9'
May 17 00:19:45.872118 kernel: Key type .fscrypt registered
May 17 00:19:45.872125 kernel: Key type fscrypt-provisioning registered
May 17 00:19:45.872136 kernel: ima: No TPM chip found, activating TPM-bypass!
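The rtc_cmos line above prints the same instant twice, as an ISO date and as a Unix timestamp; the two are easy to cross-check:

```python
from datetime import datetime, timezone

# rtc_cmos logged: setting system clock to 2025-05-17T00:19:45 UTC (1747441185)
ts = 1747441185
print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat())
# -> 2025-05-17T00:19:45+00:00, matching the RTC line
```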
May 17 00:19:45.872143 kernel: ima: Allocated hash algorithm: sha1
May 17 00:19:45.872152 kernel: ima: No architecture policies found
May 17 00:19:45.872162 kernel: clk: Disabling unused clocks
May 17 00:19:45.872173 kernel: Freeing unused kernel image (initmem) memory: 42872K
May 17 00:19:45.872183 kernel: Write protecting the kernel read-only data: 36864k
May 17 00:19:45.872193 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 17 00:19:45.872202 kernel: Run /init as init process
May 17 00:19:45.872210 kernel: with arguments:
May 17 00:19:45.872220 kernel: /init
May 17 00:19:45.872227 kernel: with environment:
May 17 00:19:45.872234 kernel: HOME=/
May 17 00:19:45.872242 kernel: TERM=linux
May 17 00:19:45.872249 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:19:45.872259 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:19:45.872268 systemd[1]: Detected virtualization kvm.
May 17 00:19:45.872276 systemd[1]: Detected architecture x86-64.
May 17 00:19:45.872286 systemd[1]: Running in initrd.
May 17 00:19:45.872294 systemd[1]: No hostname configured, using default hostname.
May 17 00:19:45.872302 systemd[1]: Hostname set to .
May 17 00:19:45.872310 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:19:45.872318 systemd[1]: Queued start job for default target initrd.target.
May 17 00:19:45.872326 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:19:45.872334 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:19:45.872342 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 17 00:19:45.872353 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:19:45.872373 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 17 00:19:45.872384 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 17 00:19:45.872394 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 17 00:19:45.872404 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 17 00:19:45.872413 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:19:45.872421 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:19:45.872429 systemd[1]: Reached target paths.target - Path Units.
May 17 00:19:45.872437 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:19:45.872446 systemd[1]: Reached target swap.target - Swaps.
May 17 00:19:45.872454 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:19:45.872462 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:19:45.872470 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:19:45.872481 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 17 00:19:45.872489 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 17 00:19:45.872497 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:19:45.872506 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:19:45.872516 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:19:45.872524 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:19:45.872532 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 17 00:19:45.872540 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:19:45.872551 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 17 00:19:45.872559 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:19:45.872567 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:19:45.872575 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:19:45.872584 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:19:45.872592 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 17 00:19:45.872600 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:19:45.872608 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:19:45.872637 systemd-journald[193]: Collecting audit messages is disabled.
May 17 00:19:45.872657 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 17 00:19:45.872666 systemd-journald[193]: Journal started
May 17 00:19:45.872686 systemd-journald[193]: Runtime Journal (/run/log/journal/16fd39be5a694caa8911e198b1b87346) is 6.0M, max 48.4M, 42.3M free.
May 17 00:19:45.867866 systemd-modules-load[194]: Inserted module 'overlay'
May 17 00:19:45.906901 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:19:45.906923 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:19:45.906935 kernel: Bridge firewalling registered
May 17 00:19:45.894829 systemd-modules-load[194]: Inserted module 'br_netfilter'
May 17 00:19:45.905599 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:19:45.907523 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:19:45.910195 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 00:19:45.921249 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:19:45.922337 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:19:45.926260 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:19:45.927535 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:19:45.940317 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:19:45.940816 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:19:45.941890 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:19:45.954300 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:19:45.954861 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:19:45.958731 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 17 00:19:45.974601 dracut-cmdline[230]: dracut-dracut-053
May 17 00:19:45.977899 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:19:45.987254 systemd-resolved[226]: Positive Trust Anchors:
May 17 00:19:45.987267 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:19:45.987296 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:19:45.989704 systemd-resolved[226]: Defaulting to hostname 'linux'.
May 17 00:19:45.990704 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:19:45.998954 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:19:46.066125 kernel: SCSI subsystem initialized
May 17 00:19:46.076118 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:19:46.087110 kernel: iscsi: registered transport (tcp)
May 17 00:19:46.108122 kernel: iscsi: registered transport (qla4xxx)
May 17 00:19:46.108193 kernel: QLogic iSCSI HBA Driver
May 17 00:19:46.160891 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 17 00:19:46.170217 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 17 00:19:46.201009 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:19:46.201121 kernel: device-mapper: uevent: version 1.0.3
May 17 00:19:46.201138 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 17 00:19:46.244134 kernel: raid6: avx2x4 gen() 30492 MB/s
May 17 00:19:46.261122 kernel: raid6: avx2x2 gen() 30931 MB/s
May 17 00:19:46.278207 kernel: raid6: avx2x1 gen() 25918 MB/s
May 17 00:19:46.278299 kernel: raid6: using algorithm avx2x2 gen() 30931 MB/s
May 17 00:19:46.296231 kernel: raid6: .... xor() 19846 MB/s, rmw enabled
May 17 00:19:46.296321 kernel: raid6: using avx2x2 recovery algorithm
May 17 00:19:46.318109 kernel: xor: automatically using best checksumming function avx
May 17 00:19:46.482125 kernel: Btrfs loaded, zoned=no, fsverity=no
May 17 00:19:46.494962 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:19:46.508316 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:19:46.520830 systemd-udevd[412]: Using default interface naming scheme 'v255'.
May 17 00:19:46.525447 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:19:46.536239 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 17 00:19:46.550100 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
May 17 00:19:46.583335 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:19:46.595251 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:19:46.666419 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:19:46.672256 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 17 00:19:46.682380 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 17 00:19:46.686482 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:19:46.687863 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:19:46.690315 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:19:46.702299 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 17 00:19:46.716328 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 17 00:19:46.721206 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:19:46.727965 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 17 00:19:46.729108 kernel: libata version 3.00 loaded.
May 17 00:19:46.731094 kernel: cryptd: max_cpu_qlen set to 1000
May 17 00:19:46.731121 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:19:46.734897 kernel: GPT:9289727 != 19775487
May 17 00:19:46.734956 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:19:46.734980 kernel: GPT:9289727 != 19775487
May 17 00:19:46.734993 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:19:46.735007 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:19:46.741949 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:19:46.753987 kernel: AVX2 version of gcm_enc/dec engaged.
May 17 00:19:46.754011 kernel: AES CTR mode by8 optimization enabled
May 17 00:19:46.754022 kernel: ahci 0000:00:1f.2: version 3.0
May 17 00:19:46.754223 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 17 00:19:46.742134 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:19:46.757805 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 17 00:19:46.758064 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 17 00:19:46.743716 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:19:46.745821 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:19:46.761618 kernel: scsi host0: ahci
May 17 00:19:46.761793 kernel: scsi host1: ahci
May 17 00:19:46.761941 kernel: scsi host2: ahci
May 17 00:19:46.745937 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:19:46.750614 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:19:46.761830 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
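The GPT complaints above ("9289727 != 19775487") mean the primary header's alternate-LBA field still points at the backup header location of the original, smaller image, while the 19775488-sector disk ends at LBA 19775487; the disk-uuid service rewrites the headers a few lines later, which is why this only appears on first boot. Reading that field yourself is a short exercise (the function name and error handling here are illustrative):

```python
import struct

def gpt_alternate_lba(path, sector=512):
    """Return the AlternateLBA field of the primary GPT header (LBA 1),
    i.e. where that header expects the backup header to live; the kernel
    compared this value against the disk's last LBA."""
    with open(path, "rb") as f:
        f.seek(sector)              # primary GPT header sits at LBA 1
        hdr = f.read(92)
    if hdr[:8] != b"EFI PART":      # GPT signature per the UEFI spec
        raise ValueError("no GPT header found")
    # Header layout: signature(8) rev(4) size(4) crc(4) reserved(4),
    # then MyLBA u64 at offset 24 and AlternateLBA u64 at offset 32,
    # all little-endian.
    return struct.unpack_from("<Q", hdr, 32)[0]
```

On this image the call would return 9289727, explaining the kernel's complaint on a 19775488-sector disk.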
May 17 00:19:46.764114 kernel: scsi host3: ahci
May 17 00:19:46.765177 kernel: scsi host4: ahci
May 17 00:19:46.767770 kernel: scsi host5: ahci
May 17 00:19:46.767948 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
May 17 00:19:46.767960 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
May 17 00:19:46.768565 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
May 17 00:19:46.770384 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
May 17 00:19:46.770407 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
May 17 00:19:46.772734 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
May 17 00:19:46.780112 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (463)
May 17 00:19:46.786099 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (469)
May 17 00:19:46.789891 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 17 00:19:46.834791 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:19:46.844193 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 17 00:19:46.849820 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 17 00:19:46.854641 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 17 00:19:46.855044 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 17 00:19:46.876249 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 17 00:19:46.878103 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:19:46.901201 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:19:46.925823 disk-uuid[552]: Primary Header is updated.
May 17 00:19:46.925823 disk-uuid[552]: Secondary Entries is updated.
May 17 00:19:46.925823 disk-uuid[552]: Secondary Header is updated.
May 17 00:19:46.930120 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:19:46.936119 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:19:47.085108 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 17 00:19:47.085163 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 17 00:19:47.085182 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 17 00:19:47.086115 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 17 00:19:47.087104 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 17 00:19:47.088114 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 17 00:19:47.089340 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 17 00:19:47.089352 kernel: ata3.00: applying bridge limits
May 17 00:19:47.090102 kernel: ata3.00: configured for UDMA/100
May 17 00:19:47.091117 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 17 00:19:47.142654 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 17 00:19:47.142914 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 17 00:19:47.155095 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 17 00:19:47.938107 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:19:47.938234 disk-uuid[562]: The operation has completed successfully.
May 17 00:19:47.960693 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:19:47.960817 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 17 00:19:47.993258 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 17 00:19:47.996410 sh[589]: Success
May 17 00:19:48.009141 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 17 00:19:48.041491 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 17 00:19:48.050531 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 17 00:19:48.053703 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 17 00:19:48.067113 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc
May 17 00:19:48.067151 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 17 00:19:48.068955 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 17 00:19:48.068969 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 17 00:19:48.069719 kernel: BTRFS info (device dm-0): using free space tree
May 17 00:19:48.074201 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 17 00:19:48.075153 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 17 00:19:48.091202 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 17 00:19:48.093155 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 17 00:19:48.101497 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:19:48.101524 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:19:48.101535 kernel: BTRFS info (device vda6): using free space tree
May 17 00:19:48.104103 kernel: BTRFS info (device vda6): auto enabling async discard
May 17 00:19:48.113095 systemd[1]: mnt-oem.mount: Deactivated successfully.
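verity-setup above activates /dev/mapper/usr using the verity.usrhash=6b6028... root hash from the kernel command line: dm-verity hashes every 4 KiB block of the /usr partition into a Merkle tree, so only the tree's root needs to be trusted (here, baked into the command line). A conceptual sketch of that construction follows; the real on-disk format additionally carries a superblock, per-level padding, and a defined salt placement, so this is an illustration of the idea, not a byte-compatible implementation:

```python
import hashlib

BLOCK = 4096           # dm-verity data/hash block size used here
FANOUT = BLOCK // 32   # 128 SHA-256 digests fit in one hash block

def verity_root_hash(data: bytes, salt: bytes = b"") -> str:
    """Simplified dm-verity-style Merkle tree: hash each block, then hash
    groups of digests level by level until a single root digest remains."""
    level = [hashlib.sha256(salt + data[i:i + BLOCK]).digest()
             for i in range(0, len(data), BLOCK)]
    while len(level) > 1:
        level = [hashlib.sha256(salt + b"".join(level[i:i + FANOUT])).digest()
                 for i in range(0, len(level), FANOUT)]
    return level[0].hex()
```

Any bit flip in the partition changes some leaf digest and therefore the root, so reads of tampered blocks fail verification at access time.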
May 17 00:19:48.115253 kernel: BTRFS info (device vda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:19:48.123902 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 17 00:19:48.133266 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 17 00:19:48.186810 ignition[677]: Ignition 2.19.0
May 17 00:19:48.186821 ignition[677]: Stage: fetch-offline
May 17 00:19:48.186865 ignition[677]: no configs at "/usr/lib/ignition/base.d"
May 17 00:19:48.186876 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 00:19:48.186973 ignition[677]: parsed url from cmdline: ""
May 17 00:19:48.186978 ignition[677]: no config URL provided
May 17 00:19:48.186983 ignition[677]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:19:48.186993 ignition[677]: no config at "/usr/lib/ignition/user.ign"
May 17 00:19:48.187029 ignition[677]: op(1): [started] loading QEMU firmware config module
May 17 00:19:48.187034 ignition[677]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 17 00:19:48.195954 ignition[677]: op(1): [finished] loading QEMU firmware config module
May 17 00:19:48.222346 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:19:48.230220 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:19:48.238255 ignition[677]: parsing config with SHA512: bb8c0375866714ccb582748b3e6e9d2d24e0b9ff62a3d5c9a9c9e10ff008210b96a90ec6adb2605b8889b9fef85300f7c4f0359e9c7bf9d85d86af9b96ef4d57
May 17 00:19:48.241436 unknown[677]: fetched base config from "system"
May 17 00:19:48.241449 unknown[677]: fetched user config from "qemu"
May 17 00:19:48.243333 ignition[677]: fetch-offline: fetch-offline passed
May 17 00:19:48.244173 ignition[677]: Ignition finished successfully
May 17 00:19:48.246652 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:19:48.251971 systemd-networkd[777]: lo: Link UP
May 17 00:19:48.251981 systemd-networkd[777]: lo: Gained carrier
May 17 00:19:48.254834 systemd-networkd[777]: Enumeration completed
May 17 00:19:48.254932 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:19:48.255542 systemd[1]: Reached target network.target - Network.
May 17 00:19:48.255805 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 17 00:19:48.261277 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:19:48.261285 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:19:48.263217 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 17 00:19:48.267200 systemd-networkd[777]: eth0: Link UP
May 17 00:19:48.267210 systemd-networkd[777]: eth0: Gained carrier
May 17 00:19:48.267218 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
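The fetch-offline stage above logs a SHA512 of the config it ended up parsing (here the base config merged with the user config fetched over qemu_fw_cfg). Assuming the digest is taken over the raw config bytes, which is an assumption and not something this log confirms, reproducing it for auditing is one call:

```python
import hashlib

def config_digest(raw_config: bytes) -> str:
    # Compare against the "parsing config with SHA512: ..." log line
    # to confirm which config a given boot actually consumed.
    return hashlib.sha512(raw_config).hexdigest()
```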
May 17 00:19:48.276188 ignition[780]: Ignition 2.19.0
May 17 00:19:48.276206 ignition[780]: Stage: kargs
May 17 00:19:48.276376 ignition[780]: no configs at "/usr/lib/ignition/base.d"
May 17 00:19:48.276388 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 00:19:48.277208 ignition[780]: kargs: kargs passed
May 17 00:19:48.277248 ignition[780]: Ignition finished successfully
May 17 00:19:48.280446 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 17 00:19:48.285154 systemd-networkd[777]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 17 00:19:48.287232 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 17 00:19:48.302175 ignition[787]: Ignition 2.19.0
May 17 00:19:48.302185 ignition[787]: Stage: disks
May 17 00:19:48.302344 ignition[787]: no configs at "/usr/lib/ignition/base.d"
May 17 00:19:48.302355 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 00:19:48.306235 ignition[787]: disks: disks passed
May 17 00:19:48.306287 ignition[787]: Ignition finished successfully
May 17 00:19:48.309266 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 17 00:19:48.311528 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 17 00:19:48.312701 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 00:19:48.314941 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:19:48.317297 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:19:48.319320 systemd[1]: Reached target basic.target - Basic System.
May 17 00:19:48.333366 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 17 00:19:48.358847 systemd-fsck[798]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 17 00:19:48.419366 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 17 00:19:48.433207 systemd[1]: Mounting sysroot.mount - /sysroot...
May 17 00:19:48.518104 kernel: EXT4-fs (vda9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none.
May 17 00:19:48.518888 systemd[1]: Mounted sysroot.mount - /sysroot.
May 17 00:19:48.520138 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 17 00:19:48.527239 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:19:48.529191 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 17 00:19:48.530057 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 17 00:19:48.530132 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:19:48.537926 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (807)
May 17 00:19:48.530162 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:19:48.541696 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:19:48.541716 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:19:48.541727 kernel: BTRFS info (device vda6): using free space tree
May 17 00:19:48.544118 kernel: BTRFS info (device vda6): auto enabling async discard
May 17 00:19:48.546195 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:19:48.551110 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 17 00:19:48.553207 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 17 00:19:48.590620 initrd-setup-root[831]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:19:48.595029 initrd-setup-root[838]: cut: /sysroot/etc/group: No such file or directory
May 17 00:19:48.599634 initrd-setup-root[845]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:19:48.603837 initrd-setup-root[852]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:19:48.696835 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 17 00:19:48.708212 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 17 00:19:48.711118 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 17 00:19:48.717099 kernel: BTRFS info (device vda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:19:48.736201 ignition[920]: INFO : Ignition 2.19.0
May 17 00:19:48.736201 ignition[920]: INFO : Stage: mount
May 17 00:19:48.738192 ignition[920]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:19:48.738192 ignition[920]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 00:19:48.738192 ignition[920]: INFO : mount: mount passed
May 17 00:19:48.738192 ignition[920]: INFO : Ignition finished successfully
May 17 00:19:48.744548 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 17 00:19:48.745915 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 17 00:19:48.756179 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 17 00:19:49.067896 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 17 00:19:49.085251 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:19:49.092096 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (934)
May 17 00:19:49.094364 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:19:49.094379 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:19:49.094401 kernel: BTRFS info (device vda6): using free space tree
May 17 00:19:49.098100 kernel: BTRFS info (device vda6): auto enabling async discard
May 17 00:19:49.099668 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:19:49.127672 ignition[951]: INFO : Ignition 2.19.0
May 17 00:19:49.127672 ignition[951]: INFO : Stage: files
May 17 00:19:49.129495 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:19:49.129495 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 00:19:49.129495 ignition[951]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:19:49.133212 ignition[951]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:19:49.133212 ignition[951]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:19:49.136196 ignition[951]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:19:49.136196 ignition[951]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:19:49.136196 ignition[951]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:19:49.136196 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 17 00:19:49.136196 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 17 00:19:49.134063 unknown[951]: wrote ssh authorized keys file for user: core
May 17 00:19:49.293869 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 17 00:19:50.236019 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 17 00:19:50.238358 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:19:50.238358 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:19:50.238358 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:19:50.238358 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:19:50.238358 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:19:50.246971 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:19:50.246971 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:19:50.246971 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:19:50.247121 systemd-networkd[777]: eth0: Gained IPv6LL
May 17 00:19:50.253311 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:19:50.255234 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:19:50.256932 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:19:50.259476 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:19:50.261933 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:19:50.264044 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
May 17 00:19:50.818861 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 17 00:19:51.039128 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:19:51.039128 ignition[951]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 17 00:19:51.042635 ignition[951]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:19:51.044690 ignition[951]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:19:51.044690 ignition[951]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 17 00:19:51.044690 ignition[951]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 17 00:19:51.044690 ignition[951]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 17 00:19:51.044690 ignition[951]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 17 00:19:51.044690 ignition[951]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 17 00:19:51.044690 ignition[951]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 17 00:19:51.064883 ignition[951]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 17 00:19:51.070231 ignition[951]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 17 00:19:51.071814 ignition[951]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 17 00:19:51.071814 ignition[951]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 17 00:19:51.071814 ignition[951]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 17 00:19:51.071814 ignition[951]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:19:51.071814 ignition[951]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:19:51.071814 ignition[951]: INFO : files: files passed
May 17 00:19:51.071814 ignition[951]: INFO : Ignition finished successfully
May 17 00:19:51.072890 systemd[1]: Finished ignition-files.service - Ignition (files).
May 17 00:19:51.082309 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 17 00:19:51.084446 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 17 00:19:51.086459 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 00:19:51.086574 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 17 00:19:51.094819 initrd-setup-root-after-ignition[979]: grep: /sysroot/oem/oem-release: No such file or directory
May 17 00:19:51.097833 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:19:51.097833 initrd-setup-root-after-ignition[981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:19:51.102236 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:19:51.100249 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:19:51.102711 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 17 00:19:51.114195 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 17 00:19:51.139115 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:19:51.139299 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 17 00:19:51.140072 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 17 00:19:51.144436 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 17 00:19:51.145025 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 17 00:19:51.148176 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 17 00:19:51.189562 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:19:51.204213 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 17 00:19:51.212619 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 17 00:19:51.212968 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:19:51.215176 systemd[1]: Stopped target timers.target - Timer Units.
May 17 00:19:51.215661 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:19:51.215772 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:19:51.220807 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 17 00:19:51.221604 systemd[1]: Stopped target basic.target - Basic System.
May 17 00:19:51.221947 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 17 00:19:51.222472 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:19:51.222794 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 17 00:19:51.223145 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 17 00:19:51.223632 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:19:51.224001 systemd[1]: Stopped target sysinit.target - System Initialization.
May 17 00:19:51.224488 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 17 00:19:51.224815 systemd[1]: Stopped target swap.target - Swaps.
May 17 00:19:51.225305 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:19:51.225412 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:19:51.226038 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 17 00:19:51.226548 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:19:51.226843 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 17 00:19:51.226931 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:19:51.227418 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:19:51.227580 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 17 00:19:51.250155 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:19:51.250323 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:19:51.253649 systemd[1]: Stopped target paths.target - Path Units.
May 17 00:19:51.255836 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:19:51.261133 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:19:51.263921 systemd[1]: Stopped target slices.target - Slice Units.
May 17 00:19:51.265775 systemd[1]: Stopped target sockets.target - Socket Units.
May 17 00:19:51.266422 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:19:51.266567 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:19:51.269519 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:19:51.269653 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:19:51.270875 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:19:51.271055 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:19:51.273673 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:19:51.273826 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 17 00:19:51.286285 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 17 00:19:51.286602 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 00:19:51.286749 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:19:51.289409 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 17 00:19:51.290682 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:19:51.290799 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:19:51.291116 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:19:51.291219 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:19:51.294777 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:19:51.294888 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 17 00:19:51.313393 ignition[1005]: INFO : Ignition 2.19.0
May 17 00:19:51.313393 ignition[1005]: INFO : Stage: umount
May 17 00:19:51.315277 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:19:51.315277 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 17 00:19:51.317960 ignition[1005]: INFO : umount: umount passed
May 17 00:19:51.318769 ignition[1005]: INFO : Ignition finished successfully
May 17 00:19:51.321477 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:19:51.322061 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:19:51.322195 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 17 00:19:51.323464 systemd[1]: Stopped target network.target - Network.
May 17 00:19:51.324730 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 00:19:51.324791 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 17 00:19:51.325109 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 00:19:51.325152 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 17 00:19:51.325601 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 00:19:51.325645 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 17 00:19:51.325927 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 17 00:19:51.325981 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 17 00:19:51.326583 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 17 00:19:51.333601 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 17 00:19:51.341177 systemd-networkd[777]: eth0: DHCPv6 lease lost
May 17 00:19:51.341924 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 00:19:51.342129 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 17 00:19:51.344802 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:19:51.344991 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 17 00:19:51.347435 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 00:19:51.347554 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:19:51.354257 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 17 00:19:51.356244 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 00:19:51.356331 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:19:51.360135 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:19:51.360195 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 17 00:19:51.363346 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 00:19:51.364410 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 17 00:19:51.366566 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 17 00:19:51.366621 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:19:51.370306 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:19:51.386938 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 00:19:51.387203 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 17 00:19:51.389318 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 00:19:51.389486 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:19:51.391843 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 00:19:51.391926 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 17 00:19:51.393393 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 00:19:51.393434 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:19:51.396273 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 00:19:51.396334 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:19:51.398820 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 00:19:51.398880 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 17 00:19:51.401212 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:19:51.401262 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:19:51.413350 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 17 00:19:51.414470 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 00:19:51.415731 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:19:51.418121 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:19:51.418173 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:19:51.422882 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 00:19:51.424105 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 17 00:19:51.536145 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:19:51.537462 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 17 00:19:51.540535 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 17 00:19:51.543264 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 00:19:51.544485 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 17 00:19:51.562504 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 17 00:19:51.569373 systemd[1]: Switching root.
May 17 00:19:51.597447 systemd-journald[193]: Journal stopped
May 17 00:19:52.779159 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
May 17 00:19:52.781499 kernel: SELinux: policy capability network_peer_controls=1
May 17 00:19:52.781529 kernel: SELinux: policy capability open_perms=1
May 17 00:19:52.781541 kernel: SELinux: policy capability extended_socket_class=1
May 17 00:19:52.781552 kernel: SELinux: policy capability always_check_network=0
May 17 00:19:52.781563 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 00:19:52.781574 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 00:19:52.781585 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 00:19:52.781596 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 00:19:52.781610 kernel: audit: type=1403 audit(1747441192.033:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 00:19:52.781631 systemd[1]: Successfully loaded SELinux policy in 40.197ms.
May 17 00:19:52.781666 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.419ms.
May 17 00:19:52.781680 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:19:52.781693 systemd[1]: Detected virtualization kvm.
May 17 00:19:52.781705 systemd[1]: Detected architecture x86-64.
May 17 00:19:52.781717 systemd[1]: Detected first boot.
May 17 00:19:52.781729 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:19:52.781740 zram_generator::config[1051]: No configuration found.
May 17 00:19:52.781756 systemd[1]: Populated /etc with preset unit settings.
May 17 00:19:52.781768 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 17 00:19:52.781780 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 17 00:19:52.781792 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 17 00:19:52.781805 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 17 00:19:52.781817 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 17 00:19:52.781829 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 17 00:19:52.781841 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 17 00:19:52.781853 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 17 00:19:52.781867 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 17 00:19:52.781879 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 17 00:19:52.781890 systemd[1]: Created slice user.slice - User and Session Slice.
May 17 00:19:52.781912 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:19:52.781924 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:19:52.781936 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 17 00:19:52.781950 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 17 00:19:52.781964 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 17 00:19:52.781980 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:19:52.781992 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 17 00:19:52.782004 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:19:52.782021 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 17 00:19:52.782033 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 17 00:19:52.782045 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 17 00:19:52.782057 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 17 00:19:52.782068 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:19:52.782100 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:19:52.782112 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:19:52.782124 systemd[1]: Reached target swap.target - Swaps.
May 17 00:19:52.782135 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 17 00:19:52.782147 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 17 00:19:52.782159 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:19:52.782173 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:19:52.782184 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:19:52.782196 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 17 00:19:52.782210 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 17 00:19:52.782222 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 17 00:19:52.782234 systemd[1]: Mounting media.mount - External Media Directory...
May 17 00:19:52.782246 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:19:52.782258 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 17 00:19:52.782269 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 17 00:19:52.782281 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 17 00:19:52.782294 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 17 00:19:52.782306 systemd[1]: Reached target machines.target - Containers.
May 17 00:19:52.782320 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 17 00:19:52.782332 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:19:52.782344 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:19:52.782356 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 17 00:19:52.782368 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:19:52.782380 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 00:19:52.782392 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:19:52.782404 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 17 00:19:52.782417 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:19:52.782431 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:19:52.782443 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 17 00:19:52.782456 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 17 00:19:52.782467 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 17 00:19:52.782480 systemd[1]: Stopped systemd-fsck-usr.service.
May 17 00:19:52.782491 kernel: loop: module loaded
May 17 00:19:52.782503 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:19:52.782514 kernel: fuse: init (API version 7.39)
May 17 00:19:52.782528 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:19:52.782540 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 17 00:19:52.782552 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 17 00:19:52.782564 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:19:52.782577 systemd[1]: verity-setup.service: Deactivated successfully.
May 17 00:19:52.782588 systemd[1]: Stopped verity-setup.service.
May 17 00:19:52.782601 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:19:52.782634 systemd-journald[1121]: Collecting audit messages is disabled.
May 17 00:19:52.782661 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 17 00:19:52.782673 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 17 00:19:52.782685 systemd-journald[1121]: Journal started
May 17 00:19:52.782713 systemd-journald[1121]: Runtime Journal (/run/log/journal/16fd39be5a694caa8911e198b1b87346) is 6.0M, max 48.4M, 42.3M free.
May 17 00:19:52.545351 systemd[1]: Queued start job for default target multi-user.target.
May 17 00:19:52.570404 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 17 00:19:52.570873 systemd[1]: systemd-journald.service: Deactivated successfully.
May 17 00:19:52.784093 kernel: ACPI: bus type drm_connector registered
May 17 00:19:52.784115 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:19:52.786525 systemd[1]: Mounted media.mount - External Media Directory.
May 17 00:19:52.787647 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 17 00:19:52.788877 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 17 00:19:52.790129 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 17 00:19:52.791380 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 17 00:19:52.792825 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:19:52.794488 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 00:19:52.794667 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 17 00:19:52.796245 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:19:52.796419 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:19:52.798050 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:19:52.798238 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 00:19:52.799607 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:19:52.799779 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:19:52.801329 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 00:19:52.801499 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 17 00:19:52.802985 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:19:52.803169 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:19:52.804556 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:19:52.806084 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 17 00:19:52.807855 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 17 00:19:52.822796 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 17 00:19:52.833236 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 17 00:19:52.835733 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 17 00:19:52.836910 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:19:52.836943 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:19:52.838983 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 17 00:19:52.841431 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 17 00:19:52.843756 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 17 00:19:52.844982 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:19:52.848816 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 17 00:19:52.851217 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 17 00:19:52.852533 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:19:52.857286 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 17 00:19:52.858603 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:19:52.859999 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:19:52.869252 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 17 00:19:52.872057 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 17 00:19:52.875294 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:19:52.877108 systemd-journald[1121]: Time spent on flushing to /var/log/journal/16fd39be5a694caa8911e198b1b87346 is 28.928ms for 950 entries.
May 17 00:19:52.877108 systemd-journald[1121]: System Journal (/var/log/journal/16fd39be5a694caa8911e198b1b87346) is 8.0M, max 195.6M, 187.6M free.
May 17 00:19:52.917257 systemd-journald[1121]: Received client request to flush runtime journal.
May 17 00:19:52.917293 kernel: loop0: detected capacity change from 0 to 140768
May 17 00:19:52.917307 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 17 00:19:52.880532 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 17 00:19:52.882225 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 17 00:19:52.891561 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 17 00:19:52.893658 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 17 00:19:52.901323 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 17 00:19:52.912271 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 17 00:19:52.919190 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 17 00:19:52.920961 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 17 00:19:52.922569 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:19:52.935108 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 17 00:19:52.938382 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 17 00:19:52.947643 kernel: loop1: detected capacity change from 0 to 142488
May 17 00:19:52.947256 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:19:52.949299 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 17 00:19:52.949986 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 17 00:19:52.974945 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
May 17 00:19:52.974966 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
May 17 00:19:52.979174 kernel: loop2: detected capacity change from 0 to 221472
May 17 00:19:52.981762 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:19:53.016132 kernel: loop3: detected capacity change from 0 to 140768
May 17 00:19:53.031117 kernel: loop4: detected capacity change from 0 to 142488
May 17 00:19:53.042129 kernel: loop5: detected capacity change from 0 to 221472
May 17 00:19:53.049634 (sd-merge)[1190]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 17 00:19:53.050245 (sd-merge)[1190]: Merged extensions into '/usr'.
May 17 00:19:53.054665 systemd[1]: Reloading requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)...
May 17 00:19:53.054683 systemd[1]: Reloading...
May 17 00:19:53.123104 zram_generator::config[1222]: No configuration found.
May 17 00:19:53.185783 ldconfig[1160]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 17 00:19:53.238379 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:19:53.290447 systemd[1]: Reloading finished in 235 ms.
May 17 00:19:53.323339 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 17 00:19:53.324901 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 17 00:19:53.338309 systemd[1]: Starting ensure-sysext.service...
May 17 00:19:53.340700 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:19:53.346857 systemd[1]: Reloading requested from client PID 1253 ('systemctl') (unit ensure-sysext.service)...
May 17 00:19:53.346874 systemd[1]: Reloading...
May 17 00:19:53.363252 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 17 00:19:53.363605 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 17 00:19:53.364589 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 17 00:19:53.364872 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
May 17 00:19:53.364957 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
May 17 00:19:53.368728 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot.
May 17 00:19:53.368741 systemd-tmpfiles[1254]: Skipping /boot
May 17 00:19:53.388794 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot.
May 17 00:19:53.391246 systemd-tmpfiles[1254]: Skipping /boot
May 17 00:19:53.404117 zram_generator::config[1281]: No configuration found.
May 17 00:19:53.513004 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:19:53.562208 systemd[1]: Reloading finished in 214 ms.
May 17 00:19:53.582662 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 17 00:19:53.596090 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:19:53.605356 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 17 00:19:53.607919 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 17 00:19:53.610345 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 17 00:19:53.615199 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:19:53.619509 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:19:53.631378 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 17 00:19:53.635416 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:19:53.635591 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:19:53.638540 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:19:53.651382 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:19:53.654430 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:19:53.656198 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:19:53.657094 systemd-udevd[1326]: Using default interface naming scheme 'v255'.
May 17 00:19:53.658561 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 17 00:19:53.661191 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:19:53.662560 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 17 00:19:53.664860 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:19:53.665062 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:19:53.665849 augenrules[1345]: No rules
May 17 00:19:53.667228 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:19:53.667425 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:19:53.669731 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 17 00:19:53.671729 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:19:53.671985 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:19:53.681899 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:19:53.697778 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:19:53.699828 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:19:53.700153 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:19:53.705262 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 17 00:19:53.707637 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 17 00:19:53.711714 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 17 00:19:53.721647 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 17 00:19:53.738377 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 17 00:19:53.738717 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:19:53.738891 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:19:53.742336 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:19:53.746383 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:19:53.757240 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1363)
May 17 00:19:53.753323 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:19:53.757220 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:19:53.757334 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:19:53.757408 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:19:53.758606 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 17 00:19:53.761111 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:19:53.761352 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:19:53.764427 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:19:53.764693 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:19:53.766686 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:19:53.766921 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:19:53.786528 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:19:53.786683 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:19:53.794218 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:19:53.797334 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 00:19:53.799602 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:19:53.809357 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:19:53.810570 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:19:53.810638 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:19:53.810660 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:19:53.811232 systemd[1]: Finished ensure-sysext.service.
May 17 00:19:53.812449 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:19:53.812646 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:19:53.814528 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:19:53.815126 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 00:19:53.817523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:19:53.827674 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:19:53.832209 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 17 00:19:53.833157 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:19:53.833350 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:19:53.840105 kernel: ACPI: button: Power Button [PWRF]
May 17 00:19:53.849358 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 17 00:19:53.849765 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 17 00:19:53.851123 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 17 00:19:53.848927 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 17 00:19:53.851916 systemd-networkd[1364]: lo: Link UP
May 17 00:19:53.851927 systemd-networkd[1364]: lo: Gained carrier
May 17 00:19:53.857062 systemd-networkd[1364]: Enumeration completed
May 17 00:19:53.861420 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 17 00:19:53.862849 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:19:53.862912 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:19:53.864354 systemd-resolved[1324]: Positive Trust Anchors:
May 17 00:19:53.864372 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:19:53.864404 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:19:53.868246 systemd-resolved[1324]: Defaulting to hostname 'linux'.
May 17 00:19:53.872299 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 17 00:19:53.876348 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:19:53.877599 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:19:53.878844 systemd[1]: Reached target network.target - Network.
May 17 00:19:53.879810 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:19:53.889353 systemd-networkd[1364]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:19:53.889364 systemd-networkd[1364]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:19:53.890157 systemd-networkd[1364]: eth0: Link UP
May 17 00:19:53.890168 systemd-networkd[1364]: eth0: Gained carrier
May 17 00:19:53.890186 systemd-networkd[1364]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:19:53.893262 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 17 00:19:53.905180 systemd-networkd[1364]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 17 00:19:53.916097 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 17 00:19:53.916107 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:19:53.917701 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 17 00:19:53.978109 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 17 00:19:54.914623 systemd-timesyncd[1412]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 17 00:19:54.914949 systemd-timesyncd[1412]: Initial clock synchronization to Sat 2025-05-17 00:19:54.914529 UTC.
May 17 00:19:54.915056 systemd-resolved[1324]: Clock change detected. Flushing caches.
May 17 00:19:54.915406 systemd[1]: Reached target time-set.target - System Time Set.
May 17 00:19:54.915791 kernel: mousedev: PS/2 mouse device common for all mice
May 17 00:19:54.929197 kernel: kvm_amd: TSC scaling supported
May 17 00:19:54.929233 kernel: kvm_amd: Nested Virtualization enabled
May 17 00:19:54.929246 kernel: kvm_amd: Nested Paging enabled
May 17 00:19:54.930357 kernel: kvm_amd: LBR virtualization supported
May 17 00:19:54.930384 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 17 00:19:54.931814 kernel: kvm_amd: Virtual GIF supported
May 17 00:19:54.952793 kernel: EDAC MC: Ver: 3.0.0
May 17 00:19:54.986260 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 17 00:19:54.991262 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:19:55.001036 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 17 00:19:55.011169 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:19:55.042908 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 17 00:19:55.044535 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:19:55.045712 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:19:55.046997 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 17 00:19:55.048349 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 17 00:19:55.049903 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 17 00:19:55.051095 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 17 00:19:55.052373 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 17 00:19:55.053640 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 17 00:19:55.053665 systemd[1]: Reached target paths.target - Path Units.
May 17 00:19:55.054649 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:19:55.056416 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 17 00:19:55.059114 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 17 00:19:55.069507 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 17 00:19:55.072121 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 17 00:19:55.073734 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 17 00:19:55.074908 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:19:55.075881 systemd[1]: Reached target basic.target - Basic System.
May 17 00:19:55.076848 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 17 00:19:55.076877 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 17 00:19:55.077869 systemd[1]: Starting containerd.service - containerd container runtime...
May 17 00:19:55.079962 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 17 00:19:55.084918 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 17 00:19:55.089244 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:19:55.089944 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 17 00:19:55.091289 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 17 00:19:55.092476 jq[1433]: false
May 17 00:19:55.092856 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 17 00:19:55.097866 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 17 00:19:55.104435 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 17 00:19:55.108937 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 17 00:19:55.110888 extend-filesystems[1434]: Found loop3
May 17 00:19:55.110888 extend-filesystems[1434]: Found loop4
May 17 00:19:55.110888 extend-filesystems[1434]: Found loop5
May 17 00:19:55.110888 extend-filesystems[1434]: Found sr0
May 17 00:19:55.110888 extend-filesystems[1434]: Found vda
May 17 00:19:55.110888 extend-filesystems[1434]: Found vda1
May 17 00:19:55.110888 extend-filesystems[1434]: Found vda2
May 17 00:19:55.110888 extend-filesystems[1434]: Found vda3
May 17 00:19:55.110888 extend-filesystems[1434]: Found usr
May 17 00:19:55.110888 extend-filesystems[1434]: Found vda4
May 17 00:19:55.110888 extend-filesystems[1434]: Found vda6
May 17 00:19:55.110888 extend-filesystems[1434]: Found vda7
May 17 00:19:55.110888 extend-filesystems[1434]: Found vda9
May 17 00:19:55.110888 extend-filesystems[1434]: Checking size of /dev/vda9
May 17 00:19:55.128844 extend-filesystems[1434]: Resized partition /dev/vda9
May 17 00:19:55.121982 dbus-daemon[1432]: [system] SELinux support is enabled
May 17 00:19:55.118958 systemd[1]: Starting systemd-logind.service - User Login Management...
May 17 00:19:55.120414 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 17 00:19:55.120939 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 17 00:19:55.122040 systemd[1]: Starting update-engine.service - Update Engine...
May 17 00:19:55.124707 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 17 00:19:55.132568 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 17 00:19:55.137268 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 17 00:19:55.139789 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1372)
May 17 00:19:55.141797 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 17 00:19:55.143800 extend-filesystems[1455]: resize2fs 1.47.1 (20-May-2024)
May 17 00:19:55.143841 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 17 00:19:55.147972 jq[1451]: true
May 17 00:19:55.144278 systemd[1]: motdgen.service: Deactivated successfully.
May 17 00:19:55.144534 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 17 00:19:55.156819 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 17 00:19:55.163196 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 17 00:19:55.163425 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 17 00:19:55.169978 update_engine[1450]: I20250517 00:19:55.169852 1450 main.cc:92] Flatcar Update Engine starting
May 17 00:19:55.171242 update_engine[1450]: I20250517 00:19:55.171188 1450 update_check_scheduler.cc:74] Next update check in 8m2s
May 17 00:19:55.180970 jq[1459]: true
May 17 00:19:55.181351 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 17 00:19:55.195471 systemd[1]: Started update-engine.service - Update Engine.
May 17 00:19:55.197248 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 17 00:19:55.197275 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 17 00:19:55.200710 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 17 00:19:55.200734 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 17 00:19:55.213067 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 17 00:19:55.259720 tar[1458]: linux-amd64/helm
May 17 00:19:55.256954 locksmithd[1484]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 17 00:19:55.260286 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button)
May 17 00:19:55.260315 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 17 00:19:55.260615 systemd-logind[1446]: New seat seat0.
May 17 00:19:55.263694 systemd[1]: Started systemd-logind.service - User Login Management.
May 17 00:19:55.271155 sshd_keygen[1454]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 17 00:19:55.292878 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 17 00:19:55.296918 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 17 00:19:55.304020 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 17 00:19:55.313794 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:19:55.314041 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:19:55.321504 extend-filesystems[1455]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 17 00:19:55.321504 extend-filesystems[1455]: old_desc_blocks = 1, new_desc_blocks = 1 May 17 00:19:55.321504 extend-filesystems[1455]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 17 00:19:55.326035 extend-filesystems[1434]: Resized filesystem in /dev/vda9 May 17 00:19:55.327207 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:19:55.329176 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 00:19:55.329417 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 17 00:19:55.333780 bash[1485]: Updated "/home/core/.ssh/authorized_keys" May 17 00:19:55.333570 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 17 00:19:55.336258 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 17 00:19:55.340160 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:19:55.348284 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 00:19:55.351601 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 17 00:19:55.353447 systemd[1]: Reached target getty.target - Login Prompts. May 17 00:19:55.436747 containerd[1460]: time="2025-05-17T00:19:55.436638709Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 17 00:19:55.463828 containerd[1460]: time="2025-05-17T00:19:55.463751404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:19:55.466274 containerd[1460]: time="2025-05-17T00:19:55.465875488Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:19:55.466274 containerd[1460]: time="2025-05-17T00:19:55.465947333Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:19:55.466274 containerd[1460]: time="2025-05-17T00:19:55.465976117Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:19:55.466274 containerd[1460]: time="2025-05-17T00:19:55.466227879Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 17 00:19:55.466404 containerd[1460]: time="2025-05-17T00:19:55.466258336Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 00:19:55.466511 containerd[1460]: time="2025-05-17T00:19:55.466473811Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:19:55.466511 containerd[1460]: time="2025-05-17T00:19:55.466506923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:19:55.466786 containerd[1460]: time="2025-05-17T00:19:55.466743697Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:19:55.466786 containerd[1460]: time="2025-05-17T00:19:55.466779103Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:19:55.466851 containerd[1460]: time="2025-05-17T00:19:55.466797127Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:19:55.466851 containerd[1460]: time="2025-05-17T00:19:55.466811043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:19:55.466930 containerd[1460]: time="2025-05-17T00:19:55.466905070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:19:55.467185 containerd[1460]: time="2025-05-17T00:19:55.467150480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:19:55.467322 containerd[1460]: time="2025-05-17T00:19:55.467288268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:19:55.467322 containerd[1460]: time="2025-05-17T00:19:55.467309027Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:19:55.467428 containerd[1460]: time="2025-05-17T00:19:55.467404366Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 17 00:19:55.467496 containerd[1460]: time="2025-05-17T00:19:55.467472664Z" level=info msg="metadata content store policy set" policy=shared May 17 00:19:55.475798 containerd[1460]: time="2025-05-17T00:19:55.473096563Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:19:55.475798 containerd[1460]: time="2025-05-17T00:19:55.473168448Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:19:55.475798 containerd[1460]: time="2025-05-17T00:19:55.473187714Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 17 00:19:55.475798 containerd[1460]: time="2025-05-17T00:19:55.473204666Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 17 00:19:55.475798 containerd[1460]: time="2025-05-17T00:19:55.473218862Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:19:55.475798 containerd[1460]: time="2025-05-17T00:19:55.473409260Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:19:55.475798 containerd[1460]: time="2025-05-17T00:19:55.473692431Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:19:55.475798 containerd[1460]: time="2025-05-17T00:19:55.473834407Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 May 17 00:19:55.475798 containerd[1460]: time="2025-05-17T00:19:55.473850307Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 00:19:55.475798 containerd[1460]: time="2025-05-17T00:19:55.473865205Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 17 00:19:55.475798 containerd[1460]: time="2025-05-17T00:19:55.473882036Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:19:55.475798 containerd[1460]: time="2025-05-17T00:19:55.473903236Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:19:55.475798 containerd[1460]: time="2025-05-17T00:19:55.473916832Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:19:55.475798 containerd[1460]: time="2025-05-17T00:19:55.473930918Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:19:55.476090 containerd[1460]: time="2025-05-17T00:19:55.473947900Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:19:55.476090 containerd[1460]: time="2025-05-17T00:19:55.473966745Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:19:55.476090 containerd[1460]: time="2025-05-17T00:19:55.473982294Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:19:55.476090 containerd[1460]: time="2025-05-17T00:19:55.473993505Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:19:55.476090 containerd[1460]: time="2025-05-17T00:19:55.474013593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:19:55.476090 containerd[1460]: time="2025-05-17T00:19:55.474030174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 17 00:19:55.476090 containerd[1460]: time="2025-05-17T00:19:55.474042287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:19:55.476090 containerd[1460]: time="2025-05-17T00:19:55.474055662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:19:55.476090 containerd[1460]: time="2025-05-17T00:19:55.474067274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:19:55.476090 containerd[1460]: time="2025-05-17T00:19:55.474080709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:19:55.476090 containerd[1460]: time="2025-05-17T00:19:55.474093463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:19:55.476090 containerd[1460]: time="2025-05-17T00:19:55.474109012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:19:55.476090 containerd[1460]: time="2025-05-17T00:19:55.474125233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 May 17 00:19:55.476090 containerd[1460]: time="2025-05-17T00:19:55.474143587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 00:19:55.476367 containerd[1460]: time="2025-05-17T00:19:55.474165037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:19:55.476367 containerd[1460]: time="2025-05-17T00:19:55.474180576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 17 00:19:55.476367 containerd[1460]: time="2025-05-17T00:19:55.474194432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:19:55.476367 containerd[1460]: time="2025-05-17T00:19:55.474220381Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 00:19:55.476367 containerd[1460]: time="2025-05-17T00:19:55.474243134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 00:19:55.476367 containerd[1460]: time="2025-05-17T00:19:55.474257791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:19:55.476367 containerd[1460]: time="2025-05-17T00:19:55.474269924Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:19:55.476367 containerd[1460]: time="2025-05-17T00:19:55.474330197Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:19:55.476367 containerd[1460]: time="2025-05-17T00:19:55.474352248Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 00:19:55.476367 containerd[1460]: time="2025-05-17T00:19:55.474364682Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:19:55.476367 containerd[1460]: time="2025-05-17T00:19:55.474378367Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 00:19:55.476367 containerd[1460]: time="2025-05-17T00:19:55.474390340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:19:55.476367 containerd[1460]: time="2025-05-17T00:19:55.474405679Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 17 00:19:55.476367 containerd[1460]: time="2025-05-17T00:19:55.474419695Z" level=info msg="NRI interface is disabled by configuration." May 17 00:19:55.476661 containerd[1460]: time="2025-05-17T00:19:55.474433220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 17 00:19:55.476690 containerd[1460]: time="2025-05-17T00:19:55.474720980Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:19:55.476690 containerd[1460]: time="2025-05-17T00:19:55.474809626Z" level=info msg="Connect containerd service" May 17 00:19:55.476690 containerd[1460]: time="2025-05-17T00:19:55.474846075Z" level=info msg="using legacy CRI server" May 17 00:19:55.476690 containerd[1460]: time="2025-05-17T00:19:55.474853018Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:19:55.476690 containerd[1460]: time="2025-05-17T00:19:55.474945231Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:19:55.476690 containerd[1460]: time="2025-05-17T00:19:55.475607713Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:19:55.476690 
containerd[1460]: time="2025-05-17T00:19:55.475782191Z" level=info msg="Start subscribing containerd event" May 17 00:19:55.476690 containerd[1460]: time="2025-05-17T00:19:55.475828918Z" level=info msg="Start recovering state" May 17 00:19:55.476690 containerd[1460]: time="2025-05-17T00:19:55.475892417Z" level=info msg="Start event monitor" May 17 00:19:55.476690 containerd[1460]: time="2025-05-17T00:19:55.475914278Z" level=info msg="Start snapshots syncer" May 17 00:19:55.476690 containerd[1460]: time="2025-05-17T00:19:55.475924187Z" level=info msg="Start cni network conf syncer for default" May 17 00:19:55.476690 containerd[1460]: time="2025-05-17T00:19:55.475932382Z" level=info msg="Start streaming server" May 17 00:19:55.477231 containerd[1460]: time="2025-05-17T00:19:55.477211872Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:19:55.477430 containerd[1460]: time="2025-05-17T00:19:55.477411978Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:19:55.477587 containerd[1460]: time="2025-05-17T00:19:55.477572028Z" level=info msg="containerd successfully booted in 0.042185s" May 17 00:19:55.477672 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:19:55.653918 tar[1458]: linux-amd64/LICENSE May 17 00:19:55.653918 tar[1458]: linux-amd64/README.md May 17 00:19:55.668934 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 00:19:56.492918 systemd-networkd[1364]: eth0: Gained IPv6LL May 17 00:19:56.496045 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 00:19:56.497848 systemd[1]: Reached target network-online.target - Network is Online. May 17 00:19:56.509961 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 17 00:19:56.512377 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:19:56.514625 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 00:19:56.535972 systemd[1]: coreos-metadata.service: Deactivated successfully. May 17 00:19:56.536724 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 17 00:19:56.538570 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:19:56.541051 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 17 00:19:57.208414 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:19:57.210151 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:19:57.211379 systemd[1]: Startup finished in 707ms (kernel) + 6.342s (initrd) + 4.282s (userspace) = 11.333s. May 17 00:19:57.238263 (kubelet)[1546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:19:57.637354 kubelet[1546]: E0517 00:19:57.637220 1546 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:19:57.641484 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:19:57.641697 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 17 00:19:59.704979 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 00:19:59.706163 systemd[1]: Started sshd@0-10.0.0.98:22-10.0.0.1:54952.service - OpenSSH per-connection server daemon (10.0.0.1:54952). May 17 00:19:59.756120 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 54952 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:19:59.758325 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:19:59.767282 systemd-logind[1446]: New session 1 of user core. May 17 00:19:59.768581 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:19:59.781080 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:19:59.793647 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:19:59.806161 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:19:59.809562 (systemd)[1563]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:19:59.927383 systemd[1563]: Queued start job for default target default.target. May 17 00:19:59.943267 systemd[1563]: Created slice app.slice - User Application Slice. May 17 00:19:59.943296 systemd[1563]: Reached target paths.target - Paths. May 17 00:19:59.943310 systemd[1563]: Reached target timers.target - Timers. May 17 00:19:59.944912 systemd[1563]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:19:59.957409 systemd[1563]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:19:59.957583 systemd[1563]: Reached target sockets.target - Sockets. May 17 00:19:59.957602 systemd[1563]: Reached target basic.target - Basic System. May 17 00:19:59.957646 systemd[1563]: Reached target default.target - Main User Target. May 17 00:19:59.957689 systemd[1563]: Startup finished in 140ms. May 17 00:19:59.958129 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:19:59.959917 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:20:00.021546 systemd[1]: Started sshd@1-10.0.0.98:22-10.0.0.1:54964.service - OpenSSH per-connection server daemon (10.0.0.1:54964). May 17 00:20:00.063464 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 54964 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:20:00.065296 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:00.069397 systemd-logind[1446]: New session 2 of user core. May 17 00:20:00.078988 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:20:00.133911 sshd[1574]: pam_unix(sshd:session): session closed for user core May 17 00:20:00.145001 systemd[1]: sshd@1-10.0.0.98:22-10.0.0.1:54964.service: Deactivated successfully. May 17 00:20:00.146963 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:20:00.148409 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit. May 17 00:20:00.159115 systemd[1]: Started sshd@2-10.0.0.98:22-10.0.0.1:54966.service - OpenSSH per-connection server daemon (10.0.0.1:54966). May 17 00:20:00.160071 systemd-logind[1446]: Removed session 2. 
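
Also worth noting in this block: the first login for core starts a second systemd instance. user-runtime-dir@500.service creates /run/user/500, then user@500.service runs a per-user manager (logged as systemd[1563]) that queues its own default.target and finishes startup in 140ms, independently of PID 1. A sketch querying logind for that state, assuming systemd's stock loginctl:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // RuntimePath is the /run/user/500 directory created by
        // user-runtime-dir@500.service above.
        out, err := exec.Command("loginctl", "show-user", "core",
            "--property=RuntimePath", "--property=State").Output()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(string(out))
    }
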
May 17 00:20:00.195262 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 54966 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:20:00.197017 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:00.201073 systemd-logind[1446]: New session 3 of user core. May 17 00:20:00.210966 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:20:00.260976 sshd[1581]: pam_unix(sshd:session): session closed for user core May 17 00:20:00.277012 systemd[1]: sshd@2-10.0.0.98:22-10.0.0.1:54966.service: Deactivated successfully. May 17 00:20:00.278612 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:20:00.279995 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit. May 17 00:20:00.291076 systemd[1]: Started sshd@3-10.0.0.98:22-10.0.0.1:54976.service - OpenSSH per-connection server daemon (10.0.0.1:54976). May 17 00:20:00.292013 systemd-logind[1446]: Removed session 3. May 17 00:20:00.324670 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 54976 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:20:00.326459 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:00.330471 systemd-logind[1446]: New session 4 of user core. May 17 00:20:00.339937 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:20:00.395306 sshd[1588]: pam_unix(sshd:session): session closed for user core May 17 00:20:00.408822 systemd[1]: sshd@3-10.0.0.98:22-10.0.0.1:54976.service: Deactivated successfully. May 17 00:20:00.410362 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:20:00.411687 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. May 17 00:20:00.412858 systemd[1]: Started sshd@4-10.0.0.98:22-10.0.0.1:54992.service - OpenSSH per-connection server daemon (10.0.0.1:54992). May 17 00:20:00.413550 systemd-logind[1446]: Removed session 4. May 17 00:20:00.460920 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 54992 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:20:00.462523 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:00.466347 systemd-logind[1446]: New session 5 of user core. May 17 00:20:00.476948 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:20:00.536459 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:20:00.536881 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:20:00.557484 sudo[1598]: pam_unix(sudo:session): session closed for user root May 17 00:20:00.559637 sshd[1595]: pam_unix(sshd:session): session closed for user core May 17 00:20:00.571010 systemd[1]: sshd@4-10.0.0.98:22-10.0.0.1:54992.service: Deactivated successfully. May 17 00:20:00.572934 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:20:00.574341 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. May 17 00:20:00.586202 systemd[1]: Started sshd@5-10.0.0.98:22-10.0.0.1:55008.service - OpenSSH per-connection server daemon (10.0.0.1:55008). May 17 00:20:00.587098 systemd-logind[1446]: Removed session 5. 
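
Every login above gets its own sshd@N-...service instance (the "OpenSSH per-connection server daemon" units), so a single session can be killed or resource-limited without touching the listener. The "Accepted publickey" records follow a fixed shape; a small sketch that splits one into fields, using a line from this log:

    package main

    import (
        "fmt"
        "regexp"
    )

    var acceptedRe = regexp.MustCompile(
        `Accepted publickey for (\S+) from (\S+) port (\d+) ssh2: (\S+) (\S+)`)

    func main() {
        // Verbatim record from the log above.
        line := `sshd[1588]: Accepted publickey for core from 10.0.0.1 port 54976 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU`
        if m := acceptedRe.FindStringSubmatch(line); m != nil {
            fmt.Printf("user=%s addr=%s port=%s keytype=%s fingerprint=%s\n",
                m[1], m[2], m[3], m[4], m[5])
        }
    }
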
May 17 00:20:00.621492 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 55008 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:20:00.623409 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:00.627640 systemd-logind[1446]: New session 6 of user core. May 17 00:20:00.638045 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:20:00.693385 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:20:00.693822 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:20:00.697909 sudo[1607]: pam_unix(sudo:session): session closed for user root May 17 00:20:00.704252 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:20:00.704654 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:20:00.730242 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:20:00.732545 auditctl[1610]: No rules May 17 00:20:00.733849 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:20:00.734138 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:20:00.736016 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:20:00.767291 augenrules[1628]: No rules May 17 00:20:00.769152 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:20:00.770760 sudo[1606]: pam_unix(sudo:session): session closed for user root May 17 00:20:00.772660 sshd[1603]: pam_unix(sshd:session): session closed for user core May 17 00:20:00.786443 systemd[1]: sshd@5-10.0.0.98:22-10.0.0.1:55008.service: Deactivated successfully. May 17 00:20:00.788111 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:20:00.789669 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit. May 17 00:20:00.790860 systemd[1]: Started sshd@6-10.0.0.98:22-10.0.0.1:55024.service - OpenSSH per-connection server daemon (10.0.0.1:55024). May 17 00:20:00.791587 systemd-logind[1446]: Removed session 6. May 17 00:20:00.827704 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 55024 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:20:00.829245 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:20:00.833110 systemd-logind[1446]: New session 7 of user core. May 17 00:20:00.842890 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:20:00.896123 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:20:00.896482 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:20:01.181052 systemd[1]: Starting docker.service - Docker Application Container Engine... May 17 00:20:01.181232 (dockerd)[1659]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:20:01.465323 dockerd[1659]: time="2025-05-17T00:20:01.465155603Z" level=info msg="Starting up" May 17 00:20:02.153433 dockerd[1659]: time="2025-05-17T00:20:02.153355399Z" level=info msg="Loading containers: start." 
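
The sudo sequence above removes the shipped audit rule files and restarts audit-rules.service; both auditctl (on stop) and augenrules (on start) then report "No rules", meaning the kernel's audit ruleset is now empty. A sketch of the equivalent check, assuming auditd's standard auditctl tool:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // `auditctl -l` lists the rules currently loaded in the kernel;
        // after the restart above it prints "No rules".
        out, err := exec.Command("auditctl", "-l").CombinedOutput()
        if err != nil {
            log.Fatalf("auditctl: %v\n%s", err, out)
        }
        fmt.Printf("%s", out)
    }
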
May 17 00:20:02.252789 kernel: Initializing XFRM netlink socket May 17 00:20:02.331159 systemd-networkd[1364]: docker0: Link UP May 17 00:20:02.358322 dockerd[1659]: time="2025-05-17T00:20:02.358283446Z" level=info msg="Loading containers: done." May 17 00:20:02.371525 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2208899628-merged.mount: Deactivated successfully. May 17 00:20:02.374429 dockerd[1659]: time="2025-05-17T00:20:02.374373055Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:20:02.374524 dockerd[1659]: time="2025-05-17T00:20:02.374499252Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:20:02.374625 dockerd[1659]: time="2025-05-17T00:20:02.374601133Z" level=info msg="Daemon has completed initialization" May 17 00:20:02.411738 dockerd[1659]: time="2025-05-17T00:20:02.411315248Z" level=info msg="API listen on /run/docker.sock" May 17 00:20:02.411533 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 00:20:03.202360 containerd[1460]: time="2025-05-17T00:20:03.202311842Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 00:20:04.152156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2932211389.mount: Deactivated successfully. May 17 00:20:05.598242 containerd[1460]: time="2025-05-17T00:20:05.598189052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:05.641846 containerd[1460]: time="2025-05-17T00:20:05.641795025Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=28078845" May 17 00:20:05.691262 containerd[1460]: time="2025-05-17T00:20:05.691215904Z" level=info msg="ImageCreate event name:\"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:05.701417 containerd[1460]: time="2025-05-17T00:20:05.701374990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:05.702400 containerd[1460]: time="2025-05-17T00:20:05.702368403Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"28075645\" in 2.500008431s" May 17 00:20:05.702400 containerd[1460]: time="2025-05-17T00:20:05.702397057Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\"" May 17 00:20:05.702921 containerd[1460]: time="2025-05-17T00:20:05.702887928Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 00:20:07.003568 containerd[1460]: time="2025-05-17T00:20:07.003501086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:07.004382 
containerd[1460]: time="2025-05-17T00:20:07.004304814Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=24713522" May 17 00:20:07.005627 containerd[1460]: time="2025-05-17T00:20:07.005591467Z" level=info msg="ImageCreate event name:\"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:07.008381 containerd[1460]: time="2025-05-17T00:20:07.008339572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:07.009384 containerd[1460]: time="2025-05-17T00:20:07.009349476Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"26315362\" in 1.306429749s" May 17 00:20:07.009384 containerd[1460]: time="2025-05-17T00:20:07.009382578Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\"" May 17 00:20:07.009864 containerd[1460]: time="2025-05-17T00:20:07.009842862Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 00:20:07.891975 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:20:07.900114 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:20:08.621726 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:20:08.626427 (kubelet)[1878]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:20:08.842195 containerd[1460]: time="2025-05-17T00:20:08.842120681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:08.842955 containerd[1460]: time="2025-05-17T00:20:08.842898931Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=18784311" May 17 00:20:08.844162 containerd[1460]: time="2025-05-17T00:20:08.844109582Z" level=info msg="ImageCreate event name:\"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:08.848413 containerd[1460]: time="2025-05-17T00:20:08.848366947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:08.849902 containerd[1460]: time="2025-05-17T00:20:08.849869866Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"20386169\" in 1.840001457s" May 17 00:20:08.849966 containerd[1460]: time="2025-05-17T00:20:08.849906535Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\"" May 17 00:20:08.851066 containerd[1460]: time="2025-05-17T00:20:08.850999635Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:20:08.853377 kubelet[1878]: E0517 00:20:08.853346 1878 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:20:08.860337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:20:08.860548 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:20:10.423997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2604357546.mount: Deactivated successfully. 
May 17 00:20:11.281184 containerd[1460]: time="2025-05-17T00:20:11.281106301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:11.284491 containerd[1460]: time="2025-05-17T00:20:11.284429354Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355623" May 17 00:20:11.285633 containerd[1460]: time="2025-05-17T00:20:11.285594960Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:11.287717 containerd[1460]: time="2025-05-17T00:20:11.287691092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:11.288373 containerd[1460]: time="2025-05-17T00:20:11.288314602Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 2.437284779s" May 17 00:20:11.288373 containerd[1460]: time="2025-05-17T00:20:11.288363934Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 17 00:20:11.288929 containerd[1460]: time="2025-05-17T00:20:11.288893808Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:20:11.840699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1362373550.mount: Deactivated successfully. 
May 17 00:20:12.585795 containerd[1460]: time="2025-05-17T00:20:12.585710886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:12.586655 containerd[1460]: time="2025-05-17T00:20:12.586609452Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 17 00:20:12.588029 containerd[1460]: time="2025-05-17T00:20:12.587988308Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:12.590835 containerd[1460]: time="2025-05-17T00:20:12.590809239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:12.591868 containerd[1460]: time="2025-05-17T00:20:12.591822941Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.302896421s" May 17 00:20:12.591916 containerd[1460]: time="2025-05-17T00:20:12.591871271Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 00:20:12.592418 containerd[1460]: time="2025-05-17T00:20:12.592376299Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:20:13.118614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3806827597.mount: Deactivated successfully. 
May 17 00:20:13.126057 containerd[1460]: time="2025-05-17T00:20:13.126006306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:13.127066 containerd[1460]: time="2025-05-17T00:20:13.126994269Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 17 00:20:13.128276 containerd[1460]: time="2025-05-17T00:20:13.128239755Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:13.131472 containerd[1460]: time="2025-05-17T00:20:13.131419469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:13.132085 containerd[1460]: time="2025-05-17T00:20:13.132040023Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 539.632456ms" May 17 00:20:13.132085 containerd[1460]: time="2025-05-17T00:20:13.132075971Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 00:20:13.132569 containerd[1460]: time="2025-05-17T00:20:13.132546443Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 00:20:13.639261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3271339126.mount: Deactivated successfully. May 17 00:20:16.741927 containerd[1460]: time="2025-05-17T00:20:16.741867518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:16.742517 containerd[1460]: time="2025-05-17T00:20:16.742486820Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" May 17 00:20:16.745242 containerd[1460]: time="2025-05-17T00:20:16.745199969Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:16.748205 containerd[1460]: time="2025-05-17T00:20:16.748162616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:16.749161 containerd[1460]: time="2025-05-17T00:20:16.749124520Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.616487727s" May 17 00:20:16.749161 containerd[1460]: time="2025-05-17T00:20:16.749154356Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 17 00:20:18.998119 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
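
The "Scheduled restart job, restart counter is at 2" entry shows kubelet.service has Restart= configured: each config.yaml failure ends the unit, and systemd queues another attempt and bumps the counter. The counter is exposed as a unit property (NRestarts, available in reasonably recent systemd), so it can be read back directly:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // Prints e.g. "NRestarts=2", matching the scheduled-restart
        // records in this log.
        out, err := exec.Command("systemctl", "show", "kubelet.service",
            "--property=NRestarts").Output()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(string(out))
    }
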
May 17 00:20:19.007972 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:20:19.020583 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:20:19.020699 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 00:20:19.021031 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:20:19.023314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:20:19.049145 systemd[1]: Reloading requested from client PID 2039 ('systemctl') (unit session-7.scope)... May 17 00:20:19.049158 systemd[1]: Reloading... May 17 00:20:19.124795 zram_generator::config[2078]: No configuration found. May 17 00:20:19.356738 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:20:19.433846 systemd[1]: Reloading finished in 384 ms. May 17 00:20:19.482236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:20:19.485969 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:20:19.486722 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:20:19.486976 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:20:19.488510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:20:19.650737 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:20:19.654959 (kubelet)[2128]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:20:19.697932 kubelet[2128]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:20:19.697932 kubelet[2128]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:20:19.697932 kubelet[2128]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
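
The three kubelet deprecation warnings differ in remedy: --container-runtime-endpoint and --volume-plugin-dir have KubeletConfiguration equivalents, while --pod-infra-container-image has none, since the sandbox image now belongs to the CRI runtime (containerd's dump earlier shows SandboxImage:registry.k8s.io/pause:3.8). A sketch of the config-file lines that would replace the first two flags; the field names are the assumed v1beta1 ones, and the values come from this log (the containerd socket above, the Flexvolume directory mentioned just below):

    package main

    import "fmt"

    func main() {
        // Assumed KubeletConfiguration (v1beta1) equivalents of the
        // deprecated flags; paths taken from elsewhere in this log.
        additions := "containerRuntimeEndpoint: unix:///run/containerd/containerd.sock\n" +
            "volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/\n"
        fmt.Print(additions)
    }
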
May 17 00:20:19.698310 kubelet[2128]: I0517 00:20:19.697989 2128 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:20:20.174666 kubelet[2128]: I0517 00:20:20.174618 2128 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:20:20.174666 kubelet[2128]: I0517 00:20:20.174650 2128 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:20:20.174992 kubelet[2128]: I0517 00:20:20.174959 2128 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:20:20.196556 kubelet[2128]: E0517 00:20:20.196508 2128 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" May 17 00:20:20.197864 kubelet[2128]: I0517 00:20:20.197824 2128 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:20:20.203691 kubelet[2128]: E0517 00:20:20.203655 2128 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:20:20.203691 kubelet[2128]: I0517 00:20:20.203689 2128 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:20:20.209911 kubelet[2128]: I0517 00:20:20.209884 2128 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:20:20.210433 kubelet[2128]: I0517 00:20:20.210406 2128 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:20:20.210582 kubelet[2128]: I0517 00:20:20.210547 2128 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:20:20.210738 kubelet[2128]: I0517 00:20:20.210572 2128 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:20:20.210844 kubelet[2128]: I0517 00:20:20.210751 2128 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:20:20.210844 kubelet[2128]: I0517 00:20:20.210774 2128 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:20:20.210893 kubelet[2128]: I0517 00:20:20.210884 2128 state_mem.go:36] "Initialized new in-memory state store" May 17 00:20:20.212800 kubelet[2128]: I0517 00:20:20.212756 2128 kubelet.go:408] "Attempting to sync node with API server" May 17 00:20:20.212800 kubelet[2128]: I0517 00:20:20.212794 2128 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:20:20.212878 kubelet[2128]: I0517 00:20:20.212832 2128 kubelet.go:314] "Adding apiserver pod source" May 17 00:20:20.212878 kubelet[2128]: I0517 00:20:20.212867 2128 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:20:20.273672 kubelet[2128]: I0517 00:20:20.273558 2128 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:20:20.274011 kubelet[2128]: I0517 00:20:20.273983 2128 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:20:20.274751 kubelet[2128]: W0517 00:20:20.274722 2128 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 17 00:20:20.275759 kubelet[2128]: W0517 00:20:20.275628 2128 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused May 17 00:20:20.275759 kubelet[2128]: E0517 00:20:20.275670 2128 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" May 17 00:20:20.275759 kubelet[2128]: W0517 00:20:20.275655 2128 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused May 17 00:20:20.275759 kubelet[2128]: E0517 00:20:20.275710 2128 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" May 17 00:20:20.276555 kubelet[2128]: I0517 00:20:20.276528 2128 server.go:1274] "Started kubelet" May 17 00:20:20.278137 kubelet[2128]: I0517 00:20:20.276665 2128 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:20:20.278514 kubelet[2128]: I0517 00:20:20.278493 2128 server.go:449] "Adding debug handlers to kubelet server" May 17 00:20:20.279345 kubelet[2128]: I0517 00:20:20.278650 2128 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:20:20.280988 kubelet[2128]: I0517 00:20:20.280316 2128 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:20:20.280988 kubelet[2128]: I0517 00:20:20.280541 2128 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:20:20.280988 kubelet[2128]: I0517 00:20:20.280657 2128 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:20:20.285235 kubelet[2128]: E0517 00:20:20.280909 2128 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.98:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.98:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184028881074c091 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-17 00:20:20.276502673 +0000 UTC m=+0.617818912,LastTimestamp:2025-05-17 00:20:20.276502673 +0000 UTC m=+0.617818912,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 17 00:20:20.285235 kubelet[2128]: E0517 00:20:20.285015 2128 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:20:20.285235 kubelet[2128]: 
I0517 00:20:20.285044 2128 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:20:20.285388 kubelet[2128]: I0517 00:20:20.285266 2128 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:20:20.285472 kubelet[2128]: I0517 00:20:20.285450 2128 reconciler.go:26] "Reconciler: start to sync state" May 17 00:20:20.285721 kubelet[2128]: E0517 00:20:20.285690 2128 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="200ms" May 17 00:20:20.286115 kubelet[2128]: W0517 00:20:20.286075 2128 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused May 17 00:20:20.286172 kubelet[2128]: E0517 00:20:20.286119 2128 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" May 17 00:20:20.286343 kubelet[2128]: I0517 00:20:20.286319 2128 factory.go:221] Registration of the systemd container factory successfully May 17 00:20:20.286587 kubelet[2128]: I0517 00:20:20.286396 2128 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:20:20.287080 kubelet[2128]: E0517 00:20:20.286975 2128 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:20:20.287553 kubelet[2128]: I0517 00:20:20.287530 2128 factory.go:221] Registration of the containerd container factory successfully May 17 00:20:20.299189 kubelet[2128]: I0517 00:20:20.299152 2128 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:20:20.301000 kubelet[2128]: I0517 00:20:20.300382 2128 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:20:20.301000 kubelet[2128]: I0517 00:20:20.300407 2128 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:20:20.301000 kubelet[2128]: I0517 00:20:20.300433 2128 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:20:20.301000 kubelet[2128]: E0517 00:20:20.300472 2128 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:20:20.301233 kubelet[2128]: W0517 00:20:20.301206 2128 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused May 17 00:20:20.301293 kubelet[2128]: E0517 00:20:20.301244 2128 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" May 17 00:20:20.301594 kubelet[2128]: I0517 00:20:20.301576 2128 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:20:20.301594 kubelet[2128]: I0517 00:20:20.301591 2128 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:20:20.301650 kubelet[2128]: I0517 00:20:20.301606 2128 state_mem.go:36] "Initialized new in-memory state store" May 17 00:20:20.385103 kubelet[2128]: E0517 00:20:20.385077 2128 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:20:20.401484 kubelet[2128]: E0517 00:20:20.401433 2128 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:20:20.485818 kubelet[2128]: E0517 00:20:20.485693 2128 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:20:20.486305 kubelet[2128]: E0517 00:20:20.486243 2128 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="400ms" May 17 00:20:20.586701 kubelet[2128]: E0517 00:20:20.586623 2128 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:20:20.601864 kubelet[2128]: E0517 00:20:20.601810 2128 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:20:20.687627 kubelet[2128]: E0517 00:20:20.687577 2128 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:20:20.725189 kubelet[2128]: I0517 00:20:20.725096 2128 policy_none.go:49] "None policy: Start" May 17 00:20:20.726244 kubelet[2128]: I0517 00:20:20.726219 2128 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:20:20.726288 kubelet[2128]: I0517 00:20:20.726259 2128 state_mem.go:35] "Initializing new in-memory state store" May 17 00:20:20.735599 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 17 00:20:20.750832 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 17 00:20:20.762403 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 17 00:20:20.763969 kubelet[2128]: I0517 00:20:20.763925 2128 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:20:20.764176 kubelet[2128]: I0517 00:20:20.764151 2128 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:20:20.764216 kubelet[2128]: I0517 00:20:20.764166 2128 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:20:20.764649 kubelet[2128]: I0517 00:20:20.764411 2128 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:20:20.765583 kubelet[2128]: E0517 00:20:20.765560 2128 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 17 00:20:20.865562 kubelet[2128]: I0517 00:20:20.865508 2128 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:20:20.865914 kubelet[2128]: E0517 00:20:20.865875 2128 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" May 17 00:20:20.887526 kubelet[2128]: E0517 00:20:20.887471 2128 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="800ms" May 17 00:20:21.010608 systemd[1]: Created slice kubepods-burstable-pod3b5f937c8c7173914fe1bae834bba338.slice - libcontainer container kubepods-burstable-pod3b5f937c8c7173914fe1bae834bba338.slice. May 17 00:20:21.037533 systemd[1]: Created slice kubepods-burstable-poda3416600bab1918b24583836301c9096.slice - libcontainer container kubepods-burstable-poda3416600bab1918b24583836301c9096.slice. May 17 00:20:21.041880 systemd[1]: Created slice kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice - libcontainer container kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice. 
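
The kubepods-burstable-pod<uid>.slice units created above follow the kubelet's systemd cgroup driver convention: one slice per pod, nested under the QoS-class slice, with dashes in the pod UID rewritten as underscores (visible later for the besteffort pod f103701d-61ab-...). A sketch of the naming rule as inferred from these log entries, not taken from kubelet source; the guaranteed class, which nests directly under kubepods.slice, is ignored here.

package main

import (
	"fmt"
	"strings"
)

// podSlice reproduces the unit names seen in this log for burstable and
// besteffort pods; the escaping rule (dash -> underscore) is inferred from
// the besteffort pod UID that appears further down.
func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("burstable", "3b5f937c8c7173914fe1bae834bba338"))
	fmt.Println(podSlice("besteffort", "f103701d-61ab-4ed5-8cbe-3a870a84a292"))
}
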
May 17 00:20:21.067785 kubelet[2128]: I0517 00:20:21.067729 2128 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:20:21.068178 kubelet[2128]: E0517 00:20:21.068141 2128 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" May 17 00:20:21.089622 kubelet[2128]: I0517 00:20:21.089561 2128 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b5f937c8c7173914fe1bae834bba338-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3b5f937c8c7173914fe1bae834bba338\") " pod="kube-system/kube-apiserver-localhost" May 17 00:20:21.089622 kubelet[2128]: I0517 00:20:21.089607 2128 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b5f937c8c7173914fe1bae834bba338-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3b5f937c8c7173914fe1bae834bba338\") " pod="kube-system/kube-apiserver-localhost" May 17 00:20:21.089759 kubelet[2128]: I0517 00:20:21.089636 2128 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:20:21.089759 kubelet[2128]: I0517 00:20:21.089660 2128 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b5f937c8c7173914fe1bae834bba338-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3b5f937c8c7173914fe1bae834bba338\") " pod="kube-system/kube-apiserver-localhost" May 17 00:20:21.089759 kubelet[2128]: I0517 00:20:21.089686 2128 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:20:21.089759 kubelet[2128]: I0517 00:20:21.089701 2128 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:20:21.089759 kubelet[2128]: I0517 00:20:21.089726 2128 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:20:21.089930 kubelet[2128]: I0517 00:20:21.089752 2128 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " 
pod="kube-system/kube-controller-manager-localhost" May 17 00:20:21.089930 kubelet[2128]: I0517 00:20:21.089834 2128 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 17 00:20:21.143342 kubelet[2128]: W0517 00:20:21.143255 2128 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused May 17 00:20:21.143342 kubelet[2128]: E0517 00:20:21.143343 2128 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" May 17 00:20:21.146999 kubelet[2128]: W0517 00:20:21.146970 2128 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused May 17 00:20:21.147050 kubelet[2128]: E0517 00:20:21.147001 2128 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" May 17 00:20:21.231825 kubelet[2128]: W0517 00:20:21.231705 2128 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused May 17 00:20:21.231991 kubelet[2128]: E0517 00:20:21.231839 2128 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" May 17 00:20:21.335452 kubelet[2128]: E0517 00:20:21.335301 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:21.336153 containerd[1460]: time="2025-05-17T00:20:21.336099845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3b5f937c8c7173914fe1bae834bba338,Namespace:kube-system,Attempt:0,}" May 17 00:20:21.340411 kubelet[2128]: E0517 00:20:21.340378 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:21.340840 containerd[1460]: time="2025-05-17T00:20:21.340803648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,}" May 17 00:20:21.346076 kubelet[2128]: E0517 
00:20:21.346053 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:21.346414 containerd[1460]: time="2025-05-17T00:20:21.346318732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,}" May 17 00:20:21.454345 kubelet[2128]: W0517 00:20:21.454289 2128 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused May 17 00:20:21.454345 kubelet[2128]: E0517 00:20:21.454338 2128 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" May 17 00:20:21.470189 kubelet[2128]: I0517 00:20:21.470150 2128 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:20:21.470528 kubelet[2128]: E0517 00:20:21.470489 2128 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" May 17 00:20:21.688554 kubelet[2128]: E0517 00:20:21.688399 2128 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="1.6s" May 17 00:20:21.905844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3329828200.mount: Deactivated successfully. 
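
The lease controller's retry interval doubles on each failure — 200ms, 400ms, 800ms, now 1.6s, and 3.2s further down — classic exponential backoff while the API server stays unreachable. A toy reproduction of that cadence follows; the 7s ceiling is an assumption, since the log never runs long enough to show one.

package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond // first interval reported in the log
	maxInterval := 7 * time.Second     // assumed cap, never reached above
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d: will retry in %v\n", attempt, interval)
		interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s -> 3.2s, as logged
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
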
May 17 00:20:21.917067 containerd[1460]: time="2025-05-17T00:20:21.916999106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:20:21.918044 containerd[1460]: time="2025-05-17T00:20:21.918017496Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:20:21.918916 containerd[1460]: time="2025-05-17T00:20:21.918859034Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 17 00:20:21.919895 containerd[1460]: time="2025-05-17T00:20:21.919843761Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:20:21.920782 containerd[1460]: time="2025-05-17T00:20:21.920703694Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:20:21.921654 containerd[1460]: time="2025-05-17T00:20:21.921616927Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:20:21.922518 containerd[1460]: time="2025-05-17T00:20:21.922482521Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:20:21.924520 containerd[1460]: time="2025-05-17T00:20:21.924472995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:20:21.926091 containerd[1460]: time="2025-05-17T00:20:21.926040344Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 585.1636ms" May 17 00:20:21.927439 containerd[1460]: time="2025-05-17T00:20:21.927363526Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 591.186898ms" May 17 00:20:21.931064 containerd[1460]: time="2025-05-17T00:20:21.931001340Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 584.628706ms" May 17 00:20:22.209716 containerd[1460]: time="2025-05-17T00:20:22.209603798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:20:22.209716 containerd[1460]: time="2025-05-17T00:20:22.209673750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:20:22.210482 containerd[1460]: time="2025-05-17T00:20:22.209692164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:22.210482 containerd[1460]: time="2025-05-17T00:20:22.210622609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:22.215520 containerd[1460]: time="2025-05-17T00:20:22.215209152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:20:22.215520 containerd[1460]: time="2025-05-17T00:20:22.215264977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:20:22.215520 containerd[1460]: time="2025-05-17T00:20:22.215279815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:22.215520 containerd[1460]: time="2025-05-17T00:20:22.215385744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:22.249028 containerd[1460]: time="2025-05-17T00:20:22.248898434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:20:22.249028 containerd[1460]: time="2025-05-17T00:20:22.248979225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:20:22.249207 containerd[1460]: time="2025-05-17T00:20:22.249019331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:22.249207 containerd[1460]: time="2025-05-17T00:20:22.249131291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:22.273338 kubelet[2128]: I0517 00:20:22.273294 2128 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:20:22.281592 kubelet[2128]: E0517 00:20:22.273727 2128 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" May 17 00:20:22.280046 systemd[1]: Started cri-containerd-630ff53ba41a6d50620b2eb6a1cf94e9e6dc2f7dd413c287558fff41dfba744d.scope - libcontainer container 630ff53ba41a6d50620b2eb6a1cf94e9e6dc2f7dd413c287558fff41dfba744d. 
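
Each pause-image pull above reports its wall time ("585.1636ms", "591.186898ms", "584.628706ms") in Go's duration syntax, so the values can be read back directly with time.ParseDuration — convenient when averaging pull latencies out of containerd logs like these.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Durations copied verbatim from the three "Pulled image ... in <d>" entries.
	samples := []string{"585.1636ms", "591.186898ms", "584.628706ms"}
	var total time.Duration
	for _, s := range samples {
		d, err := time.ParseDuration(s)
		if err != nil {
			panic(err)
		}
		total += d
	}
	fmt.Println("mean pause-image pull time:", total/time.Duration(len(samples)))
}
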
May 17 00:20:22.353812 kubelet[2128]: E0517 00:20:22.352043 2128 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" May 17 00:20:22.366959 systemd[1]: Started cri-containerd-4093dfc964c7d9592ddb1f58ae04bb55187d177f9c10ed5d60da5b8fc592c5df.scope - libcontainer container 4093dfc964c7d9592ddb1f58ae04bb55187d177f9c10ed5d60da5b8fc592c5df. May 17 00:20:22.372896 systemd[1]: Started cri-containerd-ec864cd8670fc20d89e1804848871c79a9553b1c56343ae3f670b312bfdae76a.scope - libcontainer container ec864cd8670fc20d89e1804848871c79a9553b1c56343ae3f670b312bfdae76a. May 17 00:20:22.420559 containerd[1460]: time="2025-05-17T00:20:22.420495100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3b5f937c8c7173914fe1bae834bba338,Namespace:kube-system,Attempt:0,} returns sandbox id \"630ff53ba41a6d50620b2eb6a1cf94e9e6dc2f7dd413c287558fff41dfba744d\"" May 17 00:20:22.422991 kubelet[2128]: E0517 00:20:22.422827 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:22.426734 containerd[1460]: time="2025-05-17T00:20:22.426693206Z" level=info msg="CreateContainer within sandbox \"630ff53ba41a6d50620b2eb6a1cf94e9e6dc2f7dd413c287558fff41dfba744d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:20:22.432374 containerd[1460]: time="2025-05-17T00:20:22.432230933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec864cd8670fc20d89e1804848871c79a9553b1c56343ae3f670b312bfdae76a\"" May 17 00:20:22.433620 kubelet[2128]: E0517 00:20:22.433218 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:22.434805 containerd[1460]: time="2025-05-17T00:20:22.434752343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,} returns sandbox id \"4093dfc964c7d9592ddb1f58ae04bb55187d177f9c10ed5d60da5b8fc592c5df\"" May 17 00:20:22.435963 kubelet[2128]: E0517 00:20:22.435935 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:22.437338 containerd[1460]: time="2025-05-17T00:20:22.437312124Z" level=info msg="CreateContainer within sandbox \"ec864cd8670fc20d89e1804848871c79a9553b1c56343ae3f670b312bfdae76a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:20:22.437724 containerd[1460]: time="2025-05-17T00:20:22.437690013Z" level=info msg="CreateContainer within sandbox \"4093dfc964c7d9592ddb1f58ae04bb55187d177f9c10ed5d60da5b8fc592c5df\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:20:23.210441 kubelet[2128]: W0517 00:20:23.210373 2128 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused May 17 00:20:23.210671 kubelet[2128]: E0517 00:20:23.210450 2128 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" May 17 00:20:23.289504 kubelet[2128]: E0517 00:20:23.289444 2128 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="3.2s" May 17 00:20:23.441490 containerd[1460]: time="2025-05-17T00:20:23.441442696Z" level=info msg="CreateContainer within sandbox \"4093dfc964c7d9592ddb1f58ae04bb55187d177f9c10ed5d60da5b8fc592c5df\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"55f3301977486507030fed639d0d6f91b9cb4e05ba9283713e0aeacd998c12c8\"" May 17 00:20:23.442218 containerd[1460]: time="2025-05-17T00:20:23.442194997Z" level=info msg="StartContainer for \"55f3301977486507030fed639d0d6f91b9cb4e05ba9283713e0aeacd998c12c8\"" May 17 00:20:23.442843 containerd[1460]: time="2025-05-17T00:20:23.442811524Z" level=info msg="CreateContainer within sandbox \"ec864cd8670fc20d89e1804848871c79a9553b1c56343ae3f670b312bfdae76a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"18a30fb0e167252846d46adc7b1bc332c03b24ca4f2d52658b239523f4da5856\"" May 17 00:20:23.443188 containerd[1460]: time="2025-05-17T00:20:23.443165608Z" level=info msg="StartContainer for \"18a30fb0e167252846d46adc7b1bc332c03b24ca4f2d52658b239523f4da5856\"" May 17 00:20:23.446139 containerd[1460]: time="2025-05-17T00:20:23.446099190Z" level=info msg="CreateContainer within sandbox \"630ff53ba41a6d50620b2eb6a1cf94e9e6dc2f7dd413c287558fff41dfba744d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"abbbf9af247858a21e7e63a6813d1f65ad0d5c879a9f206054b90b6612bd6f90\"" May 17 00:20:23.446588 containerd[1460]: time="2025-05-17T00:20:23.446564042Z" level=info msg="StartContainer for \"abbbf9af247858a21e7e63a6813d1f65ad0d5c879a9f206054b90b6612bd6f90\"" May 17 00:20:23.476940 systemd[1]: Started cri-containerd-18a30fb0e167252846d46adc7b1bc332c03b24ca4f2d52658b239523f4da5856.scope - libcontainer container 18a30fb0e167252846d46adc7b1bc332c03b24ca4f2d52658b239523f4da5856. May 17 00:20:23.485966 systemd[1]: Started cri-containerd-55f3301977486507030fed639d0d6f91b9cb4e05ba9283713e0aeacd998c12c8.scope - libcontainer container 55f3301977486507030fed639d0d6f91b9cb4e05ba9283713e0aeacd998c12c8. May 17 00:20:23.487897 systemd[1]: Started cri-containerd-abbbf9af247858a21e7e63a6813d1f65ad0d5c879a9f206054b90b6612bd6f90.scope - libcontainer container abbbf9af247858a21e7e63a6813d1f65ad0d5c879a9f206054b90b6612bd6f90. 
May 17 00:20:23.600199 kubelet[2128]: W0517 00:20:23.600126 2128 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused May 17 00:20:23.600323 kubelet[2128]: E0517 00:20:23.600207 2128 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.98:6443: connect: connection refused" logger="UnhandledError" May 17 00:20:23.657128 containerd[1460]: time="2025-05-17T00:20:23.657057027Z" level=info msg="StartContainer for \"18a30fb0e167252846d46adc7b1bc332c03b24ca4f2d52658b239523f4da5856\" returns successfully" May 17 00:20:23.657279 containerd[1460]: time="2025-05-17T00:20:23.657070723Z" level=info msg="StartContainer for \"abbbf9af247858a21e7e63a6813d1f65ad0d5c879a9f206054b90b6612bd6f90\" returns successfully" May 17 00:20:23.657279 containerd[1460]: time="2025-05-17T00:20:23.657080802Z" level=info msg="StartContainer for \"55f3301977486507030fed639d0d6f91b9cb4e05ba9283713e0aeacd998c12c8\" returns successfully" May 17 00:20:23.876595 kubelet[2128]: I0517 00:20:23.876126 2128 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:20:24.365879 kubelet[2128]: E0517 00:20:24.365648 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:24.371640 kubelet[2128]: E0517 00:20:24.371604 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:24.376026 kubelet[2128]: E0517 00:20:24.376001 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:24.799837 kubelet[2128]: I0517 00:20:24.799657 2128 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 17 00:20:24.799837 kubelet[2128]: E0517 00:20:24.799704 2128 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 17 00:20:24.802592 kubelet[2128]: E0517 00:20:24.802484 2128 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.184028881074c091 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-17 00:20:20.276502673 +0000 UTC m=+0.617818912,LastTimestamp:2025-05-17 00:20:20.276502673 +0000 UTC m=+0.617818912,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 17 00:20:24.927704 kubelet[2128]: E0517 00:20:24.925889 2128 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1840288811144b27 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] 
[] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-17 00:20:20.286958375 +0000 UTC m=+0.628274604,LastTimestamp:2025-05-17 00:20:20.286958375 +0000 UTC m=+0.628274604,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 17 00:20:25.279361 kubelet[2128]: I0517 00:20:25.279173 2128 apiserver.go:52] "Watching apiserver" May 17 00:20:25.285472 kubelet[2128]: I0517 00:20:25.285438 2128 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:20:25.377465 kubelet[2128]: E0517 00:20:25.377429 2128 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 17 00:20:25.377465 kubelet[2128]: E0517 00:20:25.377441 2128 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 17 00:20:25.377949 kubelet[2128]: E0517 00:20:25.377516 2128 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 17 00:20:25.377949 kubelet[2128]: E0517 00:20:25.377588 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:25.377949 kubelet[2128]: E0517 00:20:25.377637 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:25.377949 kubelet[2128]: E0517 00:20:25.377638 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:26.402844 kubelet[2128]: E0517 00:20:26.402800 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:27.374308 kubelet[2128]: E0517 00:20:27.374275 2128 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:27.593640 systemd[1]: Reloading requested from client PID 2407 ('systemctl') (unit session-7.scope)... May 17 00:20:27.593655 systemd[1]: Reloading... May 17 00:20:27.677815 zram_generator::config[2446]: No configuration found. May 17 00:20:27.796974 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:20:27.888666 systemd[1]: Reloading finished in 294 ms. 
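
After the reload, systemd restarts the kubelet, so the entries below come from a fresh process (PID 2491 rather than 2128). When reading interleaved output like this, the kubelet's klog prefix — severity letter, MMDD date, time, PID, source file:line — parses with a short regular expression; a sketch, with the format inferred from the lines themselves:

package main

import (
	"fmt"
	"regexp"
)

// Matches klog prefixes such as:
//   I0517 00:20:28.205324 2491 server.go:1274] "Started kubelet"
var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) (\S+:\d+)\] (.+)$`)

func main() {
	line := `I0517 00:20:28.205324 2491 server.go:1274] "Started kubelet"`
	if m := klogRe.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%s\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
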
May 17 00:20:27.932804 kubelet[2128]: I0517 00:20:27.932696 2128 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:20:27.932777 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:20:27.950260 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:20:27.950571 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:20:27.950635 systemd[1]: kubelet.service: Consumed 1.321s CPU time, 134.3M memory peak, 0B memory swap peak. May 17 00:20:27.959318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:20:28.137585 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:20:28.142425 (kubelet)[2491]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:20:28.183220 kubelet[2491]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:20:28.183220 kubelet[2491]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:20:28.183220 kubelet[2491]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:20:28.183604 kubelet[2491]: I0517 00:20:28.183302 2491 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:20:28.190321 kubelet[2491]: I0517 00:20:28.190287 2491 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:20:28.190321 kubelet[2491]: I0517 00:20:28.190310 2491 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:20:28.190560 kubelet[2491]: I0517 00:20:28.190537 2491 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:20:28.191752 kubelet[2491]: I0517 00:20:28.191725 2491 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 00:20:28.193538 kubelet[2491]: I0517 00:20:28.193508 2491 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:20:28.196975 kubelet[2491]: E0517 00:20:28.196938 2491 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:20:28.196975 kubelet[2491]: I0517 00:20:28.196969 2491 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:20:28.202942 kubelet[2491]: I0517 00:20:28.202912 2491 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:20:28.203041 kubelet[2491]: I0517 00:20:28.203027 2491 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:20:28.203198 kubelet[2491]: I0517 00:20:28.203169 2491 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:20:28.203391 kubelet[2491]: I0517 00:20:28.203194 2491 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:20:28.203477 kubelet[2491]: I0517 00:20:28.203402 2491 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:20:28.203477 kubelet[2491]: I0517 00:20:28.203410 2491 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:20:28.203477 kubelet[2491]: I0517 00:20:28.203435 2491 state_mem.go:36] "Initialized new in-memory state store" May 17 00:20:28.203541 kubelet[2491]: I0517 00:20:28.203530 2491 kubelet.go:408] "Attempting to sync node with API server" May 17 00:20:28.203541 kubelet[2491]: I0517 00:20:28.203540 2491 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:20:28.203584 kubelet[2491]: I0517 00:20:28.203570 2491 kubelet.go:314] "Adding apiserver pod source" May 17 00:20:28.203584 kubelet[2491]: I0517 00:20:28.203579 2491 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:20:28.204236 kubelet[2491]: I0517 00:20:28.204211 2491 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:20:28.204892 kubelet[2491]: I0517 00:20:28.204738 2491 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:20:28.205334 kubelet[2491]: I0517 00:20:28.205324 2491 server.go:1274] "Started kubelet" May 17 00:20:28.205884 kubelet[2491]: I0517 00:20:28.205859 2491 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:20:28.205986 kubelet[2491]: I0517 
00:20:28.205965 2491 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:20:28.206330 kubelet[2491]: I0517 00:20:28.206316 2491 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:20:28.206917 kubelet[2491]: I0517 00:20:28.206899 2491 server.go:449] "Adding debug handlers to kubelet server" May 17 00:20:28.210031 kubelet[2491]: I0517 00:20:28.209997 2491 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:20:28.210822 kubelet[2491]: I0517 00:20:28.210393 2491 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:20:28.212183 kubelet[2491]: I0517 00:20:28.211672 2491 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:20:28.212183 kubelet[2491]: I0517 00:20:28.211977 2491 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:20:28.212348 kubelet[2491]: I0517 00:20:28.212326 2491 reconciler.go:26] "Reconciler: start to sync state" May 17 00:20:28.217861 kubelet[2491]: I0517 00:20:28.217832 2491 factory.go:221] Registration of the systemd container factory successfully May 17 00:20:28.217958 kubelet[2491]: I0517 00:20:28.217941 2491 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:20:28.218845 kubelet[2491]: E0517 00:20:28.218044 2491 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 17 00:20:28.222840 kubelet[2491]: E0517 00:20:28.222755 2491 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:20:28.223575 kubelet[2491]: I0517 00:20:28.223554 2491 factory.go:221] Registration of the containerd container factory successfully May 17 00:20:28.231684 kubelet[2491]: I0517 00:20:28.231621 2491 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:20:28.232975 kubelet[2491]: I0517 00:20:28.232954 2491 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:20:28.233020 kubelet[2491]: I0517 00:20:28.232985 2491 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:20:28.233020 kubelet[2491]: I0517 00:20:28.233007 2491 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:20:28.233334 kubelet[2491]: E0517 00:20:28.233057 2491 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:20:28.256642 kubelet[2491]: I0517 00:20:28.256614 2491 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:20:28.256882 kubelet[2491]: I0517 00:20:28.256849 2491 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:20:28.256882 kubelet[2491]: I0517 00:20:28.256876 2491 state_mem.go:36] "Initialized new in-memory state store" May 17 00:20:28.257056 kubelet[2491]: I0517 00:20:28.257039 2491 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:20:28.257083 kubelet[2491]: I0517 00:20:28.257052 2491 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:20:28.257083 kubelet[2491]: I0517 00:20:28.257070 2491 policy_none.go:49] "None policy: Start" May 17 00:20:28.257619 kubelet[2491]: I0517 00:20:28.257590 2491 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:20:28.257662 kubelet[2491]: I0517 00:20:28.257622 2491 state_mem.go:35] "Initializing new in-memory state store" May 17 00:20:28.257844 kubelet[2491]: I0517 00:20:28.257823 2491 state_mem.go:75] "Updated machine memory state" May 17 00:20:28.262384 kubelet[2491]: I0517 00:20:28.262347 2491 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:20:28.262705 kubelet[2491]: I0517 00:20:28.262545 2491 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:20:28.262705 kubelet[2491]: I0517 00:20:28.262559 2491 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:20:28.262705 kubelet[2491]: I0517 00:20:28.262706 2491 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:20:28.342143 kubelet[2491]: E0517 00:20:28.342095 2491 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 17 00:20:28.367623 kubelet[2491]: I0517 00:20:28.367597 2491 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 17 00:20:28.456328 kubelet[2491]: I0517 00:20:28.456187 2491 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 17 00:20:28.456328 kubelet[2491]: I0517 00:20:28.456280 2491 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 17 00:20:28.513967 kubelet[2491]: I0517 00:20:28.513891 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:20:28.513967 kubelet[2491]: I0517 00:20:28.513944 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" 
(UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:20:28.513967 kubelet[2491]: I0517 00:20:28.513965 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b5f937c8c7173914fe1bae834bba338-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3b5f937c8c7173914fe1bae834bba338\") " pod="kube-system/kube-apiserver-localhost" May 17 00:20:28.513967 kubelet[2491]: I0517 00:20:28.513981 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b5f937c8c7173914fe1bae834bba338-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3b5f937c8c7173914fe1bae834bba338\") " pod="kube-system/kube-apiserver-localhost" May 17 00:20:28.514245 kubelet[2491]: I0517 00:20:28.514000 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:20:28.514245 kubelet[2491]: I0517 00:20:28.514099 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:20:28.514245 kubelet[2491]: I0517 00:20:28.514140 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 17 00:20:28.514245 kubelet[2491]: I0517 00:20:28.514158 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 17 00:20:28.514245 kubelet[2491]: I0517 00:20:28.514176 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b5f937c8c7173914fe1bae834bba338-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3b5f937c8c7173914fe1bae834bba338\") " pod="kube-system/kube-apiserver-localhost" May 17 00:20:28.641909 kubelet[2491]: E0517 00:20:28.641855 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:28.641909 kubelet[2491]: E0517 00:20:28.641880 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:28.643196 kubelet[2491]: E0517 00:20:28.643101 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:29.204075 kubelet[2491]: I0517 00:20:29.204031 2491 apiserver.go:52] "Watching apiserver" May 17 00:20:29.213144 kubelet[2491]: I0517 00:20:29.213091 2491 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:20:29.242915 kubelet[2491]: E0517 00:20:29.242414 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:29.242915 kubelet[2491]: E0517 00:20:29.242572 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:29.248760 kubelet[2491]: E0517 00:20:29.248734 2491 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 17 00:20:29.249160 kubelet[2491]: E0517 00:20:29.249090 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:29.270828 kubelet[2491]: I0517 00:20:29.269718 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.269699299 podStartE2EDuration="1.269699299s" podCreationTimestamp="2025-05-17 00:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:20:29.262790941 +0000 UTC m=+1.115439410" watchObservedRunningTime="2025-05-17 00:20:29.269699299 +0000 UTC m=+1.122347768" May 17 00:20:29.276445 kubelet[2491]: I0517 00:20:29.276121 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.276099163 podStartE2EDuration="3.276099163s" podCreationTimestamp="2025-05-17 00:20:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:20:29.2698104 +0000 UTC m=+1.122458869" watchObservedRunningTime="2025-05-17 00:20:29.276099163 +0000 UTC m=+1.128747632" May 17 00:20:29.276603 kubelet[2491]: I0517 00:20:29.276573 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.276566829 podStartE2EDuration="1.276566829s" podCreationTimestamp="2025-05-17 00:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:20:29.276494151 +0000 UTC m=+1.129142620" watchObservedRunningTime="2025-05-17 00:20:29.276566829 +0000 UTC m=+1.129215298" May 17 00:20:30.242836 kubelet[2491]: E0517 00:20:30.242752 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:30.243274 kubelet[2491]: E0517 00:20:30.242917 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:33.578672 kubelet[2491]: I0517 00:20:33.578635 2491 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" 
CIDR="192.168.0.0/24" May 17 00:20:33.579121 containerd[1460]: time="2025-05-17T00:20:33.579041079Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:20:33.579354 kubelet[2491]: I0517 00:20:33.579278 2491 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:20:34.278393 systemd[1]: Created slice kubepods-besteffort-podf103701d_61ab_4ed5_8cbe_3a870a84a292.slice - libcontainer container kubepods-besteffort-podf103701d_61ab_4ed5_8cbe_3a870a84a292.slice. May 17 00:20:34.449055 kubelet[2491]: I0517 00:20:34.448990 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f103701d-61ab-4ed5-8cbe-3a870a84a292-kube-proxy\") pod \"kube-proxy-tjdvv\" (UID: \"f103701d-61ab-4ed5-8cbe-3a870a84a292\") " pod="kube-system/kube-proxy-tjdvv" May 17 00:20:34.449055 kubelet[2491]: I0517 00:20:34.449051 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f103701d-61ab-4ed5-8cbe-3a870a84a292-xtables-lock\") pod \"kube-proxy-tjdvv\" (UID: \"f103701d-61ab-4ed5-8cbe-3a870a84a292\") " pod="kube-system/kube-proxy-tjdvv" May 17 00:20:34.449259 kubelet[2491]: I0517 00:20:34.449092 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f103701d-61ab-4ed5-8cbe-3a870a84a292-lib-modules\") pod \"kube-proxy-tjdvv\" (UID: \"f103701d-61ab-4ed5-8cbe-3a870a84a292\") " pod="kube-system/kube-proxy-tjdvv" May 17 00:20:34.449259 kubelet[2491]: I0517 00:20:34.449126 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lpns\" (UniqueName: \"kubernetes.io/projected/f103701d-61ab-4ed5-8cbe-3a870a84a292-kube-api-access-4lpns\") pod \"kube-proxy-tjdvv\" (UID: \"f103701d-61ab-4ed5-8cbe-3a870a84a292\") " pod="kube-system/kube-proxy-tjdvv" May 17 00:20:34.591426 kubelet[2491]: E0517 00:20:34.591370 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:34.592111 containerd[1460]: time="2025-05-17T00:20:34.592074935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tjdvv,Uid:f103701d-61ab-4ed5-8cbe-3a870a84a292,Namespace:kube-system,Attempt:0,}" May 17 00:20:34.623175 containerd[1460]: time="2025-05-17T00:20:34.623026238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:20:34.623175 containerd[1460]: time="2025-05-17T00:20:34.623118422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:20:34.623175 containerd[1460]: time="2025-05-17T00:20:34.623136196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:34.624116 containerd[1460]: time="2025-05-17T00:20:34.623243668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:34.665931 systemd[1]: Started cri-containerd-7549b04cec38c747031966fb2d255f8f72469be8566db8a03b93d68d8b5316f2.scope - libcontainer container 7549b04cec38c747031966fb2d255f8f72469be8566db8a03b93d68d8b5316f2. May 17 00:20:34.689680 containerd[1460]: time="2025-05-17T00:20:34.689629759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tjdvv,Uid:f103701d-61ab-4ed5-8cbe-3a870a84a292,Namespace:kube-system,Attempt:0,} returns sandbox id \"7549b04cec38c747031966fb2d255f8f72469be8566db8a03b93d68d8b5316f2\"" May 17 00:20:34.690506 kubelet[2491]: E0517 00:20:34.690475 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:34.692368 containerd[1460]: time="2025-05-17T00:20:34.692340267Z" level=info msg="CreateContainer within sandbox \"7549b04cec38c747031966fb2d255f8f72469be8566db8a03b93d68d8b5316f2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:20:34.708972 containerd[1460]: time="2025-05-17T00:20:34.708932377Z" level=info msg="CreateContainer within sandbox \"7549b04cec38c747031966fb2d255f8f72469be8566db8a03b93d68d8b5316f2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"67d104f8998425c02333f8c2b66a6411f134283c31652b5de1db96931eda7e89\"" May 17 00:20:34.709381 containerd[1460]: time="2025-05-17T00:20:34.709355676Z" level=info msg="StartContainer for \"67d104f8998425c02333f8c2b66a6411f134283c31652b5de1db96931eda7e89\"" May 17 00:20:34.749017 systemd[1]: Started cri-containerd-67d104f8998425c02333f8c2b66a6411f134283c31652b5de1db96931eda7e89.scope - libcontainer container 67d104f8998425c02333f8c2b66a6411f134283c31652b5de1db96931eda7e89. May 17 00:20:34.752049 kubelet[2491]: I0517 00:20:34.752012 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq6k5\" (UniqueName: \"kubernetes.io/projected/b4ec5ad4-5d7e-452f-82c4-2a4e1bbc104e-kube-api-access-jq6k5\") pod \"tigera-operator-7c5755cdcb-kzrvs\" (UID: \"b4ec5ad4-5d7e-452f-82c4-2a4e1bbc104e\") " pod="tigera-operator/tigera-operator-7c5755cdcb-kzrvs" May 17 00:20:34.752125 kubelet[2491]: I0517 00:20:34.752046 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b4ec5ad4-5d7e-452f-82c4-2a4e1bbc104e-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-kzrvs\" (UID: \"b4ec5ad4-5d7e-452f-82c4-2a4e1bbc104e\") " pod="tigera-operator/tigera-operator-7c5755cdcb-kzrvs" May 17 00:20:34.754323 systemd[1]: Created slice kubepods-besteffort-podb4ec5ad4_5d7e_452f_82c4_2a4e1bbc104e.slice - libcontainer container kubepods-besteffort-podb4ec5ad4_5d7e_452f_82c4_2a4e1bbc104e.slice. May 17 00:20:34.779107 containerd[1460]: time="2025-05-17T00:20:34.779059942Z" level=info msg="StartContainer for \"67d104f8998425c02333f8c2b66a6411f134283c31652b5de1db96931eda7e89\" returns successfully" May 17 00:20:35.056916 containerd[1460]: time="2025-05-17T00:20:35.056816010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-kzrvs,Uid:b4ec5ad4-5d7e-452f-82c4-2a4e1bbc104e,Namespace:tigera-operator,Attempt:0,}" May 17 00:20:35.080047 containerd[1460]: time="2025-05-17T00:20:35.079937089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:20:35.080169 containerd[1460]: time="2025-05-17T00:20:35.080070722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:20:35.080169 containerd[1460]: time="2025-05-17T00:20:35.080126738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:35.080295 containerd[1460]: time="2025-05-17T00:20:35.080253226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:35.098940 systemd[1]: Started cri-containerd-6cc454f6846a08d540fa9eaefc15c9e2a53315b67d63c2eec79a1056ae52f862.scope - libcontainer container 6cc454f6846a08d540fa9eaefc15c9e2a53315b67d63c2eec79a1056ae52f862. May 17 00:20:35.136397 containerd[1460]: time="2025-05-17T00:20:35.136354292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-kzrvs,Uid:b4ec5ad4-5d7e-452f-82c4-2a4e1bbc104e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6cc454f6846a08d540fa9eaefc15c9e2a53315b67d63c2eec79a1056ae52f862\"" May 17 00:20:35.138705 containerd[1460]: time="2025-05-17T00:20:35.138646589Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 17 00:20:35.251196 kubelet[2491]: E0517 00:20:35.251164 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:35.562156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2212672459.mount: Deactivated successfully. May 17 00:20:36.247363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3357023798.mount: Deactivated successfully. 
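The PullImage "quay.io/tigera/operator:v1.38.0" request above is the kubelet's CRI call into this containerd. Roughly the same pull can be reproduced against the node's socket with containerd's Go client; this is a minimal sketch, assuming the default socket path and the k8s.io namespace that the CRI plugin uses, not a reproduction of the kubelet's actual code path:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd instance the kubelet talks to (path assumed).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the operator image seen in the log above.
	image, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.0", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", image.Name())
}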
May 17 00:20:37.824742 containerd[1460]: time="2025-05-17T00:20:37.824670457Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:37.825515 containerd[1460]: time="2025-05-17T00:20:37.825454635Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=25055451" May 17 00:20:37.826639 containerd[1460]: time="2025-05-17T00:20:37.826614834Z" level=info msg="ImageCreate event name:\"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:37.828582 containerd[1460]: time="2025-05-17T00:20:37.828544403Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:37.829229 containerd[1460]: time="2025-05-17T00:20:37.829192706Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"25051446\" in 2.690513124s" May 17 00:20:37.829229 containerd[1460]: time="2025-05-17T00:20:37.829221971Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:5e43c1322619406528ff596056dfeb70cb8d20c5c00439feb752a7725302e033\"" May 17 00:20:37.831093 containerd[1460]: time="2025-05-17T00:20:37.831059296Z" level=info msg="CreateContainer within sandbox \"6cc454f6846a08d540fa9eaefc15c9e2a53315b67d63c2eec79a1056ae52f862\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 17 00:20:37.843013 containerd[1460]: time="2025-05-17T00:20:37.842970501Z" level=info msg="CreateContainer within sandbox \"6cc454f6846a08d540fa9eaefc15c9e2a53315b67d63c2eec79a1056ae52f862\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5be062f0a4ea05db897076eb263b0cbf95ab7c955d5fa1007db589295c338829\"" May 17 00:20:37.843410 containerd[1460]: time="2025-05-17T00:20:37.843389721Z" level=info msg="StartContainer for \"5be062f0a4ea05db897076eb263b0cbf95ab7c955d5fa1007db589295c338829\"" May 17 00:20:37.866136 kubelet[2491]: E0517 00:20:37.866107 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:37.870013 systemd[1]: Started cri-containerd-5be062f0a4ea05db897076eb263b0cbf95ab7c955d5fa1007db589295c338829.scope - libcontainer container 5be062f0a4ea05db897076eb263b0cbf95ab7c955d5fa1007db589295c338829. 
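The recurring dns.go:153 "Nameserver limits exceeded" message means the node's resolv.conf lists more nameservers than the resolver limit of three (glibc's MAXNS), so the kubelet keeps only the first three; the log shows the survivors (1.1.1.1 1.0.0.1 8.8.8.8) but not the entries that were dropped. A minimal sketch of that truncation, with a made-up fourth nameserver (8.8.4.4) standing in for whatever was actually omitted:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// Example resolv.conf; the fourth entry is a hypothetical stand-in.
	resolvConf := `nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
`
	const maxNameservers = 3 // glibc MAXNS; the kubelet warns past this

	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		applied := servers[:maxNameservers]
		fmt.Printf("Nameserver limits exceeded; applied nameserver line: %s\n",
			strings.Join(applied, " "))
	}
}

Running it prints the same applied line the kubelet logs, which is why the message repeats on every pod whose sandbox inherits the node's resolv.conf.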
May 17 00:20:37.878531 kubelet[2491]: I0517 00:20:37.878477 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tjdvv" podStartSLOduration=3.878458481 podStartE2EDuration="3.878458481s" podCreationTimestamp="2025-05-17 00:20:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:20:35.259497266 +0000 UTC m=+7.112145735" watchObservedRunningTime="2025-05-17 00:20:37.878458481 +0000 UTC m=+9.731106950" May 17 00:20:37.900663 containerd[1460]: time="2025-05-17T00:20:37.900620970Z" level=info msg="StartContainer for \"5be062f0a4ea05db897076eb263b0cbf95ab7c955d5fa1007db589295c338829\" returns successfully" May 17 00:20:38.258943 kubelet[2491]: E0517 00:20:38.258895 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:38.266460 kubelet[2491]: I0517 00:20:38.266271 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-kzrvs" podStartSLOduration=1.574502589 podStartE2EDuration="4.266255962s" podCreationTimestamp="2025-05-17 00:20:34 +0000 UTC" firstStartedPulling="2025-05-17 00:20:35.138193835 +0000 UTC m=+6.990842304" lastFinishedPulling="2025-05-17 00:20:37.829947208 +0000 UTC m=+9.682595677" observedRunningTime="2025-05-17 00:20:38.266240733 +0000 UTC m=+10.118889202" watchObservedRunningTime="2025-05-17 00:20:38.266255962 +0000 UTC m=+10.118904432" May 17 00:20:38.611399 kubelet[2491]: E0517 00:20:38.611369 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:38.801536 kubelet[2491]: E0517 00:20:38.801489 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:39.261230 kubelet[2491]: E0517 00:20:39.260757 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:40.262751 kubelet[2491]: E0517 00:20:40.262407 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:40.879705 update_engine[1450]: I20250517 00:20:40.879618 1450 update_attempter.cc:509] Updating boot flags... May 17 00:20:40.907822 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2875) May 17 00:20:40.941814 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2879) May 17 00:20:40.998818 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2879) May 17 00:20:43.045840 sudo[1639]: pam_unix(sudo:session): session closed for user root May 17 00:20:43.047642 sshd[1636]: pam_unix(sshd:session): session closed for user core May 17 00:20:43.051711 systemd[1]: sshd@6-10.0.0.98:22-10.0.0.1:55024.service: Deactivated successfully. May 17 00:20:43.054029 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:20:43.054277 systemd[1]: session-7.scope: Consumed 4.500s CPU time, 153.2M memory peak, 0B memory swap peak. 
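The pod_startup_latency_tracker entries encode a simple identity: podStartSLOduration is podStartE2EDuration minus the image-pull window (lastFinishedPulling − firstStartedPulling), which is why pods that pulled nothing report identical values for both. For tigera-operator-7c5755cdcb-kzrvs above: 4.266255962s − 2.691753373s = 1.574502589s, exactly the logged figure. The same check in Go, using the timestamps from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps copied from the pod_startup_latency_tracker entry above.
	firstPull := parse("2025-05-17 00:20:35.138193835 +0000 UTC")
	lastPull := parse("2025-05-17 00:20:37.829947208 +0000 UTC")
	e2e := 4266255962 * time.Nanosecond // podStartE2EDuration from the log

	slo := e2e - lastPull.Sub(firstPull) // SLO duration excludes image pulls
	fmt.Println(slo)                     // 1.574502589s
}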
May 17 00:20:43.055672 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit. May 17 00:20:43.058456 systemd-logind[1446]: Removed session 7. May 17 00:20:46.207628 systemd[1]: Created slice kubepods-besteffort-pode2e07ea4_36ee_4f71_a369_e23f2371904c.slice - libcontainer container kubepods-besteffort-pode2e07ea4_36ee_4f71_a369_e23f2371904c.slice. May 17 00:20:46.264506 systemd[1]: Created slice kubepods-besteffort-pod1a5d8c8f_2a7c_40e2_9efa_ab0bfda121c9.slice - libcontainer container kubepods-besteffort-pod1a5d8c8f_2a7c_40e2_9efa_ab0bfda121c9.slice. May 17 00:20:46.330538 kubelet[2491]: I0517 00:20:46.330485 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2e07ea4-36ee-4f71-a369-e23f2371904c-tigera-ca-bundle\") pod \"calico-typha-cd9659f5d-vpq2k\" (UID: \"e2e07ea4-36ee-4f71-a369-e23f2371904c\") " pod="calico-system/calico-typha-cd9659f5d-vpq2k" May 17 00:20:46.330538 kubelet[2491]: I0517 00:20:46.330527 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e2e07ea4-36ee-4f71-a369-e23f2371904c-typha-certs\") pod \"calico-typha-cd9659f5d-vpq2k\" (UID: \"e2e07ea4-36ee-4f71-a369-e23f2371904c\") " pod="calico-system/calico-typha-cd9659f5d-vpq2k" May 17 00:20:46.331021 kubelet[2491]: I0517 00:20:46.330566 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9m49\" (UniqueName: \"kubernetes.io/projected/e2e07ea4-36ee-4f71-a369-e23f2371904c-kube-api-access-c9m49\") pod \"calico-typha-cd9659f5d-vpq2k\" (UID: \"e2e07ea4-36ee-4f71-a369-e23f2371904c\") " pod="calico-system/calico-typha-cd9659f5d-vpq2k" May 17 00:20:46.431689 kubelet[2491]: I0517 00:20:46.431621 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9-cni-bin-dir\") pod \"calico-node-lmp7t\" (UID: \"1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9\") " pod="calico-system/calico-node-lmp7t" May 17 00:20:46.431689 kubelet[2491]: I0517 00:20:46.431668 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x54kh\" (UniqueName: \"kubernetes.io/projected/1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9-kube-api-access-x54kh\") pod \"calico-node-lmp7t\" (UID: \"1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9\") " pod="calico-system/calico-node-lmp7t" May 17 00:20:46.431931 kubelet[2491]: I0517 00:20:46.431712 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9-var-run-calico\") pod \"calico-node-lmp7t\" (UID: \"1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9\") " pod="calico-system/calico-node-lmp7t" May 17 00:20:46.431931 kubelet[2491]: I0517 00:20:46.431742 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9-tigera-ca-bundle\") pod \"calico-node-lmp7t\" (UID: \"1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9\") " pod="calico-system/calico-node-lmp7t" May 17 00:20:46.431931 kubelet[2491]: I0517 00:20:46.431757 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9-xtables-lock\") pod \"calico-node-lmp7t\" (UID: \"1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9\") " pod="calico-system/calico-node-lmp7t" May 17 00:20:46.431931 kubelet[2491]: I0517 00:20:46.431804 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9-node-certs\") pod \"calico-node-lmp7t\" (UID: \"1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9\") " pod="calico-system/calico-node-lmp7t" May 17 00:20:46.431931 kubelet[2491]: I0517 00:20:46.431821 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9-cni-net-dir\") pod \"calico-node-lmp7t\" (UID: \"1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9\") " pod="calico-system/calico-node-lmp7t" May 17 00:20:46.432099 kubelet[2491]: I0517 00:20:46.431840 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9-flexvol-driver-host\") pod \"calico-node-lmp7t\" (UID: \"1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9\") " pod="calico-system/calico-node-lmp7t" May 17 00:20:46.432099 kubelet[2491]: I0517 00:20:46.431854 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9-lib-modules\") pod \"calico-node-lmp7t\" (UID: \"1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9\") " pod="calico-system/calico-node-lmp7t" May 17 00:20:46.432099 kubelet[2491]: I0517 00:20:46.431868 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9-var-lib-calico\") pod \"calico-node-lmp7t\" (UID: \"1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9\") " pod="calico-system/calico-node-lmp7t" May 17 00:20:46.432099 kubelet[2491]: I0517 00:20:46.431885 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9-cni-log-dir\") pod \"calico-node-lmp7t\" (UID: \"1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9\") " pod="calico-system/calico-node-lmp7t" May 17 00:20:46.432099 kubelet[2491]: I0517 00:20:46.431900 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9-policysync\") pod \"calico-node-lmp7t\" (UID: \"1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9\") " pod="calico-system/calico-node-lmp7t" May 17 00:20:46.456629 kubelet[2491]: E0517 00:20:46.456578 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4zkcv" podUID="7495b3bd-a626-4600-9f8e-cc5963e6df5a" May 17 00:20:46.519248 kubelet[2491]: E0517 00:20:46.519122 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:46.519977 containerd[1460]: 
time="2025-05-17T00:20:46.519636672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cd9659f5d-vpq2k,Uid:e2e07ea4-36ee-4f71-a369-e23f2371904c,Namespace:calico-system,Attempt:0,}" May 17 00:20:46.532259 kubelet[2491]: I0517 00:20:46.532222 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7495b3bd-a626-4600-9f8e-cc5963e6df5a-socket-dir\") pod \"csi-node-driver-4zkcv\" (UID: \"7495b3bd-a626-4600-9f8e-cc5963e6df5a\") " pod="calico-system/csi-node-driver-4zkcv" May 17 00:20:46.532259 kubelet[2491]: I0517 00:20:46.532257 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7495b3bd-a626-4600-9f8e-cc5963e6df5a-varrun\") pod \"csi-node-driver-4zkcv\" (UID: \"7495b3bd-a626-4600-9f8e-cc5963e6df5a\") " pod="calico-system/csi-node-driver-4zkcv" May 17 00:20:46.532347 kubelet[2491]: I0517 00:20:46.532330 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7495b3bd-a626-4600-9f8e-cc5963e6df5a-registration-dir\") pod \"csi-node-driver-4zkcv\" (UID: \"7495b3bd-a626-4600-9f8e-cc5963e6df5a\") " pod="calico-system/csi-node-driver-4zkcv" May 17 00:20:46.532375 kubelet[2491]: I0517 00:20:46.532346 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4svs\" (UniqueName: \"kubernetes.io/projected/7495b3bd-a626-4600-9f8e-cc5963e6df5a-kube-api-access-j4svs\") pod \"csi-node-driver-4zkcv\" (UID: \"7495b3bd-a626-4600-9f8e-cc5963e6df5a\") " pod="calico-system/csi-node-driver-4zkcv" May 17 00:20:46.532402 kubelet[2491]: I0517 00:20:46.532387 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7495b3bd-a626-4600-9f8e-cc5963e6df5a-kubelet-dir\") pod \"csi-node-driver-4zkcv\" (UID: \"7495b3bd-a626-4600-9f8e-cc5963e6df5a\") " pod="calico-system/csi-node-driver-4zkcv" May 17 00:20:46.538946 kubelet[2491]: E0517 00:20:46.538882 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.538946 kubelet[2491]: W0517 00:20:46.538901 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.538946 kubelet[2491]: E0517 00:20:46.538919 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:20:46.542936 kubelet[2491]: E0517 00:20:46.542923 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.543028 kubelet[2491]: W0517 00:20:46.542994 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.543028 kubelet[2491]: E0517 00:20:46.543009 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:20:46.554650 containerd[1460]: time="2025-05-17T00:20:46.554572783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:20:46.554650 containerd[1460]: time="2025-05-17T00:20:46.554621315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:20:46.554650 containerd[1460]: time="2025-05-17T00:20:46.554632025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:46.554827 containerd[1460]: time="2025-05-17T00:20:46.554718066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:46.568499 containerd[1460]: time="2025-05-17T00:20:46.568459310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lmp7t,Uid:1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9,Namespace:calico-system,Attempt:0,}" May 17 00:20:46.573941 systemd[1]: Started cri-containerd-b1090b34c454c1c1bc9f8151e1d885d1fde5963f2b9cfea0d346fcae308dbcdb.scope - libcontainer container b1090b34c454c1c1bc9f8151e1d885d1fde5963f2b9cfea0d346fcae308dbcdb. May 17 00:20:46.594395 containerd[1460]: time="2025-05-17T00:20:46.594261336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:20:46.594395 containerd[1460]: time="2025-05-17T00:20:46.594360903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:20:46.594395 containerd[1460]: time="2025-05-17T00:20:46.594379418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:46.594599 containerd[1460]: time="2025-05-17T00:20:46.594549398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:20:46.616035 systemd[1]: Started cri-containerd-1566baabc9ba4f33a4b138f6c68dd5aa2332c7d5ae303fe0cbb5a662334ec4f2.scope - libcontainer container 1566baabc9ba4f33a4b138f6c68dd5aa2332c7d5ae303fe0cbb5a662334ec4f2. 
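The driver-call.go / plugins.go burst above is the kubelet's FlexVolume prober walking /opt/libexec/kubernetes/kubelet-plugins/volume/exec/: for each driver directory it execs the driver binary with an init argument and parses the stdout as a JSON status object. Here the nodeagent~uds directory exists but its uds binary does not, so the exec fails, stdout is empty, and decoding an empty string is exactly what yields "unexpected end of JSON input". A sketch of that failure mode (the DriverStatus shape below is an illustrative subset, not the kubelet's exact type):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Illustrative subset of a FlexVolume driver response, e.g.
// {"status":"Success","capabilities":{"attach":false}}.
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// A missing driver binary makes Output return an error and no output at all.
	out, err := exec.Command(
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
		"init",
	).Output()
	fmt.Println("exec error:", err)

	// Unmarshalling the empty output reproduces the logged error.
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("unmarshal error:", err) // unexpected end of JSON input
	}
}

The prober re-runs on plugin-directory events, which is why the same three-line sequence repeats for as long as the broken driver directory is present.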
May 17 00:20:46.626513 containerd[1460]: time="2025-05-17T00:20:46.626138642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cd9659f5d-vpq2k,Uid:e2e07ea4-36ee-4f71-a369-e23f2371904c,Namespace:calico-system,Attempt:0,} returns sandbox id \"b1090b34c454c1c1bc9f8151e1d885d1fde5963f2b9cfea0d346fcae308dbcdb\"" May 17 00:20:46.627385 kubelet[2491]: E0517 00:20:46.627014 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:46.629258 containerd[1460]: time="2025-05-17T00:20:46.629227293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 00:20:46.635995 kubelet[2491]: E0517 00:20:46.635758 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.635995 kubelet[2491]: W0517 00:20:46.635796 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.635995 kubelet[2491]: E0517 00:20:46.635816 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:20:46.639213 kubelet[2491]: E0517 00:20:46.639104 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.639213 kubelet[2491]: W0517 00:20:46.639118 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.639213 kubelet[2491]: E0517 00:20:46.639149 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:20:46.639638 kubelet[2491]: E0517 00:20:46.639553 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.639638 kubelet[2491]: W0517 00:20:46.639564 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.639638 kubelet[2491]: E0517 00:20:46.639617 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:20:46.641883 kubelet[2491]: E0517 00:20:46.641790 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.641883 kubelet[2491]: W0517 00:20:46.641802 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.641883 kubelet[2491]: E0517 00:20:46.641856 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:20:46.642253 kubelet[2491]: E0517 00:20:46.642190 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.642253 kubelet[2491]: W0517 00:20:46.642200 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.642422 kubelet[2491]: E0517 00:20:46.642339 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:20:46.642834 kubelet[2491]: E0517 00:20:46.642822 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.644835 kubelet[2491]: W0517 00:20:46.644820 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.644942 kubelet[2491]: E0517 00:20:46.644891 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:20:46.645324 kubelet[2491]: E0517 00:20:46.645266 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.645324 kubelet[2491]: W0517 00:20:46.645276 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.645453 kubelet[2491]: E0517 00:20:46.645357 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:20:46.645720 kubelet[2491]: E0517 00:20:46.645644 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.645720 kubelet[2491]: W0517 00:20:46.645654 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.646010 kubelet[2491]: E0517 00:20:46.645817 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:20:46.646868 kubelet[2491]: E0517 00:20:46.646857 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.646927 kubelet[2491]: W0517 00:20:46.646916 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.647043 kubelet[2491]: E0517 00:20:46.647010 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:20:46.647865 kubelet[2491]: E0517 00:20:46.647664 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.647865 kubelet[2491]: W0517 00:20:46.647675 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.649986 kubelet[2491]: E0517 00:20:46.649898 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.649986 kubelet[2491]: W0517 00:20:46.649911 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.650164 kubelet[2491]: E0517 00:20:46.650152 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.650286 kubelet[2491]: W0517 00:20:46.650211 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.650645 kubelet[2491]: E0517 00:20:46.650565 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.650645 kubelet[2491]: W0517 00:20:46.650576 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.653408 kubelet[2491]: E0517 00:20:46.650857 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.653408 kubelet[2491]: W0517 00:20:46.650867 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.653408 kubelet[2491]: E0517 00:20:46.650878 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:20:46.653408 kubelet[2491]: E0517 00:20:46.651240 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.653408 kubelet[2491]: W0517 00:20:46.651265 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.653408 kubelet[2491]: E0517 00:20:46.651291 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:20:46.653408 kubelet[2491]: E0517 00:20:46.651330 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:20:46.653408 kubelet[2491]: E0517 00:20:46.651892 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:20:46.653408 kubelet[2491]: E0517 00:20:46.651941 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:20:46.653408 kubelet[2491]: E0517 00:20:46.651957 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:20:46.653651 kubelet[2491]: E0517 00:20:46.652908 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.653651 kubelet[2491]: W0517 00:20:46.652918 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.653651 kubelet[2491]: E0517 00:20:46.652931 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:20:46.658172 kubelet[2491]: E0517 00:20:46.657996 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.658172 kubelet[2491]: W0517 00:20:46.658018 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.658172 kubelet[2491]: E0517 00:20:46.658146 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:20:46.658792 kubelet[2491]: E0517 00:20:46.658395 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.658792 kubelet[2491]: W0517 00:20:46.658407 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.658792 kubelet[2491]: E0517 00:20:46.658506 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:20:46.658792 kubelet[2491]: E0517 00:20:46.658686 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.658792 kubelet[2491]: W0517 00:20:46.658697 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.658915 kubelet[2491]: E0517 00:20:46.658805 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:20:46.658915 kubelet[2491]: E0517 00:20:46.658903 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.658915 kubelet[2491]: W0517 00:20:46.658910 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.661034 kubelet[2491]: E0517 00:20:46.659000 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:20:46.661034 kubelet[2491]: E0517 00:20:46.659095 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.661034 kubelet[2491]: W0517 00:20:46.659102 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.661034 kubelet[2491]: E0517 00:20:46.659116 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:20:46.661034 kubelet[2491]: E0517 00:20:46.659354 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.661034 kubelet[2491]: W0517 00:20:46.659363 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.661034 kubelet[2491]: E0517 00:20:46.659382 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:20:46.661034 kubelet[2491]: E0517 00:20:46.659609 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.661034 kubelet[2491]: W0517 00:20:46.659617 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.661034 kubelet[2491]: E0517 00:20:46.659639 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:20:46.662786 kubelet[2491]: E0517 00:20:46.662297 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.662786 kubelet[2491]: W0517 00:20:46.662311 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.662786 kubelet[2491]: E0517 00:20:46.662324 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:20:46.662786 kubelet[2491]: E0517 00:20:46.662586 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.662786 kubelet[2491]: W0517 00:20:46.662595 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.662786 kubelet[2491]: E0517 00:20:46.662604 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:20:46.677968 kubelet[2491]: E0517 00:20:46.677852 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:46.677968 kubelet[2491]: W0517 00:20:46.677881 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:46.677968 kubelet[2491]: E0517 00:20:46.677903 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:20:46.684262 containerd[1460]: time="2025-05-17T00:20:46.684202387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lmp7t,Uid:1a5d8c8f-2a7c-40e2-9efa-ab0bfda121c9,Namespace:calico-system,Attempt:0,} returns sandbox id \"1566baabc9ba4f33a4b138f6c68dd5aa2332c7d5ae303fe0cbb5a662334ec4f2\"" May 17 00:20:48.233953 kubelet[2491]: E0517 00:20:48.233886 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4zkcv" podUID="7495b3bd-a626-4600-9f8e-cc5963e6df5a" May 17 00:20:48.717824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4085607458.mount: Deactivated successfully. 
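csi-node-driver-4zkcv stays stuck on "network is not ready ... cni plugin not initialized" because no CNI config exists yet: containerd said earlier that it is waiting for another component to drop the config, and that component is calico-node, which writes its conflist once running. A small watcher for that handoff; the /etc/cni/net.d directory and the *.conflist suffix are the conventional defaults, not values taken from this log:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

func main() {
	// containerd's CRI plugin loads the first config it finds here;
	// calico-node drops its conflist into this directory once it is up.
	const cniDir = "/etc/cni/net.d"

	for {
		confs, _ := filepath.Glob(filepath.Join(cniDir, "*.conflist"))
		if len(confs) > 0 {
			fmt.Println("CNI config present:", confs)
			return
		}
		fmt.Fprintln(os.Stderr, "no CNI config yet; node network stays NotReady")
		time.Sleep(2 * time.Second)
	}
}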
May 17 00:20:49.580241 containerd[1460]: time="2025-05-17T00:20:49.580187438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:49.583247 containerd[1460]: time="2025-05-17T00:20:49.583190225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=35158669" May 17 00:20:49.584728 containerd[1460]: time="2025-05-17T00:20:49.584700705Z" level=info msg="ImageCreate event name:\"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:49.589636 containerd[1460]: time="2025-05-17T00:20:49.589589858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:49.590088 containerd[1460]: time="2025-05-17T00:20:49.590045926Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"35158523\" in 2.960789457s" May 17 00:20:49.590088 containerd[1460]: time="2025-05-17T00:20:49.590083187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:71be0570e8645ac646675719e0da6ac33a05810991b31aecc303e7add70933be\"" May 17 00:20:49.590892 containerd[1460]: time="2025-05-17T00:20:49.590869004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 17 00:20:49.608998 containerd[1460]: time="2025-05-17T00:20:49.608944734Z" level=info msg="CreateContainer within sandbox \"b1090b34c454c1c1bc9f8151e1d885d1fde5963f2b9cfea0d346fcae308dbcdb\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 17 00:20:49.814507 containerd[1460]: time="2025-05-17T00:20:49.814449317Z" level=info msg="CreateContainer within sandbox \"b1090b34c454c1c1bc9f8151e1d885d1fde5963f2b9cfea0d346fcae308dbcdb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b7ca900df192060cfc058914189d746cf86eb0102fd2057070a38275b66c1c28\"" May 17 00:20:49.814867 containerd[1460]: time="2025-05-17T00:20:49.814839662Z" level=info msg="StartContainer for \"b7ca900df192060cfc058914189d746cf86eb0102fd2057070a38275b66c1c28\"" May 17 00:20:49.849894 systemd[1]: Started cri-containerd-b7ca900df192060cfc058914189d746cf86eb0102fd2057070a38275b66c1c28.scope - libcontainer container b7ca900df192060cfc058914189d746cf86eb0102fd2057070a38275b66c1c28. 
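Each "Pulled image" line resolves a mutable tag to an immutable repo digest (here sha256:d282f6...), and the throughput is readable straight off the log: roughly 35158669 bytes in 2.960789457s, about 11.9 MB/s. A repo digest is just the SHA-256 of the image manifest, so any content-addressed blob can be re-verified offline by hashing its bytes; a generic sketch with stand-in bytes rather than a real manifest fetch:

package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// Stand-in for manifest bytes fetched from a registry.
	manifest := []byte("example manifest bytes")

	// A repo digest is "sha256:" + hex(SHA-256(manifest)).
	sum := sha256.Sum256(manifest)
	fmt.Printf("sha256:%x\n", sum)
}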
May 17 00:20:49.889638 containerd[1460]: time="2025-05-17T00:20:49.889492002Z" level=info msg="StartContainer for \"b7ca900df192060cfc058914189d746cf86eb0102fd2057070a38275b66c1c28\" returns successfully" May 17 00:20:50.233962 kubelet[2491]: E0517 00:20:50.233580 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4zkcv" podUID="7495b3bd-a626-4600-9f8e-cc5963e6df5a" May 17 00:20:50.279321 kubelet[2491]: E0517 00:20:50.279264 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:50.289447 kubelet[2491]: I0517 00:20:50.289393 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-cd9659f5d-vpq2k" podStartSLOduration=1.327330483 podStartE2EDuration="4.289378698s" podCreationTimestamp="2025-05-17 00:20:46 +0000 UTC" firstStartedPulling="2025-05-17 00:20:46.628654035 +0000 UTC m=+18.481302504" lastFinishedPulling="2025-05-17 00:20:49.59070225 +0000 UTC m=+21.443350719" observedRunningTime="2025-05-17 00:20:50.289370873 +0000 UTC m=+22.142019343" watchObservedRunningTime="2025-05-17 00:20:50.289378698 +0000 UTC m=+22.142027167" May 17 00:20:50.360274 kubelet[2491]: E0517 00:20:50.360237 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:50.360274 kubelet[2491]: W0517 00:20:50.360266 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:50.360456 kubelet[2491]: E0517 00:20:50.360290 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:20:50.360636 kubelet[2491]: E0517 00:20:50.360618 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:50.360636 kubelet[2491]: W0517 00:20:50.360632 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:50.360730 kubelet[2491]: E0517 00:20:50.360641 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:20:50.360906 kubelet[2491]: E0517 00:20:50.360887 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:50.360906 kubelet[2491]: W0517 00:20:50.360904 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:50.360990 kubelet[2491]: E0517 00:20:50.360916 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:20:50.361167 kubelet[2491]: E0517 00:20:50.361147 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:50.361167 kubelet[2491]: W0517 00:20:50.361159 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:50.361167 kubelet[2491]: E0517 00:20:50.361168 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:20:50.375727 kubelet[2491]: E0517 00:20:50.374797 2491 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:20:50.375727 kubelet[2491]: W0517 00:20:50.374821 2491 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:20:50.375727 kubelet[2491]: E0517 00:20:50.374830 2491 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:20:51.087507 containerd[1460]: time="2025-05-17T00:20:51.087432715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:51.088374 containerd[1460]: time="2025-05-17T00:20:51.088320743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4441619" May 17 00:20:51.089526 containerd[1460]: time="2025-05-17T00:20:51.089453142Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:51.091635 containerd[1460]: time="2025-05-17T00:20:51.091581313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:51.092231 containerd[1460]: time="2025-05-17T00:20:51.092196891Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.50129767s" May 17 00:20:51.092286 containerd[1460]: time="2025-05-17T00:20:51.092229582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 17 00:20:51.094375 containerd[1460]: time="2025-05-17T00:20:51.094349127Z" level=info msg="CreateContainer within sandbox \"1566baabc9ba4f33a4b138f6c68dd5aa2332c7d5ae303fe0cbb5a662334ec4f2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 00:20:51.109956 containerd[1460]: time="2025-05-17T00:20:51.109900241Z" level=info msg="CreateContainer within sandbox \"1566baabc9ba4f33a4b138f6c68dd5aa2332c7d5ae303fe0cbb5a662334ec4f2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6b2f7a50bc5342c430d846e042d76dbc3687af3219c42d02f786387bd999235a\"" May 17 00:20:51.110397 containerd[1460]: time="2025-05-17T00:20:51.110369503Z" level=info msg="StartContainer for \"6b2f7a50bc5342c430d846e042d76dbc3687af3219c42d02f786387bd999235a\"" May 17 00:20:51.143899 systemd[1]: Started cri-containerd-6b2f7a50bc5342c430d846e042d76dbc3687af3219c42d02f786387bd999235a.scope - libcontainer container 6b2f7a50bc5342c430d846e042d76dbc3687af3219c42d02f786387bd999235a. May 17 00:20:51.235991 systemd[1]: cri-containerd-6b2f7a50bc5342c430d846e042d76dbc3687af3219c42d02f786387bd999235a.scope: Deactivated successfully. 
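The FlexVolume spam above has a simple mechanical cause: the kubelet execs the driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, the binary does not exist yet (the flexvol-driver container created just above is what installs it), the call therefore produces empty output, and decoding an empty byte slice as JSON yields exactly the message logged by driver-call.go. A minimal Go sketch of that failure mode; the struct here is illustrative, not kubelet's actual type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus stands in for the JSON reply a FlexVolume driver is
// expected to print for "init"; the field set is an assumption for
// illustration only.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// A driver binary missing from $PATH yields no output at all, so
	// the prober ends up unmarshalling an empty byte slice:
	var st driverStatus
	err := json.Unmarshal([]byte(""), &st)
	fmt.Println(err) // prints: unexpected end of JSON input
}
```

Once the flexvol-driver container drops a working binary into that directory, a probe that prints a well-formed status object satisfies the prober and the repetition stops, which is why the errors cease after this point in the log.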
May 17 00:20:51.380223 containerd[1460]: time="2025-05-17T00:20:51.380047317Z" level=info msg="StartContainer for \"6b2f7a50bc5342c430d846e042d76dbc3687af3219c42d02f786387bd999235a\" returns successfully" May 17 00:20:51.382930 kubelet[2491]: I0517 00:20:51.382903 2491 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:20:51.383362 kubelet[2491]: E0517 00:20:51.383231 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:51.401372 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b2f7a50bc5342c430d846e042d76dbc3687af3219c42d02f786387bd999235a-rootfs.mount: Deactivated successfully. May 17 00:20:52.234208 kubelet[2491]: E0517 00:20:52.234160 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4zkcv" podUID="7495b3bd-a626-4600-9f8e-cc5963e6df5a" May 17 00:20:52.459944 containerd[1460]: time="2025-05-17T00:20:52.459877562Z" level=info msg="shim disconnected" id=6b2f7a50bc5342c430d846e042d76dbc3687af3219c42d02f786387bd999235a namespace=k8s.io May 17 00:20:52.459944 containerd[1460]: time="2025-05-17T00:20:52.459932035Z" level=warning msg="cleaning up after shim disconnected" id=6b2f7a50bc5342c430d846e042d76dbc3687af3219c42d02f786387bd999235a namespace=k8s.io May 17 00:20:52.459944 containerd[1460]: time="2025-05-17T00:20:52.459941733Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:20:53.387356 containerd[1460]: time="2025-05-17T00:20:53.387320613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:20:54.234296 kubelet[2491]: E0517 00:20:54.234219 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4zkcv" podUID="7495b3bd-a626-4600-9f8e-cc5963e6df5a" May 17 00:20:56.505416 kubelet[2491]: E0517 00:20:56.504735 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4zkcv" podUID="7495b3bd-a626-4600-9f8e-cc5963e6df5a" May 17 00:20:56.530690 containerd[1460]: time="2025-05-17T00:20:56.530635750Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:56.531403 containerd[1460]: time="2025-05-17T00:20:56.531368087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 17 00:20:56.532549 containerd[1460]: time="2025-05-17T00:20:56.532486147Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:20:56.534521 containerd[1460]: time="2025-05-17T00:20:56.534493698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" May 17 00:20:56.535263 containerd[1460]: time="2025-05-17T00:20:56.535231384Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 3.147875004s" May 17 00:20:56.535292 containerd[1460]: time="2025-05-17T00:20:56.535261701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 17 00:20:56.541697 containerd[1460]: time="2025-05-17T00:20:56.541620488Z" level=info msg="CreateContainer within sandbox \"1566baabc9ba4f33a4b138f6c68dd5aa2332c7d5ae303fe0cbb5a662334ec4f2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:20:56.557514 containerd[1460]: time="2025-05-17T00:20:56.557446099Z" level=info msg="CreateContainer within sandbox \"1566baabc9ba4f33a4b138f6c68dd5aa2332c7d5ae303fe0cbb5a662334ec4f2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3f43843b4ad223480fbec6c6f8fd01c249efafc489dd35e2c81c117350fe3938\"" May 17 00:20:56.558090 containerd[1460]: time="2025-05-17T00:20:56.558062237Z" level=info msg="StartContainer for \"3f43843b4ad223480fbec6c6f8fd01c249efafc489dd35e2c81c117350fe3938\"" May 17 00:20:56.589989 systemd[1]: Started cri-containerd-3f43843b4ad223480fbec6c6f8fd01c249efafc489dd35e2c81c117350fe3938.scope - libcontainer container 3f43843b4ad223480fbec6c6f8fd01c249efafc489dd35e2c81c117350fe3938. May 17 00:20:56.622724 containerd[1460]: time="2025-05-17T00:20:56.622679566Z" level=info msg="StartContainer for \"3f43843b4ad223480fbec6c6f8fd01c249efafc489dd35e2c81c117350fe3938\" returns successfully" May 17 00:20:57.703821 containerd[1460]: time="2025-05-17T00:20:57.703759014Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:20:57.706497 systemd[1]: cri-containerd-3f43843b4ad223480fbec6c6f8fd01c249efafc489dd35e2c81c117350fe3938.scope: Deactivated successfully. May 17 00:20:57.726289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f43843b4ad223480fbec6c6f8fd01c249efafc489dd35e2c81c117350fe3938-rootfs.mount: Deactivated successfully. May 17 00:20:57.732510 containerd[1460]: time="2025-05-17T00:20:57.732455012Z" level=info msg="shim disconnected" id=3f43843b4ad223480fbec6c6f8fd01c249efafc489dd35e2c81c117350fe3938 namespace=k8s.io May 17 00:20:57.732510 containerd[1460]: time="2025-05-17T00:20:57.732502851Z" level=warning msg="cleaning up after shim disconnected" id=3f43843b4ad223480fbec6c6f8fd01c249efafc489dd35e2c81c117350fe3938 namespace=k8s.io May 17 00:20:57.732510 containerd[1460]: time="2025-05-17T00:20:57.732511828Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:20:57.733306 kubelet[2491]: I0517 00:20:57.733281 2491 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 00:20:57.766139 systemd[1]: Created slice kubepods-burstable-pod8b2899f4_79bc_4fef_b1ad_48d139bf5859.slice - libcontainer container kubepods-burstable-pod8b2899f4_79bc_4fef_b1ad_48d139bf5859.slice. 
May 17 00:20:57.775989 systemd[1]: Created slice kubepods-burstable-podc5389095_df96_41b4_8890_9e655dbc39b6.slice - libcontainer container kubepods-burstable-podc5389095_df96_41b4_8890_9e655dbc39b6.slice. May 17 00:20:57.783676 systemd[1]: Created slice kubepods-besteffort-pod7b05a098_fd89_437b_9657_38da60548e2f.slice - libcontainer container kubepods-besteffort-pod7b05a098_fd89_437b_9657_38da60548e2f.slice. May 17 00:20:57.791500 systemd[1]: Created slice kubepods-besteffort-podcb836777_e67d_4d21_a5e7_16ba9fc2ef39.slice - libcontainer container kubepods-besteffort-podcb836777_e67d_4d21_a5e7_16ba9fc2ef39.slice. May 17 00:20:57.797134 systemd[1]: Created slice kubepods-besteffort-pod592a7817_1a54_43e6_91e2_b61a4e065de1.slice - libcontainer container kubepods-besteffort-pod592a7817_1a54_43e6_91e2_b61a4e065de1.slice. May 17 00:20:57.802803 systemd[1]: Created slice kubepods-besteffort-poddcbcb463_2034_44b0_98b4_0b1740b2500e.slice - libcontainer container kubepods-besteffort-poddcbcb463_2034_44b0_98b4_0b1740b2500e.slice. May 17 00:20:57.808305 systemd[1]: Created slice kubepods-besteffort-pod27d62ebe_2b95_481d_a9b3_1fc4f1115cb0.slice - libcontainer container kubepods-besteffort-pod27d62ebe_2b95_481d_a9b3_1fc4f1115cb0.slice. May 17 00:20:57.911184 kubelet[2491]: I0517 00:20:57.911134 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dcbcb463-2034-44b0-98b4-0b1740b2500e-calico-apiserver-certs\") pod \"calico-apiserver-dbc85d568-xxnxn\" (UID: \"dcbcb463-2034-44b0-98b4-0b1740b2500e\") " pod="calico-apiserver/calico-apiserver-dbc85d568-xxnxn" May 17 00:20:57.911184 kubelet[2491]: I0517 00:20:57.911180 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgtcf\" (UniqueName: \"kubernetes.io/projected/8b2899f4-79bc-4fef-b1ad-48d139bf5859-kube-api-access-bgtcf\") pod \"coredns-7c65d6cfc9-f92hb\" (UID: \"8b2899f4-79bc-4fef-b1ad-48d139bf5859\") " pod="kube-system/coredns-7c65d6cfc9-f92hb" May 17 00:20:57.911386 kubelet[2491]: I0517 00:20:57.911198 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/cb836777-e67d-4d21-a5e7-16ba9fc2ef39-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-qqvpp\" (UID: \"cb836777-e67d-4d21-a5e7-16ba9fc2ef39\") " pod="calico-system/goldmane-8f77d7b6c-qqvpp" May 17 00:20:57.911386 kubelet[2491]: I0517 00:20:57.911222 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66zkf\" (UniqueName: \"kubernetes.io/projected/cb836777-e67d-4d21-a5e7-16ba9fc2ef39-kube-api-access-66zkf\") pod \"goldmane-8f77d7b6c-qqvpp\" (UID: \"cb836777-e67d-4d21-a5e7-16ba9fc2ef39\") " pod="calico-system/goldmane-8f77d7b6c-qqvpp" May 17 00:20:57.911386 kubelet[2491]: I0517 00:20:57.911242 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw2cz\" (UniqueName: \"kubernetes.io/projected/592a7817-1a54-43e6-91e2-b61a4e065de1-kube-api-access-gw2cz\") pod \"calico-apiserver-dbc85d568-vmlk5\" (UID: \"592a7817-1a54-43e6-91e2-b61a4e065de1\") " pod="calico-apiserver/calico-apiserver-dbc85d568-vmlk5" May 17 00:20:57.911386 kubelet[2491]: I0517 00:20:57.911284 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bljtr\" (UniqueName: 
\"kubernetes.io/projected/dcbcb463-2034-44b0-98b4-0b1740b2500e-kube-api-access-bljtr\") pod \"calico-apiserver-dbc85d568-xxnxn\" (UID: \"dcbcb463-2034-44b0-98b4-0b1740b2500e\") " pod="calico-apiserver/calico-apiserver-dbc85d568-xxnxn" May 17 00:20:57.911386 kubelet[2491]: I0517 00:20:57.911304 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb836777-e67d-4d21-a5e7-16ba9fc2ef39-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-qqvpp\" (UID: \"cb836777-e67d-4d21-a5e7-16ba9fc2ef39\") " pod="calico-system/goldmane-8f77d7b6c-qqvpp" May 17 00:20:57.911503 kubelet[2491]: I0517 00:20:57.911342 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b05a098-fd89-437b-9657-38da60548e2f-tigera-ca-bundle\") pod \"calico-kube-controllers-5d9df4f78-vb7v8\" (UID: \"7b05a098-fd89-437b-9657-38da60548e2f\") " pod="calico-system/calico-kube-controllers-5d9df4f78-vb7v8" May 17 00:20:57.911503 kubelet[2491]: I0517 00:20:57.911395 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27d62ebe-2b95-481d-a9b3-1fc4f1115cb0-whisker-ca-bundle\") pod \"whisker-579785955b-6wshv\" (UID: \"27d62ebe-2b95-481d-a9b3-1fc4f1115cb0\") " pod="calico-system/whisker-579785955b-6wshv" May 17 00:20:57.911503 kubelet[2491]: I0517 00:20:57.911433 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b2899f4-79bc-4fef-b1ad-48d139bf5859-config-volume\") pod \"coredns-7c65d6cfc9-f92hb\" (UID: \"8b2899f4-79bc-4fef-b1ad-48d139bf5859\") " pod="kube-system/coredns-7c65d6cfc9-f92hb" May 17 00:20:57.911503 kubelet[2491]: I0517 00:20:57.911449 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx4fp\" (UniqueName: \"kubernetes.io/projected/27d62ebe-2b95-481d-a9b3-1fc4f1115cb0-kube-api-access-qx4fp\") pod \"whisker-579785955b-6wshv\" (UID: \"27d62ebe-2b95-481d-a9b3-1fc4f1115cb0\") " pod="calico-system/whisker-579785955b-6wshv" May 17 00:20:57.911503 kubelet[2491]: I0517 00:20:57.911463 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnd9k\" (UniqueName: \"kubernetes.io/projected/7b05a098-fd89-437b-9657-38da60548e2f-kube-api-access-nnd9k\") pod \"calico-kube-controllers-5d9df4f78-vb7v8\" (UID: \"7b05a098-fd89-437b-9657-38da60548e2f\") " pod="calico-system/calico-kube-controllers-5d9df4f78-vb7v8" May 17 00:20:57.911625 kubelet[2491]: I0517 00:20:57.911478 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5389095-df96-41b4-8890-9e655dbc39b6-config-volume\") pod \"coredns-7c65d6cfc9-75lgr\" (UID: \"c5389095-df96-41b4-8890-9e655dbc39b6\") " pod="kube-system/coredns-7c65d6cfc9-75lgr" May 17 00:20:57.911625 kubelet[2491]: I0517 00:20:57.911496 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krfn4\" (UniqueName: \"kubernetes.io/projected/c5389095-df96-41b4-8890-9e655dbc39b6-kube-api-access-krfn4\") pod \"coredns-7c65d6cfc9-75lgr\" (UID: \"c5389095-df96-41b4-8890-9e655dbc39b6\") " pod="kube-system/coredns-7c65d6cfc9-75lgr" 
May 17 00:20:57.911625 kubelet[2491]: I0517 00:20:57.911509 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/27d62ebe-2b95-481d-a9b3-1fc4f1115cb0-whisker-backend-key-pair\") pod \"whisker-579785955b-6wshv\" (UID: \"27d62ebe-2b95-481d-a9b3-1fc4f1115cb0\") " pod="calico-system/whisker-579785955b-6wshv" May 17 00:20:57.911625 kubelet[2491]: I0517 00:20:57.911527 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb836777-e67d-4d21-a5e7-16ba9fc2ef39-config\") pod \"goldmane-8f77d7b6c-qqvpp\" (UID: \"cb836777-e67d-4d21-a5e7-16ba9fc2ef39\") " pod="calico-system/goldmane-8f77d7b6c-qqvpp" May 17 00:20:57.911625 kubelet[2491]: I0517 00:20:57.911540 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/592a7817-1a54-43e6-91e2-b61a4e065de1-calico-apiserver-certs\") pod \"calico-apiserver-dbc85d568-vmlk5\" (UID: \"592a7817-1a54-43e6-91e2-b61a4e065de1\") " pod="calico-apiserver/calico-apiserver-dbc85d568-vmlk5" May 17 00:20:58.071135 kubelet[2491]: E0517 00:20:58.071011 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:58.071823 containerd[1460]: time="2025-05-17T00:20:58.071744410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f92hb,Uid:8b2899f4-79bc-4fef-b1ad-48d139bf5859,Namespace:kube-system,Attempt:0,}" May 17 00:20:58.079503 kubelet[2491]: E0517 00:20:58.079464 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:20:58.080011 containerd[1460]: time="2025-05-17T00:20:58.079968909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-75lgr,Uid:c5389095-df96-41b4-8890-9e655dbc39b6,Namespace:kube-system,Attempt:0,}" May 17 00:20:58.089839 containerd[1460]: time="2025-05-17T00:20:58.089670372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d9df4f78-vb7v8,Uid:7b05a098-fd89-437b-9657-38da60548e2f,Namespace:calico-system,Attempt:0,}" May 17 00:20:58.095710 containerd[1460]: time="2025-05-17T00:20:58.095684539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-qqvpp,Uid:cb836777-e67d-4d21-a5e7-16ba9fc2ef39,Namespace:calico-system,Attempt:0,}" May 17 00:20:58.099877 containerd[1460]: time="2025-05-17T00:20:58.099752569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dbc85d568-vmlk5,Uid:592a7817-1a54-43e6-91e2-b61a4e065de1,Namespace:calico-apiserver,Attempt:0,}" May 17 00:20:58.105743 containerd[1460]: time="2025-05-17T00:20:58.105708657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dbc85d568-xxnxn,Uid:dcbcb463-2034-44b0-98b4-0b1740b2500e,Namespace:calico-apiserver,Attempt:0,}" May 17 00:20:58.111707 containerd[1460]: time="2025-05-17T00:20:58.111471131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-579785955b-6wshv,Uid:27d62ebe-2b95-481d-a9b3-1fc4f1115cb0,Namespace:calico-system,Attempt:0,}" May 17 00:20:58.174787 containerd[1460]: time="2025-05-17T00:20:58.173539719Z" level=error msg="Failed to destroy network 
for sandbox \"423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.174787 containerd[1460]: time="2025-05-17T00:20:58.173882743Z" level=error msg="encountered an error cleaning up failed sandbox \"423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.174787 containerd[1460]: time="2025-05-17T00:20:58.173928880Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-75lgr,Uid:c5389095-df96-41b4-8890-9e655dbc39b6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.179624 containerd[1460]: time="2025-05-17T00:20:58.179581728Z" level=error msg="Failed to destroy network for sandbox \"8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.180295 containerd[1460]: time="2025-05-17T00:20:58.180266144Z" level=error msg="encountered an error cleaning up failed sandbox \"8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.180460 containerd[1460]: time="2025-05-17T00:20:58.180439449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f92hb,Uid:8b2899f4-79bc-4fef-b1ad-48d139bf5859,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.196039 kubelet[2491]: E0517 00:20:58.195993 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.196337 kubelet[2491]: E0517 00:20:58.196289 2491 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-f92hb" May 17 00:20:58.196419 kubelet[2491]: E0517 00:20:58.196405 2491 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-f92hb" May 17 00:20:58.196533 kubelet[2491]: E0517 00:20:58.196510 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-f92hb_kube-system(8b2899f4-79bc-4fef-b1ad-48d139bf5859)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-f92hb_kube-system(8b2899f4-79bc-4fef-b1ad-48d139bf5859)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-f92hb" podUID="8b2899f4-79bc-4fef-b1ad-48d139bf5859" May 17 00:20:58.197597 kubelet[2491]: E0517 00:20:58.196208 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.197689 kubelet[2491]: E0517 00:20:58.197673 2491 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-75lgr" May 17 00:20:58.197746 kubelet[2491]: E0517 00:20:58.197733 2491 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-75lgr" May 17 00:20:58.197884 kubelet[2491]: E0517 00:20:58.197854 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-75lgr_kube-system(c5389095-df96-41b4-8890-9e655dbc39b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-75lgr_kube-system(c5389095-df96-41b4-8890-9e655dbc39b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-75lgr" podUID="c5389095-df96-41b4-8890-9e655dbc39b6" 
May 17 00:20:58.237485 containerd[1460]: time="2025-05-17T00:20:58.237412926Z" level=error msg="Failed to destroy network for sandbox \"4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.240447 containerd[1460]: time="2025-05-17T00:20:58.240405517Z" level=error msg="encountered an error cleaning up failed sandbox \"4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.240677 containerd[1460]: time="2025-05-17T00:20:58.240655808Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d9df4f78-vb7v8,Uid:7b05a098-fd89-437b-9657-38da60548e2f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.241123 kubelet[2491]: E0517 00:20:58.241071 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.241255 kubelet[2491]: E0517 00:20:58.241233 2491 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d9df4f78-vb7v8" May 17 00:20:58.241352 kubelet[2491]: E0517 00:20:58.241337 2491 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d9df4f78-vb7v8" May 17 00:20:58.241486 kubelet[2491]: E0517 00:20:58.241451 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d9df4f78-vb7v8_calico-system(7b05a098-fd89-437b-9657-38da60548e2f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d9df4f78-vb7v8_calico-system(7b05a098-fd89-437b-9657-38da60548e2f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d9df4f78-vb7v8" podUID="7b05a098-fd89-437b-9657-38da60548e2f" May 17 00:20:58.242114 containerd[1460]: time="2025-05-17T00:20:58.242071066Z" level=error msg="Failed to destroy network for sandbox \"21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.242600 containerd[1460]: time="2025-05-17T00:20:58.242574131Z" level=error msg="encountered an error cleaning up failed sandbox \"21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.242696 containerd[1460]: time="2025-05-17T00:20:58.242676723Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-qqvpp,Uid:cb836777-e67d-4d21-a5e7-16ba9fc2ef39,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.244078 kubelet[2491]: E0517 00:20:58.244033 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.244223 kubelet[2491]: E0517 00:20:58.244092 2491 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-qqvpp" May 17 00:20:58.244223 kubelet[2491]: E0517 00:20:58.244117 2491 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-qqvpp" May 17 00:20:58.244223 kubelet[2491]: E0517 00:20:58.244156 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-qqvpp_calico-system(cb836777-e67d-4d21-a5e7-16ba9fc2ef39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-qqvpp_calico-system(cb836777-e67d-4d21-a5e7-16ba9fc2ef39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-qqvpp" podUID="cb836777-e67d-4d21-a5e7-16ba9fc2ef39" May 17 00:20:58.245669 systemd[1]: Created slice kubepods-besteffort-pod7495b3bd_a626_4600_9f8e_cc5963e6df5a.slice - libcontainer container kubepods-besteffort-pod7495b3bd_a626_4600_9f8e_cc5963e6df5a.slice. May 17 00:20:58.250431 containerd[1460]: time="2025-05-17T00:20:58.250282240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4zkcv,Uid:7495b3bd-a626-4600-9f8e-cc5963e6df5a,Namespace:calico-system,Attempt:0,}" May 17 00:20:58.254826 containerd[1460]: time="2025-05-17T00:20:58.254746536Z" level=error msg="Failed to destroy network for sandbox \"bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.255558 containerd[1460]: time="2025-05-17T00:20:58.255522243Z" level=error msg="encountered an error cleaning up failed sandbox \"bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.255604 containerd[1460]: time="2025-05-17T00:20:58.255575192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dbc85d568-xxnxn,Uid:dcbcb463-2034-44b0-98b4-0b1740b2500e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.255820 kubelet[2491]: E0517 00:20:58.255762 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.255867 kubelet[2491]: E0517 00:20:58.255838 2491 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dbc85d568-xxnxn" May 17 00:20:58.255867 kubelet[2491]: E0517 00:20:58.255860 2491 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dbc85d568-xxnxn" May 17 00:20:58.255920 kubelet[2491]: 
E0517 00:20:58.255896 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dbc85d568-xxnxn_calico-apiserver(dcbcb463-2034-44b0-98b4-0b1740b2500e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dbc85d568-xxnxn_calico-apiserver(dcbcb463-2034-44b0-98b4-0b1740b2500e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dbc85d568-xxnxn" podUID="dcbcb463-2034-44b0-98b4-0b1740b2500e" May 17 00:20:58.266786 containerd[1460]: time="2025-05-17T00:20:58.264824445Z" level=error msg="Failed to destroy network for sandbox \"a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.266786 containerd[1460]: time="2025-05-17T00:20:58.265376452Z" level=error msg="encountered an error cleaning up failed sandbox \"a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.266786 containerd[1460]: time="2025-05-17T00:20:58.265432828Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-579785955b-6wshv,Uid:27d62ebe-2b95-481d-a9b3-1fc4f1115cb0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.266990 kubelet[2491]: E0517 00:20:58.265644 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.266990 kubelet[2491]: E0517 00:20:58.265706 2491 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-579785955b-6wshv" May 17 00:20:58.266990 kubelet[2491]: E0517 00:20:58.265730 2491 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/whisker-579785955b-6wshv" May 17 00:20:58.267080 kubelet[2491]: E0517 00:20:58.265804 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-579785955b-6wshv_calico-system(27d62ebe-2b95-481d-a9b3-1fc4f1115cb0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-579785955b-6wshv_calico-system(27d62ebe-2b95-481d-a9b3-1fc4f1115cb0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-579785955b-6wshv" podUID="27d62ebe-2b95-481d-a9b3-1fc4f1115cb0" May 17 00:20:58.276141 containerd[1460]: time="2025-05-17T00:20:58.276112268Z" level=error msg="Failed to destroy network for sandbox \"ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.276585 containerd[1460]: time="2025-05-17T00:20:58.276550471Z" level=error msg="encountered an error cleaning up failed sandbox \"ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.276585 containerd[1460]: time="2025-05-17T00:20:58.276591769Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dbc85d568-vmlk5,Uid:592a7817-1a54-43e6-91e2-b61a4e065de1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.276950 kubelet[2491]: E0517 00:20:58.276745 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.276950 kubelet[2491]: E0517 00:20:58.276802 2491 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dbc85d568-vmlk5" May 17 00:20:58.276950 kubelet[2491]: E0517 00:20:58.276819 2491 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dbc85d568-vmlk5" May 17 00:20:58.277042 kubelet[2491]: E0517 00:20:58.276859 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dbc85d568-vmlk5_calico-apiserver(592a7817-1a54-43e6-91e2-b61a4e065de1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dbc85d568-vmlk5_calico-apiserver(592a7817-1a54-43e6-91e2-b61a4e065de1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dbc85d568-vmlk5" podUID="592a7817-1a54-43e6-91e2-b61a4e065de1" May 17 00:20:58.311225 containerd[1460]: time="2025-05-17T00:20:58.311182663Z" level=error msg="Failed to destroy network for sandbox \"47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.311612 containerd[1460]: time="2025-05-17T00:20:58.311589407Z" level=error msg="encountered an error cleaning up failed sandbox \"47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.311654 containerd[1460]: time="2025-05-17T00:20:58.311636375Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4zkcv,Uid:7495b3bd-a626-4600-9f8e-cc5963e6df5a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.311823 kubelet[2491]: E0517 00:20:58.311792 2491 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.311886 kubelet[2491]: E0517 00:20:58.311843 2491 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4zkcv" May 17 00:20:58.311886 kubelet[2491]: E0517 00:20:58.311860 2491 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4zkcv" May 17 00:20:58.311938 kubelet[2491]: E0517 00:20:58.311899 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4zkcv_calico-system(7495b3bd-a626-4600-9f8e-cc5963e6df5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4zkcv_calico-system(7495b3bd-a626-4600-9f8e-cc5963e6df5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4zkcv" podUID="7495b3bd-a626-4600-9f8e-cc5963e6df5a" May 17 00:20:58.516958 kubelet[2491]: I0517 00:20:58.516926 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" May 17 00:20:58.518039 kubelet[2491]: I0517 00:20:58.518005 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" May 17 00:20:58.519369 kubelet[2491]: I0517 00:20:58.519351 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" May 17 00:20:58.519914 containerd[1460]: time="2025-05-17T00:20:58.519880844Z" level=info msg="StopPodSandbox for \"8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824\"" May 17 00:20:58.520661 containerd[1460]: time="2025-05-17T00:20:58.520243165Z" level=info msg="StopPodSandbox for \"a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3\"" May 17 00:20:58.520661 containerd[1460]: time="2025-05-17T00:20:58.520422653Z" level=info msg="Ensure that sandbox a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3 in task-service has been cleanup successfully" May 17 00:20:58.520661 containerd[1460]: time="2025-05-17T00:20:58.520546435Z" level=info msg="StopPodSandbox for \"423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a\"" May 17 00:20:58.520661 containerd[1460]: time="2025-05-17T00:20:58.520580078Z" level=info msg="Ensure that sandbox 8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824 in task-service has been cleanup successfully" May 17 00:20:58.520758 containerd[1460]: time="2025-05-17T00:20:58.520668053Z" level=info msg="Ensure that sandbox 423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a in task-service has been cleanup successfully" May 17 00:20:58.522534 containerd[1460]: time="2025-05-17T00:20:58.522513069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:20:58.523114 kubelet[2491]: I0517 00:20:58.523088 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" May 17 00:20:58.523833 containerd[1460]: time="2025-05-17T00:20:58.523800057Z" level=info msg="StopPodSandbox for \"47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c\"" May 17 00:20:58.524277 kubelet[2491]: I0517 
00:20:58.524255 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" May 17 00:20:58.524617 containerd[1460]: time="2025-05-17T00:20:58.524401376Z" level=info msg="Ensure that sandbox 47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c in task-service has been cleanup successfully" May 17 00:20:58.525487 containerd[1460]: time="2025-05-17T00:20:58.524610208Z" level=info msg="StopPodSandbox for \"bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7\"" May 17 00:20:58.525540 containerd[1460]: time="2025-05-17T00:20:58.525530597Z" level=info msg="Ensure that sandbox bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7 in task-service has been cleanup successfully" May 17 00:20:58.526423 kubelet[2491]: I0517 00:20:58.526387 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" May 17 00:20:58.526813 containerd[1460]: time="2025-05-17T00:20:58.526748135Z" level=info msg="StopPodSandbox for \"21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13\"" May 17 00:20:58.526970 containerd[1460]: time="2025-05-17T00:20:58.526928904Z" level=info msg="Ensure that sandbox 21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13 in task-service has been cleanup successfully" May 17 00:20:58.528382 kubelet[2491]: I0517 00:20:58.528357 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" May 17 00:20:58.528702 containerd[1460]: time="2025-05-17T00:20:58.528676817Z" level=info msg="StopPodSandbox for \"4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94\"" May 17 00:20:58.529572 kubelet[2491]: I0517 00:20:58.529556 2491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" May 17 00:20:58.530192 containerd[1460]: time="2025-05-17T00:20:58.529931844Z" level=info msg="StopPodSandbox for \"ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486\"" May 17 00:20:58.530783 containerd[1460]: time="2025-05-17T00:20:58.530575904Z" level=info msg="Ensure that sandbox ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486 in task-service has been cleanup successfully" May 17 00:20:58.531048 containerd[1460]: time="2025-05-17T00:20:58.531031360Z" level=info msg="Ensure that sandbox 4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94 in task-service has been cleanup successfully" May 17 00:20:58.599837 containerd[1460]: time="2025-05-17T00:20:58.599778312Z" level=error msg="StopPodSandbox for \"47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c\" failed" error="failed to destroy network for sandbox \"47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.600280 kubelet[2491]: E0517 00:20:58.600229 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" May 17 00:20:58.600350 kubelet[2491]: E0517 00:20:58.600293 2491 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c"} May 17 00:20:58.600395 kubelet[2491]: E0517 00:20:58.600370 2491 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7495b3bd-a626-4600-9f8e-cc5963e6df5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:20:58.600472 kubelet[2491]: E0517 00:20:58.600404 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7495b3bd-a626-4600-9f8e-cc5963e6df5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4zkcv" podUID="7495b3bd-a626-4600-9f8e-cc5963e6df5a" May 17 00:20:58.601536 containerd[1460]: time="2025-05-17T00:20:58.601192278Z" level=error msg="StopPodSandbox for \"a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3\" failed" error="failed to destroy network for sandbox \"a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.601684 kubelet[2491]: E0517 00:20:58.601356 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" May 17 00:20:58.601684 kubelet[2491]: E0517 00:20:58.601384 2491 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3"} May 17 00:20:58.601684 kubelet[2491]: E0517 00:20:58.601411 2491 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"27d62ebe-2b95-481d-a9b3-1fc4f1115cb0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:20:58.601684 kubelet[2491]: E0517 00:20:58.601434 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"27d62ebe-2b95-481d-a9b3-1fc4f1115cb0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-579785955b-6wshv" podUID="27d62ebe-2b95-481d-a9b3-1fc4f1115cb0" May 17 00:20:58.603717 containerd[1460]: time="2025-05-17T00:20:58.603687544Z" level=error msg="StopPodSandbox for \"bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7\" failed" error="failed to destroy network for sandbox \"bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.604128 kubelet[2491]: E0517 00:20:58.604074 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" May 17 00:20:58.604206 kubelet[2491]: E0517 00:20:58.604128 2491 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7"} May 17 00:20:58.604206 kubelet[2491]: E0517 00:20:58.604174 2491 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dcbcb463-2034-44b0-98b4-0b1740b2500e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:20:58.604276 kubelet[2491]: E0517 00:20:58.604198 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dcbcb463-2034-44b0-98b4-0b1740b2500e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dbc85d568-xxnxn" podUID="dcbcb463-2034-44b0-98b4-0b1740b2500e" May 17 00:20:58.605595 containerd[1460]: time="2025-05-17T00:20:58.605558168Z" level=error msg="StopPodSandbox for \"8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824\" failed" error="failed to destroy network for sandbox \"8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.605705 kubelet[2491]: E0517 00:20:58.605682 2491 log.go:32] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" May 17 00:20:58.605705 kubelet[2491]: E0517 00:20:58.605707 2491 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824"} May 17 00:20:58.605804 kubelet[2491]: E0517 00:20:58.605738 2491 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8b2899f4-79bc-4fef-b1ad-48d139bf5859\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:20:58.605910 containerd[1460]: time="2025-05-17T00:20:58.605884571Z" level=error msg="StopPodSandbox for \"ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486\" failed" error="failed to destroy network for sandbox \"ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.605962 kubelet[2491]: E0517 00:20:58.605762 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8b2899f4-79bc-4fef-b1ad-48d139bf5859\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-f92hb" podUID="8b2899f4-79bc-4fef-b1ad-48d139bf5859" May 17 00:20:58.606044 kubelet[2491]: E0517 00:20:58.606023 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" May 17 00:20:58.606087 kubelet[2491]: E0517 00:20:58.606045 2491 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486"} May 17 00:20:58.606087 kubelet[2491]: E0517 00:20:58.606076 2491 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"592a7817-1a54-43e6-91e2-b61a4e065de1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:20:58.606144 kubelet[2491]: E0517 00:20:58.606093 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"592a7817-1a54-43e6-91e2-b61a4e065de1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dbc85d568-vmlk5" podUID="592a7817-1a54-43e6-91e2-b61a4e065de1" May 17 00:20:58.606440 containerd[1460]: time="2025-05-17T00:20:58.606411661Z" level=error msg="StopPodSandbox for \"21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13\" failed" error="failed to destroy network for sandbox \"21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.606649 containerd[1460]: time="2025-05-17T00:20:58.606603542Z" level=error msg="StopPodSandbox for \"423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a\" failed" error="failed to destroy network for sandbox \"423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.606876 kubelet[2491]: E0517 00:20:58.606671 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" May 17 00:20:58.606876 kubelet[2491]: E0517 00:20:58.606702 2491 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13"} May 17 00:20:58.606876 kubelet[2491]: E0517 00:20:58.606725 2491 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cb836777-e67d-4d21-a5e7-16ba9fc2ef39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:20:58.606876 kubelet[2491]: E0517 00:20:58.606742 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cb836777-e67d-4d21-a5e7-16ba9fc2ef39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-qqvpp" podUID="cb836777-e67d-4d21-a5e7-16ba9fc2ef39" May 17 00:20:58.607037 kubelet[2491]: E0517 00:20:58.606756 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" May 17 00:20:58.607037 kubelet[2491]: E0517 00:20:58.606802 2491 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a"} May 17 00:20:58.607037 kubelet[2491]: E0517 00:20:58.606822 2491 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c5389095-df96-41b4-8890-9e655dbc39b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:20:58.607037 kubelet[2491]: E0517 00:20:58.606851 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c5389095-df96-41b4-8890-9e655dbc39b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-75lgr" podUID="c5389095-df96-41b4-8890-9e655dbc39b6" May 17 00:20:58.608128 containerd[1460]: time="2025-05-17T00:20:58.608096216Z" level=error msg="StopPodSandbox for \"4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94\" failed" error="failed to destroy network for sandbox \"4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:20:58.608288 kubelet[2491]: E0517 00:20:58.608231 2491 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" May 17 00:20:58.608288 kubelet[2491]: E0517 00:20:58.608271 2491 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94"} May 17 00:20:58.608364 kubelet[2491]: E0517 00:20:58.608313 2491 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b05a098-fd89-437b-9657-38da60548e2f\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 00:20:58.608364 kubelet[2491]: E0517 00:20:58.608353 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b05a098-fd89-437b-9657-38da60548e2f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d9df4f78-vb7v8" podUID="7b05a098-fd89-437b-9657-38da60548e2f" May 17 00:20:58.729190 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824-shm.mount: Deactivated successfully. May 17 00:21:02.914526 systemd[1]: Started sshd@7-10.0.0.98:22-10.0.0.1:39318.service - OpenSSH per-connection server daemon (10.0.0.1:39318). May 17 00:21:03.055615 sshd[3675]: Accepted publickey for core from 10.0.0.1 port 39318 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:21:03.057586 sshd[3675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:03.062162 systemd-logind[1446]: New session 8 of user core. May 17 00:21:03.067885 systemd[1]: Started session-8.scope - Session 8 of User core. May 17 00:21:03.204797 sshd[3675]: pam_unix(sshd:session): session closed for user core May 17 00:21:03.208080 systemd[1]: sshd@7-10.0.0.98:22-10.0.0.1:39318.service: Deactivated successfully. May 17 00:21:03.209956 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:21:03.211186 systemd-logind[1446]: Session 8 logged out. Waiting for processes to exit. May 17 00:21:03.212242 systemd-logind[1446]: Removed session 8. May 17 00:21:03.454691 kubelet[2491]: I0517 00:21:03.454643 2491 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:21:03.455399 kubelet[2491]: E0517 00:21:03.455057 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:21:03.541520 kubelet[2491]: E0517 00:21:03.540910 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:21:04.910274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3033775476.mount: Deactivated successfully. 
May 17 00:21:05.352719 containerd[1460]: time="2025-05-17T00:21:05.350503632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372" May 17 00:21:05.353794 containerd[1460]: time="2025-05-17T00:21:05.349557555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:05.354416 containerd[1460]: time="2025-05-17T00:21:05.354390448Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:05.355092 containerd[1460]: time="2025-05-17T00:21:05.355055167Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 6.83246332s" May 17 00:21:05.355128 containerd[1460]: time="2025-05-17T00:21:05.355093700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\"" May 17 00:21:05.355676 containerd[1460]: time="2025-05-17T00:21:05.355628173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:05.363357 containerd[1460]: time="2025-05-17T00:21:05.363314924Z" level=info msg="CreateContainer within sandbox \"1566baabc9ba4f33a4b138f6c68dd5aa2332c7d5ae303fe0cbb5a662334ec4f2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 00:21:05.379276 containerd[1460]: time="2025-05-17T00:21:05.379230273Z" level=info msg="CreateContainer within sandbox \"1566baabc9ba4f33a4b138f6c68dd5aa2332c7d5ae303fe0cbb5a662334ec4f2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"fe947581fe56edfbfd619a4df4bfe5111a06c70ca90f9a3e9ef2eacebb066247\"" May 17 00:21:05.379725 containerd[1460]: time="2025-05-17T00:21:05.379699955Z" level=info msg="StartContainer for \"fe947581fe56edfbfd619a4df4bfe5111a06c70ca90f9a3e9ef2eacebb066247\"" May 17 00:21:05.440919 systemd[1]: Started cri-containerd-fe947581fe56edfbfd619a4df4bfe5111a06c70ca90f9a3e9ef2eacebb066247.scope - libcontainer container fe947581fe56edfbfd619a4df4bfe5111a06c70ca90f9a3e9ef2eacebb066247. May 17 00:21:05.608886 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 00:21:05.609071 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
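For scale, the "Pulled image" entry above reports a 156,396,234-byte image fetched in 6.83246332 s, roughly 23 MB/s. A trivial sketch reproducing that arithmetic, with both figures copied from the log entry:

```go
package main

import "fmt"

func main() {
	// Size and duration copied verbatim from the "Pulled image" entry above.
	const size = 156396234.0 // bytes, per the size field
	const dur = 6.83246332   // seconds
	fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", size/dur/1e6, size/dur/(1<<20))
}
```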
May 17 00:21:05.731632 containerd[1460]: time="2025-05-17T00:21:05.731543928Z" level=info msg="StartContainer for \"fe947581fe56edfbfd619a4df4bfe5111a06c70ca90f9a3e9ef2eacebb066247\" returns successfully" May 17 00:21:05.812639 containerd[1460]: time="2025-05-17T00:21:05.811674756Z" level=info msg="StopPodSandbox for \"a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3\"" May 17 00:21:05.977254 containerd[1460]: 2025-05-17 00:21:05.867 [INFO][3765] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" May 17 00:21:05.977254 containerd[1460]: 2025-05-17 00:21:05.869 [INFO][3765] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" iface="eth0" netns="/var/run/netns/cni-e4492c73-76ad-2806-4ead-b5b15041bd8f" May 17 00:21:05.977254 containerd[1460]: 2025-05-17 00:21:05.869 [INFO][3765] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" iface="eth0" netns="/var/run/netns/cni-e4492c73-76ad-2806-4ead-b5b15041bd8f" May 17 00:21:05.977254 containerd[1460]: 2025-05-17 00:21:05.870 [INFO][3765] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" iface="eth0" netns="/var/run/netns/cni-e4492c73-76ad-2806-4ead-b5b15041bd8f" May 17 00:21:05.977254 containerd[1460]: 2025-05-17 00:21:05.870 [INFO][3765] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" May 17 00:21:05.977254 containerd[1460]: 2025-05-17 00:21:05.870 [INFO][3765] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" May 17 00:21:05.977254 containerd[1460]: 2025-05-17 00:21:05.936 [INFO][3775] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" HandleID="k8s-pod-network.a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" Workload="localhost-k8s-whisker--579785955b--6wshv-eth0" May 17 00:21:05.977254 containerd[1460]: 2025-05-17 00:21:05.936 [INFO][3775] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:05.977254 containerd[1460]: 2025-05-17 00:21:05.936 [INFO][3775] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:05.977254 containerd[1460]: 2025-05-17 00:21:05.965 [WARNING][3775] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" HandleID="k8s-pod-network.a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" Workload="localhost-k8s-whisker--579785955b--6wshv-eth0" May 17 00:21:05.977254 containerd[1460]: 2025-05-17 00:21:05.965 [INFO][3775] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" HandleID="k8s-pod-network.a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" Workload="localhost-k8s-whisker--579785955b--6wshv-eth0" May 17 00:21:05.977254 containerd[1460]: 2025-05-17 00:21:05.967 [INFO][3775] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:21:05.977254 containerd[1460]: 2025-05-17 00:21:05.973 [INFO][3765] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" May 17 00:21:05.977948 containerd[1460]: time="2025-05-17T00:21:05.977921788Z" level=info msg="TearDown network for sandbox \"a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3\" successfully" May 17 00:21:05.978030 containerd[1460]: time="2025-05-17T00:21:05.977990999Z" level=info msg="StopPodSandbox for \"a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3\" returns successfully" May 17 00:21:05.980702 systemd[1]: run-netns-cni\x2de4492c73\x2d76ad\x2d2806\x2d4ead\x2db5b15041bd8f.mount: Deactivated successfully. May 17 00:21:06.167319 kubelet[2491]: I0517 00:21:06.167250 2491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx4fp\" (UniqueName: \"kubernetes.io/projected/27d62ebe-2b95-481d-a9b3-1fc4f1115cb0-kube-api-access-qx4fp\") pod \"27d62ebe-2b95-481d-a9b3-1fc4f1115cb0\" (UID: \"27d62ebe-2b95-481d-a9b3-1fc4f1115cb0\") " May 17 00:21:06.167319 kubelet[2491]: I0517 00:21:06.167310 2491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27d62ebe-2b95-481d-a9b3-1fc4f1115cb0-whisker-ca-bundle\") pod \"27d62ebe-2b95-481d-a9b3-1fc4f1115cb0\" (UID: \"27d62ebe-2b95-481d-a9b3-1fc4f1115cb0\") " May 17 00:21:06.167837 kubelet[2491]: I0517 00:21:06.167339 2491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/27d62ebe-2b95-481d-a9b3-1fc4f1115cb0-whisker-backend-key-pair\") pod \"27d62ebe-2b95-481d-a9b3-1fc4f1115cb0\" (UID: \"27d62ebe-2b95-481d-a9b3-1fc4f1115cb0\") " May 17 00:21:06.168077 kubelet[2491]: I0517 00:21:06.167983 2491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d62ebe-2b95-481d-a9b3-1fc4f1115cb0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "27d62ebe-2b95-481d-a9b3-1fc4f1115cb0" (UID: "27d62ebe-2b95-481d-a9b3-1fc4f1115cb0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 00:21:06.172719 kubelet[2491]: I0517 00:21:06.172680 2491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27d62ebe-2b95-481d-a9b3-1fc4f1115cb0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "27d62ebe-2b95-481d-a9b3-1fc4f1115cb0" (UID: "27d62ebe-2b95-481d-a9b3-1fc4f1115cb0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 00:21:06.173043 kubelet[2491]: I0517 00:21:06.172989 2491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27d62ebe-2b95-481d-a9b3-1fc4f1115cb0-kube-api-access-qx4fp" (OuterVolumeSpecName: "kube-api-access-qx4fp") pod "27d62ebe-2b95-481d-a9b3-1fc4f1115cb0" (UID: "27d62ebe-2b95-481d-a9b3-1fc4f1115cb0"). InnerVolumeSpecName "kube-api-access-qx4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 00:21:06.174872 systemd[1]: var-lib-kubelet-pods-27d62ebe\x2d2b95\x2d481d\x2da9b3\x2d1fc4f1115cb0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqx4fp.mount: Deactivated successfully. 
May 17 00:21:06.175022 systemd[1]: var-lib-kubelet-pods-27d62ebe\x2d2b95\x2d481d\x2da9b3\x2d1fc4f1115cb0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 17 00:21:06.241751 systemd[1]: Removed slice kubepods-besteffort-pod27d62ebe_2b95_481d_a9b3_1fc4f1115cb0.slice - libcontainer container kubepods-besteffort-pod27d62ebe_2b95_481d_a9b3_1fc4f1115cb0.slice. May 17 00:21:06.267931 kubelet[2491]: I0517 00:21:06.267899 2491 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/27d62ebe-2b95-481d-a9b3-1fc4f1115cb0-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" May 17 00:21:06.267931 kubelet[2491]: I0517 00:21:06.267929 2491 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx4fp\" (UniqueName: \"kubernetes.io/projected/27d62ebe-2b95-481d-a9b3-1fc4f1115cb0-kube-api-access-qx4fp\") on node \"localhost\" DevicePath \"\"" May 17 00:21:06.268034 kubelet[2491]: I0517 00:21:06.267942 2491 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27d62ebe-2b95-481d-a9b3-1fc4f1115cb0-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 17 00:21:06.754195 kubelet[2491]: I0517 00:21:06.754117 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lmp7t" podStartSLOduration=2.089920388 podStartE2EDuration="20.754101262s" podCreationTimestamp="2025-05-17 00:20:46 +0000 UTC" firstStartedPulling="2025-05-17 00:20:46.691504015 +0000 UTC m=+18.544152484" lastFinishedPulling="2025-05-17 00:21:05.355684889 +0000 UTC m=+37.208333358" observedRunningTime="2025-05-17 00:21:06.752482222 +0000 UTC m=+38.605130691" watchObservedRunningTime="2025-05-17 00:21:06.754101262 +0000 UTC m=+38.606749721" May 17 00:21:06.999822 systemd[1]: Created slice kubepods-besteffort-pod2d7523c0_3762_409a_b3ff_19b0db89e578.slice - libcontainer container kubepods-besteffort-pod2d7523c0_3762_409a_b3ff_19b0db89e578.slice. 
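The startup-latency entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). A sketch that reproduces both numbers from the four timestamps in the entry; the subtraction rule is inferred from the figures shown, not quoted from kubelet source:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// All four timestamps are copied from the pod_startup_latency_tracker
	// entry above.
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-05-17 00:20:46 +0000 UTC")
	firstPull := parse("2025-05-17 00:20:46.691504015 +0000 UTC")
	lastPull := parse("2025-05-17 00:21:05.355684889 +0000 UTC")
	running := parse("2025-05-17 00:21:06.754101262 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // e2e minus image-pull time
	fmt.Println(e2e, slo)                // 20.754101262s 2.089920388s
}
```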
May 17 00:21:07.175193 kubelet[2491]: I0517 00:21:07.175053 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d7523c0-3762-409a-b3ff-19b0db89e578-whisker-ca-bundle\") pod \"whisker-54654dbf54-bv8kl\" (UID: \"2d7523c0-3762-409a-b3ff-19b0db89e578\") " pod="calico-system/whisker-54654dbf54-bv8kl" May 17 00:21:07.175193 kubelet[2491]: I0517 00:21:07.175096 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w59d\" (UniqueName: \"kubernetes.io/projected/2d7523c0-3762-409a-b3ff-19b0db89e578-kube-api-access-5w59d\") pod \"whisker-54654dbf54-bv8kl\" (UID: \"2d7523c0-3762-409a-b3ff-19b0db89e578\") " pod="calico-system/whisker-54654dbf54-bv8kl" May 17 00:21:07.175193 kubelet[2491]: I0517 00:21:07.175133 2491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2d7523c0-3762-409a-b3ff-19b0db89e578-whisker-backend-key-pair\") pod \"whisker-54654dbf54-bv8kl\" (UID: \"2d7523c0-3762-409a-b3ff-19b0db89e578\") " pod="calico-system/whisker-54654dbf54-bv8kl" May 17 00:21:07.258875 kernel: bpftool[3928]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 17 00:21:07.304075 containerd[1460]: time="2025-05-17T00:21:07.304026224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54654dbf54-bv8kl,Uid:2d7523c0-3762-409a-b3ff-19b0db89e578,Namespace:calico-system,Attempt:0,}" May 17 00:21:07.455820 systemd-networkd[1364]: calieeaf45eb3ae: Link UP May 17 00:21:07.457613 systemd-networkd[1364]: calieeaf45eb3ae: Gained carrier May 17 00:21:07.477828 containerd[1460]: 2025-05-17 00:21:07.382 [INFO][3951] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--54654dbf54--bv8kl-eth0 whisker-54654dbf54- calico-system 2d7523c0-3762-409a-b3ff-19b0db89e578 1004 0 2025-05-17 00:21:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:54654dbf54 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-54654dbf54-bv8kl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calieeaf45eb3ae [] [] }} ContainerID="d8a944f7f82ff19b28a537b2097ed139b0552115fe3c3dd17cec6757f7a7882b" Namespace="calico-system" Pod="whisker-54654dbf54-bv8kl" WorkloadEndpoint="localhost-k8s-whisker--54654dbf54--bv8kl-" May 17 00:21:07.477828 containerd[1460]: 2025-05-17 00:21:07.382 [INFO][3951] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d8a944f7f82ff19b28a537b2097ed139b0552115fe3c3dd17cec6757f7a7882b" Namespace="calico-system" Pod="whisker-54654dbf54-bv8kl" WorkloadEndpoint="localhost-k8s-whisker--54654dbf54--bv8kl-eth0" May 17 00:21:07.477828 containerd[1460]: 2025-05-17 00:21:07.413 [INFO][3965] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8a944f7f82ff19b28a537b2097ed139b0552115fe3c3dd17cec6757f7a7882b" HandleID="k8s-pod-network.d8a944f7f82ff19b28a537b2097ed139b0552115fe3c3dd17cec6757f7a7882b" Workload="localhost-k8s-whisker--54654dbf54--bv8kl-eth0" May 17 00:21:07.477828 containerd[1460]: 2025-05-17 00:21:07.414 [INFO][3965] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d8a944f7f82ff19b28a537b2097ed139b0552115fe3c3dd17cec6757f7a7882b" 
HandleID="k8s-pod-network.d8a944f7f82ff19b28a537b2097ed139b0552115fe3c3dd17cec6757f7a7882b" Workload="localhost-k8s-whisker--54654dbf54--bv8kl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005968d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-54654dbf54-bv8kl", "timestamp":"2025-05-17 00:21:07.413752652 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:21:07.477828 containerd[1460]: 2025-05-17 00:21:07.414 [INFO][3965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:07.477828 containerd[1460]: 2025-05-17 00:21:07.414 [INFO][3965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:07.477828 containerd[1460]: 2025-05-17 00:21:07.414 [INFO][3965] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:21:07.477828 containerd[1460]: 2025-05-17 00:21:07.421 [INFO][3965] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d8a944f7f82ff19b28a537b2097ed139b0552115fe3c3dd17cec6757f7a7882b" host="localhost" May 17 00:21:07.477828 containerd[1460]: 2025-05-17 00:21:07.427 [INFO][3965] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:21:07.477828 containerd[1460]: 2025-05-17 00:21:07.431 [INFO][3965] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:21:07.477828 containerd[1460]: 2025-05-17 00:21:07.433 [INFO][3965] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:21:07.477828 containerd[1460]: 2025-05-17 00:21:07.435 [INFO][3965] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:21:07.477828 containerd[1460]: 2025-05-17 00:21:07.435 [INFO][3965] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d8a944f7f82ff19b28a537b2097ed139b0552115fe3c3dd17cec6757f7a7882b" host="localhost" May 17 00:21:07.477828 containerd[1460]: 2025-05-17 00:21:07.436 [INFO][3965] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d8a944f7f82ff19b28a537b2097ed139b0552115fe3c3dd17cec6757f7a7882b May 17 00:21:07.477828 containerd[1460]: 2025-05-17 00:21:07.440 [INFO][3965] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d8a944f7f82ff19b28a537b2097ed139b0552115fe3c3dd17cec6757f7a7882b" host="localhost" May 17 00:21:07.477828 containerd[1460]: 2025-05-17 00:21:07.445 [INFO][3965] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d8a944f7f82ff19b28a537b2097ed139b0552115fe3c3dd17cec6757f7a7882b" host="localhost" May 17 00:21:07.477828 containerd[1460]: 2025-05-17 00:21:07.445 [INFO][3965] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d8a944f7f82ff19b28a537b2097ed139b0552115fe3c3dd17cec6757f7a7882b" host="localhost" May 17 00:21:07.477828 containerd[1460]: 2025-05-17 00:21:07.445 [INFO][3965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:21:07.477828 containerd[1460]: 2025-05-17 00:21:07.445 [INFO][3965] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d8a944f7f82ff19b28a537b2097ed139b0552115fe3c3dd17cec6757f7a7882b" HandleID="k8s-pod-network.d8a944f7f82ff19b28a537b2097ed139b0552115fe3c3dd17cec6757f7a7882b" Workload="localhost-k8s-whisker--54654dbf54--bv8kl-eth0" May 17 00:21:07.478378 containerd[1460]: 2025-05-17 00:21:07.448 [INFO][3951] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d8a944f7f82ff19b28a537b2097ed139b0552115fe3c3dd17cec6757f7a7882b" Namespace="calico-system" Pod="whisker-54654dbf54-bv8kl" WorkloadEndpoint="localhost-k8s-whisker--54654dbf54--bv8kl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--54654dbf54--bv8kl-eth0", GenerateName:"whisker-54654dbf54-", Namespace:"calico-system", SelfLink:"", UID:"2d7523c0-3762-409a-b3ff-19b0db89e578", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54654dbf54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-54654dbf54-bv8kl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calieeaf45eb3ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:07.478378 containerd[1460]: 2025-05-17 00:21:07.448 [INFO][3951] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d8a944f7f82ff19b28a537b2097ed139b0552115fe3c3dd17cec6757f7a7882b" Namespace="calico-system" Pod="whisker-54654dbf54-bv8kl" WorkloadEndpoint="localhost-k8s-whisker--54654dbf54--bv8kl-eth0" May 17 00:21:07.478378 containerd[1460]: 2025-05-17 00:21:07.448 [INFO][3951] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieeaf45eb3ae ContainerID="d8a944f7f82ff19b28a537b2097ed139b0552115fe3c3dd17cec6757f7a7882b" Namespace="calico-system" Pod="whisker-54654dbf54-bv8kl" WorkloadEndpoint="localhost-k8s-whisker--54654dbf54--bv8kl-eth0" May 17 00:21:07.478378 containerd[1460]: 2025-05-17 00:21:07.456 [INFO][3951] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8a944f7f82ff19b28a537b2097ed139b0552115fe3c3dd17cec6757f7a7882b" Namespace="calico-system" Pod="whisker-54654dbf54-bv8kl" WorkloadEndpoint="localhost-k8s-whisker--54654dbf54--bv8kl-eth0" May 17 00:21:07.478378 containerd[1460]: 2025-05-17 00:21:07.456 [INFO][3951] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d8a944f7f82ff19b28a537b2097ed139b0552115fe3c3dd17cec6757f7a7882b" Namespace="calico-system" Pod="whisker-54654dbf54-bv8kl" WorkloadEndpoint="localhost-k8s-whisker--54654dbf54--bv8kl-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--54654dbf54--bv8kl-eth0", GenerateName:"whisker-54654dbf54-", Namespace:"calico-system", SelfLink:"", UID:"2d7523c0-3762-409a-b3ff-19b0db89e578", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 21, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54654dbf54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d8a944f7f82ff19b28a537b2097ed139b0552115fe3c3dd17cec6757f7a7882b", Pod:"whisker-54654dbf54-bv8kl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calieeaf45eb3ae", MAC:"f2:73:f1:4b:93:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:07.478378 containerd[1460]: 2025-05-17 00:21:07.472 [INFO][3951] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d8a944f7f82ff19b28a537b2097ed139b0552115fe3c3dd17cec6757f7a7882b" Namespace="calico-system" Pod="whisker-54654dbf54-bv8kl" WorkloadEndpoint="localhost-k8s-whisker--54654dbf54--bv8kl-eth0" May 17 00:21:07.515134 containerd[1460]: time="2025-05-17T00:21:07.514940492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:07.515134 containerd[1460]: time="2025-05-17T00:21:07.515075976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:07.515134 containerd[1460]: time="2025-05-17T00:21:07.515089963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:07.515539 containerd[1460]: time="2025-05-17T00:21:07.515443326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:07.524587 systemd-networkd[1364]: vxlan.calico: Link UP May 17 00:21:07.524601 systemd-networkd[1364]: vxlan.calico: Gained carrier May 17 00:21:07.549291 systemd[1]: Started cri-containerd-d8a944f7f82ff19b28a537b2097ed139b0552115fe3c3dd17cec6757f7a7882b.scope - libcontainer container d8a944f7f82ff19b28a537b2097ed139b0552115fe3c3dd17cec6757f7a7882b. 
May 17 00:21:07.562224 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:21:07.593649 containerd[1460]: time="2025-05-17T00:21:07.593598148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54654dbf54-bv8kl,Uid:2d7523c0-3762-409a-b3ff-19b0db89e578,Namespace:calico-system,Attempt:0,} returns sandbox id \"d8a944f7f82ff19b28a537b2097ed139b0552115fe3c3dd17cec6757f7a7882b\"" May 17 00:21:07.595463 containerd[1460]: time="2025-05-17T00:21:07.595421882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:21:07.852033 containerd[1460]: time="2025-05-17T00:21:07.851968060Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:21:07.936290 containerd[1460]: time="2025-05-17T00:21:07.936129918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:21:07.936444 containerd[1460]: time="2025-05-17T00:21:07.936142933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:21:07.936628 kubelet[2491]: E0517 00:21:07.936564 2491 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:21:07.936757 kubelet[2491]: E0517 00:21:07.936623 2491 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:21:07.937824 kubelet[2491]: E0517 00:21:07.937744 2491 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9a6df141da824040bbade8336e585a48,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5w59d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54654dbf54-bv8kl_calico-system(2d7523c0-3762-409a-b3ff-19b0db89e578): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:21:07.940053 containerd[1460]: time="2025-05-17T00:21:07.940006896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:21:08.161456 containerd[1460]: time="2025-05-17T00:21:08.161308769Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:21:08.223217 containerd[1460]: time="2025-05-17T00:21:08.223161423Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:21:08.223359 containerd[1460]: time="2025-05-17T00:21:08.223196409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:21:08.223521 kubelet[2491]: E0517 00:21:08.223470 2491 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:21:08.229997 kubelet[2491]: E0517 00:21:08.223529 2491 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:21:08.230043 kubelet[2491]: E0517 00:21:08.223638 2491 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5w59d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54654dbf54-bv8kl_calico-system(2d7523c0-3762-409a-b3ff-19b0db89e578): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:21:08.230043 kubelet[2491]: E0517 00:21:08.224826 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-54654dbf54-bv8kl" podUID="2d7523c0-3762-409a-b3ff-19b0db89e578" May 17 00:21:08.234929 systemd[1]: Started sshd@8-10.0.0.98:22-10.0.0.1:37686.service - OpenSSH per-connection server daemon (10.0.0.1:37686). May 17 00:21:08.236814 kubelet[2491]: I0517 00:21:08.236786 2491 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27d62ebe-2b95-481d-a9b3-1fc4f1115cb0" path="/var/lib/kubelet/pods/27d62ebe-2b95-481d-a9b3-1fc4f1115cb0/volumes" May 17 00:21:08.279636 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 37686 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:21:08.282192 sshd[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:08.286711 systemd-logind[1446]: New session 9 of user core. May 17 00:21:08.299019 systemd[1]: Started session-9.scope - Session 9 of User core. May 17 00:21:08.420337 sshd[4119]: pam_unix(sshd:session): session closed for user core May 17 00:21:08.424940 systemd[1]: sshd@8-10.0.0.98:22-10.0.0.1:37686.service: Deactivated successfully. May 17 00:21:08.427175 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:21:08.428014 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit. May 17 00:21:08.429031 systemd-logind[1446]: Removed session 9. May 17 00:21:08.620970 systemd-networkd[1364]: vxlan.calico: Gained IPv6LL May 17 00:21:08.745261 kubelet[2491]: E0517 00:21:08.744687 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-54654dbf54-bv8kl" podUID="2d7523c0-3762-409a-b3ff-19b0db89e578" May 17 00:21:08.941927 systemd-networkd[1364]: calieeaf45eb3ae: Gained IPv6LL May 17 00:21:09.234548 containerd[1460]: time="2025-05-17T00:21:09.234481588Z" level=info msg="StopPodSandbox for \"21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13\"" May 17 00:21:09.235030 containerd[1460]: time="2025-05-17T00:21:09.234481558Z" level=info msg="StopPodSandbox for \"8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824\"" May 17 00:21:09.314890 containerd[1460]: 2025-05-17 00:21:09.276 [INFO][4159] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" May 17 00:21:09.314890 containerd[1460]: 2025-05-17 00:21:09.276 [INFO][4159] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" iface="eth0" netns="/var/run/netns/cni-90f8e346-2e41-8113-e461-fd7e8cc1a7eb" May 17 00:21:09.314890 containerd[1460]: 2025-05-17 00:21:09.276 [INFO][4159] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" iface="eth0" netns="/var/run/netns/cni-90f8e346-2e41-8113-e461-fd7e8cc1a7eb" May 17 00:21:09.314890 containerd[1460]: 2025-05-17 00:21:09.276 [INFO][4159] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" iface="eth0" netns="/var/run/netns/cni-90f8e346-2e41-8113-e461-fd7e8cc1a7eb" May 17 00:21:09.314890 containerd[1460]: 2025-05-17 00:21:09.276 [INFO][4159] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" May 17 00:21:09.314890 containerd[1460]: 2025-05-17 00:21:09.276 [INFO][4159] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" May 17 00:21:09.314890 containerd[1460]: 2025-05-17 00:21:09.299 [INFO][4176] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" HandleID="k8s-pod-network.21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" Workload="localhost-k8s-goldmane--8f77d7b6c--qqvpp-eth0" May 17 00:21:09.314890 containerd[1460]: 2025-05-17 00:21:09.299 [INFO][4176] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:09.314890 containerd[1460]: 2025-05-17 00:21:09.299 [INFO][4176] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:09.314890 containerd[1460]: 2025-05-17 00:21:09.305 [WARNING][4176] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" HandleID="k8s-pod-network.21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" Workload="localhost-k8s-goldmane--8f77d7b6c--qqvpp-eth0" May 17 00:21:09.314890 containerd[1460]: 2025-05-17 00:21:09.306 [INFO][4176] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" HandleID="k8s-pod-network.21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" Workload="localhost-k8s-goldmane--8f77d7b6c--qqvpp-eth0" May 17 00:21:09.314890 containerd[1460]: 2025-05-17 00:21:09.308 [INFO][4176] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:09.314890 containerd[1460]: 2025-05-17 00:21:09.310 [INFO][4159] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" May 17 00:21:09.318846 containerd[1460]: time="2025-05-17T00:21:09.315290979Z" level=info msg="TearDown network for sandbox \"21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13\" successfully" May 17 00:21:09.318846 containerd[1460]: time="2025-05-17T00:21:09.315325594Z" level=info msg="StopPodSandbox for \"21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13\" returns successfully" May 17 00:21:09.318846 containerd[1460]: time="2025-05-17T00:21:09.318241847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-qqvpp,Uid:cb836777-e67d-4d21-a5e7-16ba9fc2ef39,Namespace:calico-system,Attempt:1,}" May 17 00:21:09.317716 systemd[1]: run-netns-cni\x2d90f8e346\x2d2e41\x2d8113\x2de461\x2dfd7e8cc1a7eb.mount: Deactivated successfully. May 17 00:21:09.329125 containerd[1460]: 2025-05-17 00:21:09.283 [INFO][4160] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" May 17 00:21:09.329125 containerd[1460]: 2025-05-17 00:21:09.284 [INFO][4160] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" iface="eth0" netns="/var/run/netns/cni-52b3c347-ddd9-6ed9-ce06-aef9b0728a4c" May 17 00:21:09.329125 containerd[1460]: 2025-05-17 00:21:09.284 [INFO][4160] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" iface="eth0" netns="/var/run/netns/cni-52b3c347-ddd9-6ed9-ce06-aef9b0728a4c" May 17 00:21:09.329125 containerd[1460]: 2025-05-17 00:21:09.284 [INFO][4160] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" iface="eth0" netns="/var/run/netns/cni-52b3c347-ddd9-6ed9-ce06-aef9b0728a4c" May 17 00:21:09.329125 containerd[1460]: 2025-05-17 00:21:09.284 [INFO][4160] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" May 17 00:21:09.329125 containerd[1460]: 2025-05-17 00:21:09.285 [INFO][4160] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" May 17 00:21:09.329125 containerd[1460]: 2025-05-17 00:21:09.314 [INFO][4182] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" HandleID="k8s-pod-network.8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" Workload="localhost-k8s-coredns--7c65d6cfc9--f92hb-eth0" May 17 00:21:09.329125 containerd[1460]: 2025-05-17 00:21:09.314 [INFO][4182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:09.329125 containerd[1460]: 2025-05-17 00:21:09.314 [INFO][4182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:09.329125 containerd[1460]: 2025-05-17 00:21:09.320 [WARNING][4182] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" HandleID="k8s-pod-network.8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" Workload="localhost-k8s-coredns--7c65d6cfc9--f92hb-eth0" May 17 00:21:09.329125 containerd[1460]: 2025-05-17 00:21:09.320 [INFO][4182] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" HandleID="k8s-pod-network.8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" Workload="localhost-k8s-coredns--7c65d6cfc9--f92hb-eth0" May 17 00:21:09.329125 containerd[1460]: 2025-05-17 00:21:09.322 [INFO][4182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:09.329125 containerd[1460]: 2025-05-17 00:21:09.326 [INFO][4160] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" May 17 00:21:09.333473 containerd[1460]: time="2025-05-17T00:21:09.333410930Z" level=info msg="TearDown network for sandbox \"8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824\" successfully" May 17 00:21:09.333473 containerd[1460]: time="2025-05-17T00:21:09.333457147Z" level=info msg="StopPodSandbox for \"8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824\" returns successfully" May 17 00:21:09.333936 kubelet[2491]: E0517 00:21:09.333898 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:21:09.334960 containerd[1460]: time="2025-05-17T00:21:09.334736278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f92hb,Uid:8b2899f4-79bc-4fef-b1ad-48d139bf5859,Namespace:kube-system,Attempt:1,}" May 17 00:21:09.340382 systemd[1]: run-netns-cni\x2d52b3c347\x2dddd9\x2d6ed9\x2dce06\x2daef9b0728a4c.mount: Deactivated successfully. 
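[Annotation] The whisker and whisker-backend pulls above both die at the same step: the anonymous bearer-token fetch against ghcr.io returns 403 Forbidden before any image data is transferred. To separate a registry-side denial from a kubelet or containerd problem, the token request can be replayed outside the runtime. A minimal sketch, standard library only, with the URL copied verbatim from the log (the file name token_probe.go is hypothetical):

// token_probe.go: replays the anonymous-token fetch that containerd
// reported as "403 Forbidden" for the whisker repository.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	url := "https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io"
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A 403 here means ghcr.io refuses even an anonymous pull token for
	// the repository, which matches what containerd logged above.
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}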
May 17 00:21:09.451109 systemd-networkd[1364]: caliaf2dcb77d97: Link UP May 17 00:21:09.451332 systemd-networkd[1364]: caliaf2dcb77d97: Gained carrier May 17 00:21:09.468313 containerd[1460]: 2025-05-17 00:21:09.377 [INFO][4198] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--8f77d7b6c--qqvpp-eth0 goldmane-8f77d7b6c- calico-system cb836777-e67d-4d21-a5e7-16ba9fc2ef39 1039 0 2025-05-17 00:20:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-8f77d7b6c-qqvpp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliaf2dcb77d97 [] [] }} ContainerID="1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa" Namespace="calico-system" Pod="goldmane-8f77d7b6c-qqvpp" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--qqvpp-" May 17 00:21:09.468313 containerd[1460]: 2025-05-17 00:21:09.378 [INFO][4198] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa" Namespace="calico-system" Pod="goldmane-8f77d7b6c-qqvpp" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--qqvpp-eth0" May 17 00:21:09.468313 containerd[1460]: 2025-05-17 00:21:09.411 [INFO][4221] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa" HandleID="k8s-pod-network.1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa" Workload="localhost-k8s-goldmane--8f77d7b6c--qqvpp-eth0" May 17 00:21:09.468313 containerd[1460]: 2025-05-17 00:21:09.412 [INFO][4221] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa" HandleID="k8s-pod-network.1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa" Workload="localhost-k8s-goldmane--8f77d7b6c--qqvpp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ef60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-8f77d7b6c-qqvpp", "timestamp":"2025-05-17 00:21:09.411972662 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:21:09.468313 containerd[1460]: 2025-05-17 00:21:09.412 [INFO][4221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:09.468313 containerd[1460]: 2025-05-17 00:21:09.412 [INFO][4221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:21:09.468313 containerd[1460]: 2025-05-17 00:21:09.412 [INFO][4221] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:21:09.468313 containerd[1460]: 2025-05-17 00:21:09.418 [INFO][4221] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa" host="localhost" May 17 00:21:09.468313 containerd[1460]: 2025-05-17 00:21:09.425 [INFO][4221] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:21:09.468313 containerd[1460]: 2025-05-17 00:21:09.430 [INFO][4221] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:21:09.468313 containerd[1460]: 2025-05-17 00:21:09.432 [INFO][4221] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:21:09.468313 containerd[1460]: 2025-05-17 00:21:09.436 [INFO][4221] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:21:09.468313 containerd[1460]: 2025-05-17 00:21:09.436 [INFO][4221] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa" host="localhost" May 17 00:21:09.468313 containerd[1460]: 2025-05-17 00:21:09.437 [INFO][4221] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa May 17 00:21:09.468313 containerd[1460]: 2025-05-17 00:21:09.441 [INFO][4221] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa" host="localhost" May 17 00:21:09.468313 containerd[1460]: 2025-05-17 00:21:09.445 [INFO][4221] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa" host="localhost" May 17 00:21:09.468313 containerd[1460]: 2025-05-17 00:21:09.445 [INFO][4221] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa" host="localhost" May 17 00:21:09.468313 containerd[1460]: 2025-05-17 00:21:09.445 [INFO][4221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
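[Annotation] The ipam entries above show the full allocation protocol for the goldmane pod: acquire the host-wide IPAM lock, look up the host's block affinities, load the affine block 192.168.88.128/26, claim one address (192.168.88.130) from it, write the block back, and release the lock. A sketch of the containment invariant behind "Trying affinity for 192.168.88.128/26", using only the standard library (this is an illustration, not Calico's actual code):

// block_check.go: any IP the host assigns must fall inside the /26
// block it holds an affinity for; Go 1.18+ (net/netip).
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	ip := netip.MustParseAddr("192.168.88.130") // address claimed in the log
	fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip))
}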
May 17 00:21:09.468313 containerd[1460]: 2025-05-17 00:21:09.445 [INFO][4221] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa" HandleID="k8s-pod-network.1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa" Workload="localhost-k8s-goldmane--8f77d7b6c--qqvpp-eth0" May 17 00:21:09.468903 containerd[1460]: 2025-05-17 00:21:09.448 [INFO][4198] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa" Namespace="calico-system" Pod="goldmane-8f77d7b6c-qqvpp" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--qqvpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--qqvpp-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"cb836777-e67d-4d21-a5e7-16ba9fc2ef39", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-8f77d7b6c-qqvpp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliaf2dcb77d97", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:09.468903 containerd[1460]: 2025-05-17 00:21:09.448 [INFO][4198] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa" Namespace="calico-system" Pod="goldmane-8f77d7b6c-qqvpp" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--qqvpp-eth0" May 17 00:21:09.468903 containerd[1460]: 2025-05-17 00:21:09.448 [INFO][4198] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaf2dcb77d97 ContainerID="1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa" Namespace="calico-system" Pod="goldmane-8f77d7b6c-qqvpp" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--qqvpp-eth0" May 17 00:21:09.468903 containerd[1460]: 2025-05-17 00:21:09.451 [INFO][4198] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa" Namespace="calico-system" Pod="goldmane-8f77d7b6c-qqvpp" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--qqvpp-eth0" May 17 00:21:09.468903 containerd[1460]: 2025-05-17 00:21:09.454 [INFO][4198] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa" Namespace="calico-system" Pod="goldmane-8f77d7b6c-qqvpp" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--qqvpp-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--qqvpp-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"cb836777-e67d-4d21-a5e7-16ba9fc2ef39", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa", Pod:"goldmane-8f77d7b6c-qqvpp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliaf2dcb77d97", MAC:"d6:10:92:80:9c:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:09.468903 containerd[1460]: 2025-05-17 00:21:09.465 [INFO][4198] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa" Namespace="calico-system" Pod="goldmane-8f77d7b6c-qqvpp" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--qqvpp-eth0" May 17 00:21:09.487580 containerd[1460]: time="2025-05-17T00:21:09.487385133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:09.487580 containerd[1460]: time="2025-05-17T00:21:09.487451548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:09.487580 containerd[1460]: time="2025-05-17T00:21:09.487466276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:09.487823 containerd[1460]: time="2025-05-17T00:21:09.487578797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:09.508939 systemd[1]: Started cri-containerd-1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa.scope - libcontainer container 1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa. 
May 17 00:21:09.522192 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:21:09.548634 containerd[1460]: time="2025-05-17T00:21:09.548583274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-qqvpp,Uid:cb836777-e67d-4d21-a5e7-16ba9fc2ef39,Namespace:calico-system,Attempt:1,} returns sandbox id \"1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa\"" May 17 00:21:09.552100 containerd[1460]: time="2025-05-17T00:21:09.552033499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:21:09.553171 systemd-networkd[1364]: cali95fbd00af49: Link UP May 17 00:21:09.553585 systemd-networkd[1364]: cali95fbd00af49: Gained carrier May 17 00:21:09.567312 containerd[1460]: 2025-05-17 00:21:09.390 [INFO][4203] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--f92hb-eth0 coredns-7c65d6cfc9- kube-system 8b2899f4-79bc-4fef-b1ad-48d139bf5859 1040 0 2025-05-17 00:20:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-f92hb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali95fbd00af49 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f92hb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f92hb-" May 17 00:21:09.567312 containerd[1460]: 2025-05-17 00:21:09.390 [INFO][4203] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f92hb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f92hb-eth0" May 17 00:21:09.567312 containerd[1460]: 2025-05-17 00:21:09.422 [INFO][4227] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0" HandleID="k8s-pod-network.71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0" Workload="localhost-k8s-coredns--7c65d6cfc9--f92hb-eth0" May 17 00:21:09.567312 containerd[1460]: 2025-05-17 00:21:09.422 [INFO][4227] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0" HandleID="k8s-pod-network.71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0" Workload="localhost-k8s-coredns--7c65d6cfc9--f92hb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000516150), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-f92hb", "timestamp":"2025-05-17 00:21:09.422412759 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:21:09.567312 containerd[1460]: 2025-05-17 00:21:09.422 [INFO][4227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:09.567312 containerd[1460]: 2025-05-17 00:21:09.445 [INFO][4227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:21:09.567312 containerd[1460]: 2025-05-17 00:21:09.445 [INFO][4227] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:21:09.567312 containerd[1460]: 2025-05-17 00:21:09.519 [INFO][4227] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0" host="localhost" May 17 00:21:09.567312 containerd[1460]: 2025-05-17 00:21:09.526 [INFO][4227] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:21:09.567312 containerd[1460]: 2025-05-17 00:21:09.530 [INFO][4227] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:21:09.567312 containerd[1460]: 2025-05-17 00:21:09.532 [INFO][4227] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:21:09.567312 containerd[1460]: 2025-05-17 00:21:09.534 [INFO][4227] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:21:09.567312 containerd[1460]: 2025-05-17 00:21:09.534 [INFO][4227] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0" host="localhost" May 17 00:21:09.567312 containerd[1460]: 2025-05-17 00:21:09.535 [INFO][4227] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0 May 17 00:21:09.567312 containerd[1460]: 2025-05-17 00:21:09.539 [INFO][4227] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0" host="localhost" May 17 00:21:09.567312 containerd[1460]: 2025-05-17 00:21:09.546 [INFO][4227] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0" host="localhost" May 17 00:21:09.567312 containerd[1460]: 2025-05-17 00:21:09.546 [INFO][4227] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0" host="localhost" May 17 00:21:09.567312 containerd[1460]: 2025-05-17 00:21:09.546 [INFO][4227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
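[Annotation] The coredns sandbox goes through the identical locked IPAM flow and claims the next address in the same block. The WorkloadEndpoint dump that follows prints its port list in hex; this sketch simply decodes those values to confirm they are the standard CoreDNS ports (53/UDP, 53/TCP, and 9153/TCP for metrics):

// ports_decode.go: decodes the hex port values from the endpoint dump
// below (0x35 and 0x23c1).
package main

import "fmt"

func main() {
	ports := map[string]uint16{"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
	for name, port := range ports {
		fmt.Printf("%s: %d\n", name, port)
	}
}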
May 17 00:21:09.567312 containerd[1460]: 2025-05-17 00:21:09.546 [INFO][4227] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0" HandleID="k8s-pod-network.71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0" Workload="localhost-k8s-coredns--7c65d6cfc9--f92hb-eth0" May 17 00:21:09.568162 containerd[1460]: 2025-05-17 00:21:09.549 [INFO][4203] cni-plugin/k8s.go 418: Populated endpoint ContainerID="71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f92hb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f92hb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--f92hb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8b2899f4-79bc-4fef-b1ad-48d139bf5859", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-f92hb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95fbd00af49", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:09.568162 containerd[1460]: 2025-05-17 00:21:09.549 [INFO][4203] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f92hb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f92hb-eth0" May 17 00:21:09.568162 containerd[1460]: 2025-05-17 00:21:09.550 [INFO][4203] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95fbd00af49 ContainerID="71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f92hb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f92hb-eth0" May 17 00:21:09.568162 containerd[1460]: 2025-05-17 00:21:09.554 [INFO][4203] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f92hb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f92hb-eth0" May 17 00:21:09.568162 
containerd[1460]: 2025-05-17 00:21:09.555 [INFO][4203] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f92hb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f92hb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--f92hb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8b2899f4-79bc-4fef-b1ad-48d139bf5859", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0", Pod:"coredns-7c65d6cfc9-f92hb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95fbd00af49", MAC:"e6:e7:b2:29:66:a4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:09.568162 containerd[1460]: 2025-05-17 00:21:09.562 [INFO][4203] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-f92hb" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--f92hb-eth0" May 17 00:21:09.589544 containerd[1460]: time="2025-05-17T00:21:09.589228125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:09.589544 containerd[1460]: time="2025-05-17T00:21:09.589306111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:09.589544 containerd[1460]: time="2025-05-17T00:21:09.589317482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:09.589544 containerd[1460]: time="2025-05-17T00:21:09.589412571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:09.611075 systemd[1]: Started cri-containerd-71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0.scope - libcontainer container 71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0. May 17 00:21:09.626184 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:21:09.650194 containerd[1460]: time="2025-05-17T00:21:09.650157209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f92hb,Uid:8b2899f4-79bc-4fef-b1ad-48d139bf5859,Namespace:kube-system,Attempt:1,} returns sandbox id \"71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0\"" May 17 00:21:09.650877 kubelet[2491]: E0517 00:21:09.650843 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:21:09.652597 containerd[1460]: time="2025-05-17T00:21:09.652503012Z" level=info msg="CreateContainer within sandbox \"71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:21:09.670117 containerd[1460]: time="2025-05-17T00:21:09.670055999Z" level=info msg="CreateContainer within sandbox \"71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f9e066626421e79169568e895048253a1677e3d826dab0ac2a0442fceaf5370e\"" May 17 00:21:09.670660 containerd[1460]: time="2025-05-17T00:21:09.670612324Z" level=info msg="StartContainer for \"f9e066626421e79169568e895048253a1677e3d826dab0ac2a0442fceaf5370e\"" May 17 00:21:09.699248 systemd[1]: Started cri-containerd-f9e066626421e79169568e895048253a1677e3d826dab0ac2a0442fceaf5370e.scope - libcontainer container f9e066626421e79169568e895048253a1677e3d826dab0ac2a0442fceaf5370e. 
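[Annotation] The recurring kubelet dns.go warnings (one above, another just below) fire because the node's resolv.conf lists more nameservers than the classic resolver limit of three, so the extras are dropped and the applied line keeps "1.1.1.1 1.0.0.1 8.8.8.8". An illustrative sketch of that trimming, assuming /etc/resolv.conf is readable (this is not kubelet code, just the same rule applied by hand):

// trim_ns.go: keeps at most three nameserver entries, mirroring the
// limit behind "Nameserver limits exceeded".
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	var ns []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			ns = append(ns, fields[1])
		}
	}
	if len(ns) > 3 {
		fmt.Println("dropping extras:", ns[3:])
		ns = ns[:3]
	}
	fmt.Println("applied:", strings.Join(ns, " "))
}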
May 17 00:21:09.727964 containerd[1460]: time="2025-05-17T00:21:09.727923027Z" level=info msg="StartContainer for \"f9e066626421e79169568e895048253a1677e3d826dab0ac2a0442fceaf5370e\" returns successfully" May 17 00:21:09.749894 kubelet[2491]: E0517 00:21:09.749732 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:21:09.761277 kubelet[2491]: I0517 00:21:09.760761 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-f92hb" podStartSLOduration=35.76074249 podStartE2EDuration="35.76074249s" podCreationTimestamp="2025-05-17 00:20:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:21:09.760554167 +0000 UTC m=+41.613202636" watchObservedRunningTime="2025-05-17 00:21:09.76074249 +0000 UTC m=+41.613390959" May 17 00:21:09.779331 containerd[1460]: time="2025-05-17T00:21:09.779288701Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:21:09.780532 containerd[1460]: time="2025-05-17T00:21:09.780368268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:21:09.780532 containerd[1460]: time="2025-05-17T00:21:09.780466112Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:21:09.780707 kubelet[2491]: E0517 00:21:09.780598 2491 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:21:09.780707 kubelet[2491]: E0517 00:21:09.780648 2491 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:21:09.780851 kubelet[2491]: E0517 00:21:09.780799 2491 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66zkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-qqvpp_calico-system(cb836777-e67d-4d21-a5e7-16ba9fc2ef39): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:21:09.782001 kubelet[2491]: E0517 00:21:09.781963 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-qqvpp" podUID="cb836777-e67d-4d21-a5e7-16ba9fc2ef39" May 17 00:21:10.234277 containerd[1460]: time="2025-05-17T00:21:10.234203406Z" level=info msg="StopPodSandbox for \"ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486\"" May 17 00:21:10.307895 containerd[1460]: 2025-05-17 00:21:10.273 [INFO][4393] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" May 17 00:21:10.307895 containerd[1460]: 2025-05-17 00:21:10.274 [INFO][4393] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" iface="eth0" netns="/var/run/netns/cni-9a1acfa8-feef-bed0-11f5-1770b99f6bad" May 17 00:21:10.307895 containerd[1460]: 2025-05-17 00:21:10.274 [INFO][4393] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" iface="eth0" netns="/var/run/netns/cni-9a1acfa8-feef-bed0-11f5-1770b99f6bad" May 17 00:21:10.307895 containerd[1460]: 2025-05-17 00:21:10.274 [INFO][4393] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" iface="eth0" netns="/var/run/netns/cni-9a1acfa8-feef-bed0-11f5-1770b99f6bad" May 17 00:21:10.307895 containerd[1460]: 2025-05-17 00:21:10.274 [INFO][4393] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" May 17 00:21:10.307895 containerd[1460]: 2025-05-17 00:21:10.274 [INFO][4393] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" May 17 00:21:10.307895 containerd[1460]: 2025-05-17 00:21:10.295 [INFO][4402] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" HandleID="k8s-pod-network.ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" Workload="localhost-k8s-calico--apiserver--dbc85d568--vmlk5-eth0" May 17 00:21:10.307895 containerd[1460]: 2025-05-17 00:21:10.295 [INFO][4402] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:10.307895 containerd[1460]: 2025-05-17 00:21:10.295 [INFO][4402] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:10.307895 containerd[1460]: 2025-05-17 00:21:10.300 [WARNING][4402] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" HandleID="k8s-pod-network.ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" Workload="localhost-k8s-calico--apiserver--dbc85d568--vmlk5-eth0" May 17 00:21:10.307895 containerd[1460]: 2025-05-17 00:21:10.300 [INFO][4402] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" HandleID="k8s-pod-network.ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" Workload="localhost-k8s-calico--apiserver--dbc85d568--vmlk5-eth0" May 17 00:21:10.307895 containerd[1460]: 2025-05-17 00:21:10.302 [INFO][4402] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
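[Annotation] The goldmane pull above fails exactly like whisker and whisker-backend, so kubelet moves the pod from ErrImagePull into ImagePullBackOff and retries with exponentially growing delays. A sketch of that spacing, assuming the commonly cited kubelet image-pull defaults of a 10s initial delay doubling to a 5m cap (the real constants live in the kubelet and should be treated as an assumption here):

// backoff.go: illustrative retry spacing behind ImagePullBackOff,
// assuming 10s initial delay, doubling, capped at 5 minutes.
package main

import (
	"fmt"
	"time"
)

func main() {
	d := 10 * time.Second
	for i := 1; i <= 7; i++ {
		fmt.Printf("retry %d after %v\n", i, d)
		d *= 2
		if d > 5*time.Minute {
			d = 5 * time.Minute
		}
	}
}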
May 17 00:21:10.307895 containerd[1460]: 2025-05-17 00:21:10.305 [INFO][4393] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" May 17 00:21:10.308948 containerd[1460]: time="2025-05-17T00:21:10.308903494Z" level=info msg="TearDown network for sandbox \"ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486\" successfully" May 17 00:21:10.308948 containerd[1460]: time="2025-05-17T00:21:10.308942577Z" level=info msg="StopPodSandbox for \"ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486\" returns successfully" May 17 00:21:10.309645 containerd[1460]: time="2025-05-17T00:21:10.309620530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dbc85d568-vmlk5,Uid:592a7817-1a54-43e6-91e2-b61a4e065de1,Namespace:calico-apiserver,Attempt:1,}" May 17 00:21:10.320146 systemd[1]: run-netns-cni\x2d9a1acfa8\x2dfeef\x2dbed0\x2d11f5\x2d1770b99f6bad.mount: Deactivated successfully. May 17 00:21:10.415420 systemd-networkd[1364]: cali845c7e16501: Link UP May 17 00:21:10.415626 systemd-networkd[1364]: cali845c7e16501: Gained carrier May 17 00:21:10.429459 containerd[1460]: 2025-05-17 00:21:10.358 [INFO][4412] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--dbc85d568--vmlk5-eth0 calico-apiserver-dbc85d568- calico-apiserver 592a7817-1a54-43e6-91e2-b61a4e065de1 1062 0 2025-05-17 00:20:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dbc85d568 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-dbc85d568-vmlk5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali845c7e16501 [] [] }} ContainerID="6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833" Namespace="calico-apiserver" Pod="calico-apiserver-dbc85d568-vmlk5" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbc85d568--vmlk5-" May 17 00:21:10.429459 containerd[1460]: 2025-05-17 00:21:10.358 [INFO][4412] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833" Namespace="calico-apiserver" Pod="calico-apiserver-dbc85d568-vmlk5" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbc85d568--vmlk5-eth0" May 17 00:21:10.429459 containerd[1460]: 2025-05-17 00:21:10.383 [INFO][4426] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833" HandleID="k8s-pod-network.6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833" Workload="localhost-k8s-calico--apiserver--dbc85d568--vmlk5-eth0" May 17 00:21:10.429459 containerd[1460]: 2025-05-17 00:21:10.383 [INFO][4426] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833" HandleID="k8s-pod-network.6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833" Workload="localhost-k8s-calico--apiserver--dbc85d568--vmlk5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139510), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-dbc85d568-vmlk5", "timestamp":"2025-05-17 00:21:10.383411481 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:21:10.429459 containerd[1460]: 2025-05-17 00:21:10.383 [INFO][4426] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:10.429459 containerd[1460]: 2025-05-17 00:21:10.383 [INFO][4426] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:10.429459 containerd[1460]: 2025-05-17 00:21:10.383 [INFO][4426] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:21:10.429459 containerd[1460]: 2025-05-17 00:21:10.389 [INFO][4426] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833" host="localhost" May 17 00:21:10.429459 containerd[1460]: 2025-05-17 00:21:10.393 [INFO][4426] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:21:10.429459 containerd[1460]: 2025-05-17 00:21:10.397 [INFO][4426] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:21:10.429459 containerd[1460]: 2025-05-17 00:21:10.398 [INFO][4426] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:21:10.429459 containerd[1460]: 2025-05-17 00:21:10.400 [INFO][4426] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:21:10.429459 containerd[1460]: 2025-05-17 00:21:10.400 [INFO][4426] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833" host="localhost" May 17 00:21:10.429459 containerd[1460]: 2025-05-17 00:21:10.401 [INFO][4426] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833 May 17 00:21:10.429459 containerd[1460]: 2025-05-17 00:21:10.405 [INFO][4426] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833" host="localhost" May 17 00:21:10.429459 containerd[1460]: 2025-05-17 00:21:10.409 [INFO][4426] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833" host="localhost" May 17 00:21:10.429459 containerd[1460]: 2025-05-17 00:21:10.409 [INFO][4426] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833" host="localhost" May 17 00:21:10.429459 containerd[1460]: 2025-05-17 00:21:10.409 [INFO][4426] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
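[Editor's note] The [INFO][4426] trace above is one complete Calico IPAM transaction: take the host-wide lock, look up this host's block affinity, load block 192.168.88.128/26, claim the next free address (.132) under the handle k8s-pod-network.<containerID>, and write the block back before releasing the lock. Below is a minimal sketch of the claim step against a simplified in-memory block; Block and claimNext are illustrative names, not Calico's actual API, and the premise that .128-.131 were claimed by earlier pods is inferred only from .132 being handed out here.

```go
package main

import (
	"fmt"
	"net"
)

// Block is a hypothetical stand-in for a Calico IPAM allocation block:
// a /26 affine to one host, with a per-IP record of who holds what.
type Block struct {
	cidr      *net.IPNet
	allocated map[string]string // IP -> handle, e.g. "k8s-pod-network.<containerID>"
}

// claimNext mirrors the "Attempting to assign 1 addresses from block"
// step: walk the block in order and hand out the first free address.
func (b *Block) claimNext(handle string) (net.IP, error) {
	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = next(ip) {
		if _, taken := b.allocated[ip.String()]; !taken {
			b.allocated[ip.String()] = handle // made durable by "Writing block in order to claim IPs"
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

// next returns ip+1 without mutating its argument.
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		if out[i]++; out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	b := &Block{cidr: cidr, allocated: map[string]string{
		// assumed claimed by earlier pods, inferred from .132 being next
		"192.168.88.128": "x", "192.168.88.129": "x",
		"192.168.88.130": "x", "192.168.88.131": "x",
	}}
	ip, _ := b.claimNext("k8s-pod-network.6b7ea436...")
	fmt.Println(ip) // 192.168.88.132, matching the claim logged above
}
```

Persisting the updated block while still holding the lock (the "Writing block in order to claim IPs" line) is what keeps two concurrent ADDs from claiming the same address.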
May 17 00:21:10.429459 containerd[1460]: 2025-05-17 00:21:10.409 [INFO][4426] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833" HandleID="k8s-pod-network.6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833" Workload="localhost-k8s-calico--apiserver--dbc85d568--vmlk5-eth0" May 17 00:21:10.430401 containerd[1460]: 2025-05-17 00:21:10.413 [INFO][4412] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833" Namespace="calico-apiserver" Pod="calico-apiserver-dbc85d568-vmlk5" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbc85d568--vmlk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dbc85d568--vmlk5-eth0", GenerateName:"calico-apiserver-dbc85d568-", Namespace:"calico-apiserver", SelfLink:"", UID:"592a7817-1a54-43e6-91e2-b61a4e065de1", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dbc85d568", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-dbc85d568-vmlk5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali845c7e16501", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:10.430401 containerd[1460]: 2025-05-17 00:21:10.413 [INFO][4412] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833" Namespace="calico-apiserver" Pod="calico-apiserver-dbc85d568-vmlk5" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbc85d568--vmlk5-eth0" May 17 00:21:10.430401 containerd[1460]: 2025-05-17 00:21:10.413 [INFO][4412] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali845c7e16501 ContainerID="6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833" Namespace="calico-apiserver" Pod="calico-apiserver-dbc85d568-vmlk5" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbc85d568--vmlk5-eth0" May 17 00:21:10.430401 containerd[1460]: 2025-05-17 00:21:10.415 [INFO][4412] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833" Namespace="calico-apiserver" Pod="calico-apiserver-dbc85d568-vmlk5" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbc85d568--vmlk5-eth0" May 17 00:21:10.430401 containerd[1460]: 2025-05-17 00:21:10.416 [INFO][4412] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833" Namespace="calico-apiserver" Pod="calico-apiserver-dbc85d568-vmlk5" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbc85d568--vmlk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dbc85d568--vmlk5-eth0", GenerateName:"calico-apiserver-dbc85d568-", Namespace:"calico-apiserver", SelfLink:"", UID:"592a7817-1a54-43e6-91e2-b61a4e065de1", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dbc85d568", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833", Pod:"calico-apiserver-dbc85d568-vmlk5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali845c7e16501", MAC:"c2:71:78:41:b2:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:10.430401 containerd[1460]: 2025-05-17 00:21:10.426 [INFO][4412] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833" Namespace="calico-apiserver" Pod="calico-apiserver-dbc85d568-vmlk5" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbc85d568--vmlk5-eth0" May 17 00:21:10.452207 containerd[1460]: time="2025-05-17T00:21:10.452111727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:10.452207 containerd[1460]: time="2025-05-17T00:21:10.452159767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:10.452207 containerd[1460]: time="2025-05-17T00:21:10.452170677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:10.452438 containerd[1460]: time="2025-05-17T00:21:10.452253282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:10.478902 systemd[1]: Started cri-containerd-6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833.scope - libcontainer container 6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833. 
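[Editor's note] The "Setting the host side veth name to cali845c7e16501" line shows the naming scheme for the host end of the pod's veth pair: "cali" plus 11 hex characters, 15 bytes total, which is the Linux IFNAMSIZ ceiling for interface names. A sketch of deriving such a name follows, assuming the hash input is the namespace/pod identifier; what Calico actually feeds the hash is an implementation detail, so treat caliName as illustrative only.

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// caliName sketches how a stable host-side veth name can be derived:
// hash a workload identifier and keep "cali" plus 11 hex characters,
// 15 bytes in total, the IFNAMSIZ limit for Linux interface names.
// Hashing the namespace/pod pair is an assumption for illustration.
func caliName(workloadID string) string {
	sum := sha1.Sum([]byte(workloadID))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(caliName("calico-apiserver/calico-apiserver-dbc85d568-vmlk5"))
}
```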
May 17 00:21:10.491465 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:21:10.520067 containerd[1460]: time="2025-05-17T00:21:10.520017723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dbc85d568-vmlk5,Uid:592a7817-1a54-43e6-91e2-b61a4e065de1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833\"" May 17 00:21:10.521395 containerd[1460]: time="2025-05-17T00:21:10.521368097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:21:10.753868 kubelet[2491]: E0517 00:21:10.753365 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:21:10.753868 kubelet[2491]: E0517 00:21:10.753408 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-qqvpp" podUID="cb836777-e67d-4d21-a5e7-16ba9fc2ef39" May 17 00:21:11.234975 containerd[1460]: time="2025-05-17T00:21:11.234919751Z" level=info msg="StopPodSandbox for \"47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c\"" May 17 00:21:11.436979 systemd-networkd[1364]: caliaf2dcb77d97: Gained IPv6LL May 17 00:21:11.500968 systemd-networkd[1364]: cali95fbd00af49: Gained IPv6LL May 17 00:21:11.564943 systemd-networkd[1364]: cali845c7e16501: Gained IPv6LL May 17 00:21:11.754605 kubelet[2491]: E0517 00:21:11.754447 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:21:11.921247 containerd[1460]: 2025-05-17 00:21:11.492 [INFO][4500] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" May 17 00:21:11.921247 containerd[1460]: 2025-05-17 00:21:11.492 [INFO][4500] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" iface="eth0" netns="/var/run/netns/cni-391a4bc6-1bb2-1454-9876-7b63c5114007" May 17 00:21:11.921247 containerd[1460]: 2025-05-17 00:21:11.493 [INFO][4500] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" iface="eth0" netns="/var/run/netns/cni-391a4bc6-1bb2-1454-9876-7b63c5114007" May 17 00:21:11.921247 containerd[1460]: 2025-05-17 00:21:11.493 [INFO][4500] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" iface="eth0" netns="/var/run/netns/cni-391a4bc6-1bb2-1454-9876-7b63c5114007" May 17 00:21:11.921247 containerd[1460]: 2025-05-17 00:21:11.493 [INFO][4500] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" May 17 00:21:11.921247 containerd[1460]: 2025-05-17 00:21:11.493 [INFO][4500] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" May 17 00:21:11.921247 containerd[1460]: 2025-05-17 00:21:11.514 [INFO][4509] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" HandleID="k8s-pod-network.47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" Workload="localhost-k8s-csi--node--driver--4zkcv-eth0" May 17 00:21:11.921247 containerd[1460]: 2025-05-17 00:21:11.514 [INFO][4509] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:11.921247 containerd[1460]: 2025-05-17 00:21:11.514 [INFO][4509] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:11.921247 containerd[1460]: 2025-05-17 00:21:11.532 [WARNING][4509] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" HandleID="k8s-pod-network.47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" Workload="localhost-k8s-csi--node--driver--4zkcv-eth0" May 17 00:21:11.921247 containerd[1460]: 2025-05-17 00:21:11.532 [INFO][4509] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" HandleID="k8s-pod-network.47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" Workload="localhost-k8s-csi--node--driver--4zkcv-eth0" May 17 00:21:11.921247 containerd[1460]: 2025-05-17 00:21:11.915 [INFO][4509] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:11.921247 containerd[1460]: 2025-05-17 00:21:11.918 [INFO][4500] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" May 17 00:21:11.921982 containerd[1460]: time="2025-05-17T00:21:11.921416320Z" level=info msg="TearDown network for sandbox \"47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c\" successfully" May 17 00:21:11.921982 containerd[1460]: time="2025-05-17T00:21:11.921440957Z" level=info msg="StopPodSandbox for \"47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c\" returns successfully" May 17 00:21:11.922106 containerd[1460]: time="2025-05-17T00:21:11.922073804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4zkcv,Uid:7495b3bd-a626-4600-9f8e-cc5963e6df5a,Namespace:calico-system,Attempt:1,}" May 17 00:21:11.924166 systemd[1]: run-netns-cni\x2d391a4bc6\x2d1bb2\x2d1454\x2d9876\x2d7b63c5114007.mount: Deactivated successfully. 
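[Editor's note] The DEL flow above is deliberately tolerant: release by the CNI handleID first, log the "Asked to release address but it doesn't exist. Ignoring" WARNING if nothing is there, then retry by workload ID. A sketch of that idempotent release follows; store and releaseEndpoint are hypothetical stand-ins for the datastore calls, not Calico's API.

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("no allocations for that key")

// store is a hypothetical allocation table keyed by handle or workload ID.
type store map[string][]string

func (s store) release(key string) error {
	ips, ok := s[key]
	if !ok {
		return errNotFound
	}
	delete(s, key)
	fmt.Println("released", ips)
	return nil
}

// releaseEndpoint mirrors the DEL flow in the journal: try the CNI
// handleID, tolerate "doesn't exist" (the WARNING line), then fall back
// to the workload ID so repeated DELs stay idempotent.
func releaseEndpoint(s store, handleID, workloadID string) {
	if err := s.release(handleID); errors.Is(err, errNotFound) {
		fmt.Println("WARNING: asked to release address but it doesn't exist; ignoring")
		if err := s.release(workloadID); errors.Is(err, errNotFound) {
			fmt.Println("nothing held under the workload ID either; done")
		}
	}
}

func main() {
	s := store{}
	// A second DEL, like a kubelet teardown retry, is a harmless no-op.
	releaseEndpoint(s, "k8s-pod-network.47d8294d...", "localhost-k8s-csi--node--driver--4zkcv-eth0")
	releaseEndpoint(s, "k8s-pod-network.47d8294d...", "localhost-k8s-csi--node--driver--4zkcv-eth0")
}
```

Because every path ends in "nothing to do" rather than an error, repeated or out-of-order DELs cannot wedge sandbox teardown.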
May 17 00:21:12.234749 containerd[1460]: time="2025-05-17T00:21:12.234307978Z" level=info msg="StopPodSandbox for \"4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94\"" May 17 00:21:12.234749 containerd[1460]: time="2025-05-17T00:21:12.234391916Z" level=info msg="StopPodSandbox for \"bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7\"" May 17 00:21:12.706750 containerd[1460]: 2025-05-17 00:21:12.484 [INFO][4538] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" May 17 00:21:12.706750 containerd[1460]: 2025-05-17 00:21:12.485 [INFO][4538] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" iface="eth0" netns="/var/run/netns/cni-abb0559b-d177-086f-b74e-b71300366b3d" May 17 00:21:12.706750 containerd[1460]: 2025-05-17 00:21:12.485 [INFO][4538] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" iface="eth0" netns="/var/run/netns/cni-abb0559b-d177-086f-b74e-b71300366b3d" May 17 00:21:12.706750 containerd[1460]: 2025-05-17 00:21:12.486 [INFO][4538] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" iface="eth0" netns="/var/run/netns/cni-abb0559b-d177-086f-b74e-b71300366b3d" May 17 00:21:12.706750 containerd[1460]: 2025-05-17 00:21:12.486 [INFO][4538] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" May 17 00:21:12.706750 containerd[1460]: 2025-05-17 00:21:12.486 [INFO][4538] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" May 17 00:21:12.706750 containerd[1460]: 2025-05-17 00:21:12.505 [INFO][4554] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" HandleID="k8s-pod-network.bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" Workload="localhost-k8s-calico--apiserver--dbc85d568--xxnxn-eth0" May 17 00:21:12.706750 containerd[1460]: 2025-05-17 00:21:12.506 [INFO][4554] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:12.706750 containerd[1460]: 2025-05-17 00:21:12.506 [INFO][4554] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:12.706750 containerd[1460]: 2025-05-17 00:21:12.525 [WARNING][4554] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" HandleID="k8s-pod-network.bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" Workload="localhost-k8s-calico--apiserver--dbc85d568--xxnxn-eth0" May 17 00:21:12.706750 containerd[1460]: 2025-05-17 00:21:12.525 [INFO][4554] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" HandleID="k8s-pod-network.bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" Workload="localhost-k8s-calico--apiserver--dbc85d568--xxnxn-eth0" May 17 00:21:12.706750 containerd[1460]: 2025-05-17 00:21:12.702 [INFO][4554] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:21:12.706750 containerd[1460]: 2025-05-17 00:21:12.704 [INFO][4538] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" May 17 00:21:12.707265 containerd[1460]: time="2025-05-17T00:21:12.706924660Z" level=info msg="TearDown network for sandbox \"bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7\" successfully" May 17 00:21:12.707265 containerd[1460]: time="2025-05-17T00:21:12.706949266Z" level=info msg="StopPodSandbox for \"bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7\" returns successfully" May 17 00:21:12.707695 containerd[1460]: time="2025-05-17T00:21:12.707652937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dbc85d568-xxnxn,Uid:dcbcb463-2034-44b0-98b4-0b1740b2500e,Namespace:calico-apiserver,Attempt:1,}" May 17 00:21:12.710552 systemd[1]: run-netns-cni\x2dabb0559b\x2dd177\x2d086f\x2db74e\x2db71300366b3d.mount: Deactivated successfully. May 17 00:21:12.716701 containerd[1460]: 2025-05-17 00:21:12.484 [INFO][4539] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" May 17 00:21:12.716701 containerd[1460]: 2025-05-17 00:21:12.485 [INFO][4539] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" iface="eth0" netns="/var/run/netns/cni-b353d99a-5b76-eb0a-6153-86fd9006e16a" May 17 00:21:12.716701 containerd[1460]: 2025-05-17 00:21:12.486 [INFO][4539] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" iface="eth0" netns="/var/run/netns/cni-b353d99a-5b76-eb0a-6153-86fd9006e16a" May 17 00:21:12.716701 containerd[1460]: 2025-05-17 00:21:12.486 [INFO][4539] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" iface="eth0" netns="/var/run/netns/cni-b353d99a-5b76-eb0a-6153-86fd9006e16a" May 17 00:21:12.716701 containerd[1460]: 2025-05-17 00:21:12.486 [INFO][4539] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" May 17 00:21:12.716701 containerd[1460]: 2025-05-17 00:21:12.486 [INFO][4539] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" May 17 00:21:12.716701 containerd[1460]: 2025-05-17 00:21:12.513 [INFO][4556] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" HandleID="k8s-pod-network.4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" Workload="localhost-k8s-calico--kube--controllers--5d9df4f78--vb7v8-eth0" May 17 00:21:12.716701 containerd[1460]: 2025-05-17 00:21:12.513 [INFO][4556] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:12.716701 containerd[1460]: 2025-05-17 00:21:12.702 [INFO][4556] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:12.716701 containerd[1460]: 2025-05-17 00:21:12.708 [WARNING][4556] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" HandleID="k8s-pod-network.4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" Workload="localhost-k8s-calico--kube--controllers--5d9df4f78--vb7v8-eth0" May 17 00:21:12.716701 containerd[1460]: 2025-05-17 00:21:12.708 [INFO][4556] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" HandleID="k8s-pod-network.4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" Workload="localhost-k8s-calico--kube--controllers--5d9df4f78--vb7v8-eth0" May 17 00:21:12.716701 containerd[1460]: 2025-05-17 00:21:12.709 [INFO][4556] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:12.716701 containerd[1460]: 2025-05-17 00:21:12.714 [INFO][4539] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" May 17 00:21:12.717535 containerd[1460]: time="2025-05-17T00:21:12.716959023Z" level=info msg="TearDown network for sandbox \"4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94\" successfully" May 17 00:21:12.717535 containerd[1460]: time="2025-05-17T00:21:12.716979722Z" level=info msg="StopPodSandbox for \"4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94\" returns successfully" May 17 00:21:12.717535 containerd[1460]: time="2025-05-17T00:21:12.717504777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d9df4f78-vb7v8,Uid:7b05a098-fd89-437b-9657-38da60548e2f,Namespace:calico-system,Attempt:1,}" May 17 00:21:12.719475 systemd[1]: run-netns-cni\x2db353d99a\x2d5b76\x2deb0a\x2d6153\x2d86fd9006e16a.mount: Deactivated successfully. 
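[Editor's note] The run-netns-cni\x2d... mount units above show systemd's unit-name escaping: '/' becomes '-', and bytes outside [A-Za-z0-9:_.] are spelled \xNN, so every '-' inside the netns name comes out as \x2d. Below is a sketch of the common-case rules (what `systemd-escape --path` produces); real systemd additionally escapes a leading '.' and special-cases the empty path.

```go
package main

import "fmt"

// escapePath sketches systemd's path escaping for unit names: strip the
// leading '/', map the remaining '/' to '-', and spell every byte
// outside [A-Za-z0-9:_.] as \xNN.
func escapePath(p string) string {
	for len(p) > 0 && p[0] == '/' {
		p = p[1:]
	}
	out := make([]byte, 0, len(p))
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			out = append(out, '-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			out = append(out, c)
		default:
			out = append(out, fmt.Sprintf(`\x%02x`, c)...)
		}
	}
	return string(out)
}

func main() {
	fmt.Println(escapePath("/run/netns/cni-b353d99a-5b76-eb0a-6153-86fd9006e16a") + ".mount")
	// run-netns-cni\x2db353d99a\x2d5b76\x2deb0a\x2d6153\x2d86fd9006e16a.mount
}
```

The output matches the mount unit named in the journal line above byte for byte.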
May 17 00:21:12.774916 kubelet[2491]: E0517 00:21:12.774319 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:21:12.899715 systemd-networkd[1364]: calie8bfa7df6f0: Link UP May 17 00:21:12.903816 systemd-networkd[1364]: calie8bfa7df6f0: Gained carrier May 17 00:21:12.920273 containerd[1460]: 2025-05-17 00:21:12.808 [INFO][4570] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--4zkcv-eth0 csi-node-driver- calico-system 7495b3bd-a626-4600-9f8e-cc5963e6df5a 1089 0 2025-05-17 00:20:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-4zkcv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie8bfa7df6f0 [] [] }} ContainerID="5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94" Namespace="calico-system" Pod="csi-node-driver-4zkcv" WorkloadEndpoint="localhost-k8s-csi--node--driver--4zkcv-" May 17 00:21:12.920273 containerd[1460]: 2025-05-17 00:21:12.809 [INFO][4570] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94" Namespace="calico-system" Pod="csi-node-driver-4zkcv" WorkloadEndpoint="localhost-k8s-csi--node--driver--4zkcv-eth0" May 17 00:21:12.920273 containerd[1460]: 2025-05-17 00:21:12.843 [INFO][4584] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94" HandleID="k8s-pod-network.5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94" Workload="localhost-k8s-csi--node--driver--4zkcv-eth0" May 17 00:21:12.920273 containerd[1460]: 2025-05-17 00:21:12.843 [INFO][4584] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94" HandleID="k8s-pod-network.5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94" Workload="localhost-k8s-csi--node--driver--4zkcv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138e90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-4zkcv", "timestamp":"2025-05-17 00:21:12.843081218 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:21:12.920273 containerd[1460]: 2025-05-17 00:21:12.843 [INFO][4584] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:12.920273 containerd[1460]: 2025-05-17 00:21:12.843 [INFO][4584] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:21:12.920273 containerd[1460]: 2025-05-17 00:21:12.843 [INFO][4584] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:21:12.920273 containerd[1460]: 2025-05-17 00:21:12.852 [INFO][4584] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94" host="localhost" May 17 00:21:12.920273 containerd[1460]: 2025-05-17 00:21:12.856 [INFO][4584] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:21:12.920273 containerd[1460]: 2025-05-17 00:21:12.865 [INFO][4584] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:21:12.920273 containerd[1460]: 2025-05-17 00:21:12.867 [INFO][4584] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:21:12.920273 containerd[1460]: 2025-05-17 00:21:12.870 [INFO][4584] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:21:12.920273 containerd[1460]: 2025-05-17 00:21:12.870 [INFO][4584] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94" host="localhost" May 17 00:21:12.920273 containerd[1460]: 2025-05-17 00:21:12.874 [INFO][4584] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94 May 17 00:21:12.920273 containerd[1460]: 2025-05-17 00:21:12.878 [INFO][4584] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94" host="localhost" May 17 00:21:12.920273 containerd[1460]: 2025-05-17 00:21:12.887 [INFO][4584] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94" host="localhost" May 17 00:21:12.920273 containerd[1460]: 2025-05-17 00:21:12.887 [INFO][4584] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94" host="localhost" May 17 00:21:12.920273 containerd[1460]: 2025-05-17 00:21:12.887 [INFO][4584] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
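[Editor's note] The recurring kubelet dns.go:153 error is the glibc resolver limit: resolv.conf honors at most three nameserver entries (MAXNS = 3), so kubelet drops the rest and logs the line that was actually applied, here "1.1.1.1 1.0.0.1 8.8.8.8". A sketch of that clamping follows; clampNameservers is a hypothetical helper, not kubelet's own code, and the fourth entry is invented for the demo (the journal only implies the node had more than three).

```go
package main

import "fmt"

// maxNS mirrors the glibc resolver limit (MAXNS = 3) that kubelet
// enforces: resolv.conf nameservers past the third are dropped, which
// is exactly the dns.go:153 error repeated through this journal.
const maxNS = 3

// clampNameservers is a hypothetical helper, not kubelet's function.
func clampNameservers(ns []string) (applied []string, truncated bool) {
	if len(ns) <= maxNS {
		return ns, false
	}
	return ns[:maxNS], true
}

func main() {
	// "9.9.9.9" is a made-up fourth entry to trigger the truncation.
	applied, truncated := clampNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
	if truncated {
		fmt.Printf("Nameserver limits were exceeded, the applied nameserver line is: %v\n", applied)
	}
}
```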
May 17 00:21:12.920273 containerd[1460]: 2025-05-17 00:21:12.888 [INFO][4584] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94" HandleID="k8s-pod-network.5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94" Workload="localhost-k8s-csi--node--driver--4zkcv-eth0" May 17 00:21:12.920888 containerd[1460]: 2025-05-17 00:21:12.892 [INFO][4570] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94" Namespace="calico-system" Pod="csi-node-driver-4zkcv" WorkloadEndpoint="localhost-k8s-csi--node--driver--4zkcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4zkcv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7495b3bd-a626-4600-9f8e-cc5963e6df5a", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-4zkcv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie8bfa7df6f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:12.920888 containerd[1460]: 2025-05-17 00:21:12.892 [INFO][4570] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94" Namespace="calico-system" Pod="csi-node-driver-4zkcv" WorkloadEndpoint="localhost-k8s-csi--node--driver--4zkcv-eth0" May 17 00:21:12.920888 containerd[1460]: 2025-05-17 00:21:12.892 [INFO][4570] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8bfa7df6f0 ContainerID="5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94" Namespace="calico-system" Pod="csi-node-driver-4zkcv" WorkloadEndpoint="localhost-k8s-csi--node--driver--4zkcv-eth0" May 17 00:21:12.920888 containerd[1460]: 2025-05-17 00:21:12.901 [INFO][4570] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94" Namespace="calico-system" Pod="csi-node-driver-4zkcv" WorkloadEndpoint="localhost-k8s-csi--node--driver--4zkcv-eth0" May 17 00:21:12.920888 containerd[1460]: 2025-05-17 00:21:12.901 [INFO][4570] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94" Namespace="calico-system" Pod="csi-node-driver-4zkcv" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--4zkcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4zkcv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7495b3bd-a626-4600-9f8e-cc5963e6df5a", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94", Pod:"csi-node-driver-4zkcv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie8bfa7df6f0", MAC:"22:bf:79:dc:e3:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:12.920888 containerd[1460]: 2025-05-17 00:21:12.913 [INFO][4570] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94" Namespace="calico-system" Pod="csi-node-driver-4zkcv" WorkloadEndpoint="localhost-k8s-csi--node--driver--4zkcv-eth0" May 17 00:21:12.984622 containerd[1460]: time="2025-05-17T00:21:12.984428907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:12.984622 containerd[1460]: time="2025-05-17T00:21:12.984507314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:12.984622 containerd[1460]: time="2025-05-17T00:21:12.984534115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:12.985654 containerd[1460]: time="2025-05-17T00:21:12.985486893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:13.014902 systemd[1]: Started cri-containerd-5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94.scope - libcontainer container 5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94. 
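[Editor's note] The "Started cri-containerd-<id>.scope" lines reflect the systemd cgroup driver: each container the runc shim launches runs in a transient scope unit derived from its ID, which is how journal entries correlate a container with its cgroup. A trivial sketch of that naming, with the "cri-containerd" prefix taken from this log; the helper itself is illustrative, not containerd API.

```go
package main

import "fmt"

// scopeUnit reproduces the unit name visible in the journal's
// "Started cri-containerd-<id>.scope" lines.
func scopeUnit(containerID string) string {
	return fmt.Sprintf("cri-containerd-%s.scope", containerID)
}

func main() {
	fmt.Println(scopeUnit("5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94"))
}
```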
May 17 00:21:13.026439 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:21:13.038522 containerd[1460]: time="2025-05-17T00:21:13.038476731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4zkcv,Uid:7495b3bd-a626-4600-9f8e-cc5963e6df5a,Namespace:calico-system,Attempt:1,} returns sandbox id \"5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94\"" May 17 00:21:13.234057 containerd[1460]: time="2025-05-17T00:21:13.233955330Z" level=info msg="StopPodSandbox for \"423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a\"" May 17 00:21:13.303444 systemd-networkd[1364]: calia688318cc1b: Link UP May 17 00:21:13.307911 systemd-networkd[1364]: calia688318cc1b: Gained carrier May 17 00:21:13.437201 systemd[1]: Started sshd@9-10.0.0.98:22-10.0.0.1:37688.service - OpenSSH per-connection server daemon (10.0.0.1:37688). May 17 00:21:13.472871 containerd[1460]: 2025-05-17 00:21:12.866 [INFO][4596] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--dbc85d568--xxnxn-eth0 calico-apiserver-dbc85d568- calico-apiserver dcbcb463-2034-44b0-98b4-0b1740b2500e 1096 0 2025-05-17 00:20:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dbc85d568 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-dbc85d568-xxnxn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia688318cc1b [] [] }} ContainerID="ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651" Namespace="calico-apiserver" Pod="calico-apiserver-dbc85d568-xxnxn" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbc85d568--xxnxn-" May 17 00:21:13.472871 containerd[1460]: 2025-05-17 00:21:12.866 [INFO][4596] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651" Namespace="calico-apiserver" Pod="calico-apiserver-dbc85d568-xxnxn" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbc85d568--xxnxn-eth0" May 17 00:21:13.472871 containerd[1460]: 2025-05-17 00:21:12.902 [INFO][4633] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651" HandleID="k8s-pod-network.ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651" Workload="localhost-k8s-calico--apiserver--dbc85d568--xxnxn-eth0" May 17 00:21:13.472871 containerd[1460]: 2025-05-17 00:21:12.902 [INFO][4633] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651" HandleID="k8s-pod-network.ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651" Workload="localhost-k8s-calico--apiserver--dbc85d568--xxnxn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138e20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-dbc85d568-xxnxn", "timestamp":"2025-05-17 00:21:12.902246157 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:21:13.472871 
containerd[1460]: 2025-05-17 00:21:12.902 [INFO][4633] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:13.472871 containerd[1460]: 2025-05-17 00:21:12.902 [INFO][4633] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:13.472871 containerd[1460]: 2025-05-17 00:21:12.902 [INFO][4633] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:21:13.472871 containerd[1460]: 2025-05-17 00:21:12.953 [INFO][4633] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651" host="localhost" May 17 00:21:13.472871 containerd[1460]: 2025-05-17 00:21:13.148 [INFO][4633] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:21:13.472871 containerd[1460]: 2025-05-17 00:21:13.182 [INFO][4633] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:21:13.472871 containerd[1460]: 2025-05-17 00:21:13.184 [INFO][4633] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:21:13.472871 containerd[1460]: 2025-05-17 00:21:13.187 [INFO][4633] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:21:13.472871 containerd[1460]: 2025-05-17 00:21:13.187 [INFO][4633] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651" host="localhost" May 17 00:21:13.472871 containerd[1460]: 2025-05-17 00:21:13.192 [INFO][4633] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651 May 17 00:21:13.472871 containerd[1460]: 2025-05-17 00:21:13.197 [INFO][4633] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651" host="localhost" May 17 00:21:13.472871 containerd[1460]: 2025-05-17 00:21:13.277 [INFO][4633] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651" host="localhost" May 17 00:21:13.472871 containerd[1460]: 2025-05-17 00:21:13.277 [INFO][4633] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651" host="localhost" May 17 00:21:13.472871 containerd[1460]: 2025-05-17 00:21:13.277 [INFO][4633] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
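[Editor's note] The timestamps show the host-wide IPAM lock doing real work here: request [4633] held it from 00:21:12.902 to 00:21:13.277, while the concurrent request [4640] logged "About to acquire" at 12.924 but "Acquired" only at 13.278 (further down). Concurrent sandbox ADDs therefore serialize, one block read-modify-write at a time. A minimal sketch of that serialization, with a plain sync.Mutex standing in for Calico's host-wide lock:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// ipamLock stands in for Calico's host-wide IPAM lock; concurrent pod
// ADDs serialize on it exactly as the [4633]/[4640] timestamps show.
var ipamLock sync.Mutex

func assign(req string, work time.Duration) {
	fmt.Println(req, "about to acquire host-wide IPAM lock")
	ipamLock.Lock()
	fmt.Println(req, "acquired host-wide IPAM lock")
	time.Sleep(work) // read the block, pick an IP, write the block back
	ipamLock.Unlock()
	fmt.Println(req, "released host-wide IPAM lock")
}

func main() {
	var wg sync.WaitGroup
	for _, req := range []string{"[4633]", "[4640]"} {
		wg.Add(1)
		go func(r string) {
			defer wg.Done()
			assign(r, 300*time.Millisecond)
		}(req)
	}
	wg.Wait()
}
```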
May 17 00:21:13.472871 containerd[1460]: 2025-05-17 00:21:13.277 [INFO][4633] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651" HandleID="k8s-pod-network.ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651" Workload="localhost-k8s-calico--apiserver--dbc85d568--xxnxn-eth0" May 17 00:21:13.473429 containerd[1460]: 2025-05-17 00:21:13.284 [INFO][4596] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651" Namespace="calico-apiserver" Pod="calico-apiserver-dbc85d568-xxnxn" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbc85d568--xxnxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dbc85d568--xxnxn-eth0", GenerateName:"calico-apiserver-dbc85d568-", Namespace:"calico-apiserver", SelfLink:"", UID:"dcbcb463-2034-44b0-98b4-0b1740b2500e", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dbc85d568", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-dbc85d568-xxnxn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia688318cc1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:13.473429 containerd[1460]: 2025-05-17 00:21:13.285 [INFO][4596] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651" Namespace="calico-apiserver" Pod="calico-apiserver-dbc85d568-xxnxn" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbc85d568--xxnxn-eth0" May 17 00:21:13.473429 containerd[1460]: 2025-05-17 00:21:13.285 [INFO][4596] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia688318cc1b ContainerID="ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651" Namespace="calico-apiserver" Pod="calico-apiserver-dbc85d568-xxnxn" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbc85d568--xxnxn-eth0" May 17 00:21:13.473429 containerd[1460]: 2025-05-17 00:21:13.308 [INFO][4596] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651" Namespace="calico-apiserver" Pod="calico-apiserver-dbc85d568-xxnxn" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbc85d568--xxnxn-eth0" May 17 00:21:13.473429 containerd[1460]: 2025-05-17 00:21:13.309 [INFO][4596] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651" Namespace="calico-apiserver" Pod="calico-apiserver-dbc85d568-xxnxn" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbc85d568--xxnxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dbc85d568--xxnxn-eth0", GenerateName:"calico-apiserver-dbc85d568-", Namespace:"calico-apiserver", SelfLink:"", UID:"dcbcb463-2034-44b0-98b4-0b1740b2500e", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dbc85d568", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651", Pod:"calico-apiserver-dbc85d568-xxnxn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia688318cc1b", MAC:"92:ef:91:31:ac:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:13.473429 containerd[1460]: 2025-05-17 00:21:13.469 [INFO][4596] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651" Namespace="calico-apiserver" Pod="calico-apiserver-dbc85d568-xxnxn" WorkloadEndpoint="localhost-k8s-calico--apiserver--dbc85d568--xxnxn-eth0" May 17 00:21:13.510856 containerd[1460]: time="2025-05-17T00:21:13.507341079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:13.510856 containerd[1460]: time="2025-05-17T00:21:13.507407424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:13.510856 containerd[1460]: time="2025-05-17T00:21:13.507426960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:13.510856 containerd[1460]: time="2025-05-17T00:21:13.507513332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:13.516880 sshd[4733]: Accepted publickey for core from 10.0.0.1 port 37688 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:21:13.519594 sshd[4733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:13.532462 systemd-networkd[1364]: calie7d0be5d8a6: Link UP May 17 00:21:13.533908 systemd[1]: Started cri-containerd-ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651.scope - libcontainer container ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651. 
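[Editor's note] The sshd "Accepted publickey" line identifies the key only by its "SHA256:..." fingerprint, the unpadded base64 of the SHA-256 digest of the raw public-key blob. golang.org/x/crypto/ssh computes the same string, so a journal entry can be matched against an authorized_keys file; the path below is an assumption for illustration.

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Path assumed for illustration; any single authorized_keys line works.
	line, err := os.ReadFile("/home/core/.ssh/authorized_keys")
	if err != nil {
		panic(err)
	}
	pub, _, _, _, err := ssh.ParseAuthorizedKey(line)
	if err != nil {
		panic(err)
	}
	// Prints the same "SHA256:..." form sshd logged above.
	fmt.Println(ssh.FingerprintSHA256(pub))
}
```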
May 17 00:21:13.536317 systemd-networkd[1364]: calie7d0be5d8a6: Gained carrier May 17 00:21:13.540950 systemd-logind[1446]: New session 10 of user core. May 17 00:21:13.550171 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 00:21:13.578411 containerd[1460]: 2025-05-17 00:21:12.880 [INFO][4599] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5d9df4f78--vb7v8-eth0 calico-kube-controllers-5d9df4f78- calico-system 7b05a098-fd89-437b-9657-38da60548e2f 1097 0 2025-05-17 00:20:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d9df4f78 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5d9df4f78-vb7v8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie7d0be5d8a6 [] [] }} ContainerID="4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de" Namespace="calico-system" Pod="calico-kube-controllers-5d9df4f78-vb7v8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d9df4f78--vb7v8-" May 17 00:21:13.578411 containerd[1460]: 2025-05-17 00:21:12.880 [INFO][4599] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de" Namespace="calico-system" Pod="calico-kube-controllers-5d9df4f78-vb7v8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d9df4f78--vb7v8-eth0" May 17 00:21:13.578411 containerd[1460]: 2025-05-17 00:21:12.923 [INFO][4640] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de" HandleID="k8s-pod-network.4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de" Workload="localhost-k8s-calico--kube--controllers--5d9df4f78--vb7v8-eth0" May 17 00:21:13.578411 containerd[1460]: 2025-05-17 00:21:12.924 [INFO][4640] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de" HandleID="k8s-pod-network.4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de" Workload="localhost-k8s-calico--kube--controllers--5d9df4f78--vb7v8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b2890), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5d9df4f78-vb7v8", "timestamp":"2025-05-17 00:21:12.923899607 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:21:13.578411 containerd[1460]: 2025-05-17 00:21:12.924 [INFO][4640] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:13.578411 containerd[1460]: 2025-05-17 00:21:13.278 [INFO][4640] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:21:13.578411 containerd[1460]: 2025-05-17 00:21:13.278 [INFO][4640] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:21:13.578411 containerd[1460]: 2025-05-17 00:21:13.383 [INFO][4640] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de" host="localhost" May 17 00:21:13.578411 containerd[1460]: 2025-05-17 00:21:13.470 [INFO][4640] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:21:13.578411 containerd[1460]: 2025-05-17 00:21:13.476 [INFO][4640] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:21:13.578411 containerd[1460]: 2025-05-17 00:21:13.479 [INFO][4640] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:21:13.578411 containerd[1460]: 2025-05-17 00:21:13.481 [INFO][4640] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:21:13.578411 containerd[1460]: 2025-05-17 00:21:13.481 [INFO][4640] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de" host="localhost" May 17 00:21:13.578411 containerd[1460]: 2025-05-17 00:21:13.486 [INFO][4640] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de May 17 00:21:13.578411 containerd[1460]: 2025-05-17 00:21:13.493 [INFO][4640] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de" host="localhost" May 17 00:21:13.578411 containerd[1460]: 2025-05-17 00:21:13.510 [INFO][4640] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de" host="localhost" May 17 00:21:13.578411 containerd[1460]: 2025-05-17 00:21:13.510 [INFO][4640] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de" host="localhost" May 17 00:21:13.578411 containerd[1460]: 2025-05-17 00:21:13.510 [INFO][4640] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
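[Editor's note] With .135 claimed here, the journal has now shown .132 through .135 handed out of the same host-affine block within a few seconds. A /26 holds 64 addresses, so one block absorbs considerable pod churn before Calico must claim another; a quick check of that arithmetic:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	ones, bits := cidr.Mask.Size()
	// 2^(32-26) = 64 addresses, .128 through .191, in one host-affine block.
	fmt.Printf("block %s holds %d addresses\n", cidr, 1<<(bits-ones))
}
```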
May 17 00:21:13.578411 containerd[1460]: 2025-05-17 00:21:13.510 [INFO][4640] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de" HandleID="k8s-pod-network.4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de" Workload="localhost-k8s-calico--kube--controllers--5d9df4f78--vb7v8-eth0" May 17 00:21:13.579118 containerd[1460]: 2025-05-17 00:21:13.525 [INFO][4599] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de" Namespace="calico-system" Pod="calico-kube-controllers-5d9df4f78-vb7v8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d9df4f78--vb7v8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d9df4f78--vb7v8-eth0", GenerateName:"calico-kube-controllers-5d9df4f78-", Namespace:"calico-system", SelfLink:"", UID:"7b05a098-fd89-437b-9657-38da60548e2f", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d9df4f78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5d9df4f78-vb7v8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie7d0be5d8a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:13.579118 containerd[1460]: 2025-05-17 00:21:13.525 [INFO][4599] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de" Namespace="calico-system" Pod="calico-kube-controllers-5d9df4f78-vb7v8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d9df4f78--vb7v8-eth0" May 17 00:21:13.579118 containerd[1460]: 2025-05-17 00:21:13.525 [INFO][4599] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie7d0be5d8a6 ContainerID="4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de" Namespace="calico-system" Pod="calico-kube-controllers-5d9df4f78-vb7v8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d9df4f78--vb7v8-eth0" May 17 00:21:13.579118 containerd[1460]: 2025-05-17 00:21:13.545 [INFO][4599] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de" Namespace="calico-system" Pod="calico-kube-controllers-5d9df4f78-vb7v8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d9df4f78--vb7v8-eth0" May 17 00:21:13.579118 containerd[1460]: 2025-05-17 00:21:13.545 [INFO][4599] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de" Namespace="calico-system" Pod="calico-kube-controllers-5d9df4f78-vb7v8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d9df4f78--vb7v8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d9df4f78--vb7v8-eth0", GenerateName:"calico-kube-controllers-5d9df4f78-", Namespace:"calico-system", SelfLink:"", UID:"7b05a098-fd89-437b-9657-38da60548e2f", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d9df4f78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de", Pod:"calico-kube-controllers-5d9df4f78-vb7v8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie7d0be5d8a6", MAC:"f6:90:43:79:20:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:13.579118 containerd[1460]: 2025-05-17 00:21:13.563 [INFO][4599] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de" Namespace="calico-system" Pod="calico-kube-controllers-5d9df4f78-vb7v8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5d9df4f78--vb7v8-eth0" May 17 00:21:13.579369 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:21:13.580956 containerd[1460]: 2025-05-17 00:21:13.397 [INFO][4713] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" May 17 00:21:13.580956 containerd[1460]: 2025-05-17 00:21:13.397 [INFO][4713] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" iface="eth0" netns="/var/run/netns/cni-900e607f-38b1-3348-ba86-f16fd0f0f4a2" May 17 00:21:13.580956 containerd[1460]: 2025-05-17 00:21:13.397 [INFO][4713] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" iface="eth0" netns="/var/run/netns/cni-900e607f-38b1-3348-ba86-f16fd0f0f4a2" May 17 00:21:13.580956 containerd[1460]: 2025-05-17 00:21:13.397 [INFO][4713] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" iface="eth0" netns="/var/run/netns/cni-900e607f-38b1-3348-ba86-f16fd0f0f4a2" May 17 00:21:13.580956 containerd[1460]: 2025-05-17 00:21:13.397 [INFO][4713] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" May 17 00:21:13.580956 containerd[1460]: 2025-05-17 00:21:13.398 [INFO][4713] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" May 17 00:21:13.580956 containerd[1460]: 2025-05-17 00:21:13.418 [INFO][4724] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" HandleID="k8s-pod-network.423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" Workload="localhost-k8s-coredns--7c65d6cfc9--75lgr-eth0" May 17 00:21:13.580956 containerd[1460]: 2025-05-17 00:21:13.418 [INFO][4724] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:13.580956 containerd[1460]: 2025-05-17 00:21:13.510 [INFO][4724] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:13.580956 containerd[1460]: 2025-05-17 00:21:13.539 [WARNING][4724] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" HandleID="k8s-pod-network.423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" Workload="localhost-k8s-coredns--7c65d6cfc9--75lgr-eth0" May 17 00:21:13.580956 containerd[1460]: 2025-05-17 00:21:13.540 [INFO][4724] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" HandleID="k8s-pod-network.423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" Workload="localhost-k8s-coredns--7c65d6cfc9--75lgr-eth0" May 17 00:21:13.580956 containerd[1460]: 2025-05-17 00:21:13.560 [INFO][4724] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:13.580956 containerd[1460]: 2025-05-17 00:21:13.576 [INFO][4713] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" May 17 00:21:13.581665 containerd[1460]: time="2025-05-17T00:21:13.581632163Z" level=info msg="TearDown network for sandbox \"423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a\" successfully" May 17 00:21:13.581665 containerd[1460]: time="2025-05-17T00:21:13.581662299Z" level=info msg="StopPodSandbox for \"423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a\" returns successfully" May 17 00:21:13.582088 kubelet[2491]: E0517 00:21:13.582000 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:21:13.584825 containerd[1460]: time="2025-05-17T00:21:13.584165808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-75lgr,Uid:c5389095-df96-41b4-8890-9e655dbc39b6,Namespace:kube-system,Attempt:1,}" May 17 00:21:13.633859 containerd[1460]: time="2025-05-17T00:21:13.633811609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dbc85d568-xxnxn,Uid:dcbcb463-2034-44b0-98b4-0b1740b2500e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651\"" May 17 00:21:13.736071 sshd[4733]: pam_unix(sshd:session): session closed for user core May 17 00:21:13.741643 containerd[1460]: time="2025-05-17T00:21:13.741474873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:13.741643 containerd[1460]: time="2025-05-17T00:21:13.741578418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:13.741643 containerd[1460]: time="2025-05-17T00:21:13.741605739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:13.742715 containerd[1460]: time="2025-05-17T00:21:13.742638327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:13.745979 systemd[1]: sshd@9-10.0.0.98:22-10.0.0.1:37688.service: Deactivated successfully. May 17 00:21:13.747603 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:21:13.749855 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit. May 17 00:21:13.758940 systemd[1]: Started sshd@10-10.0.0.98:22-10.0.0.1:37696.service - OpenSSH per-connection server daemon (10.0.0.1:37696). May 17 00:21:13.762495 systemd-logind[1446]: Removed session 10. May 17 00:21:13.763960 systemd[1]: Started cri-containerd-4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de.scope - libcontainer container 4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de. May 17 00:21:13.783919 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:21:13.796708 sshd[4831]: Accepted publickey for core from 10.0.0.1 port 37696 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:21:13.798480 sshd[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:13.805417 systemd-logind[1446]: New session 11 of user core. May 17 00:21:13.811961 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 17 00:21:13.835050 containerd[1460]: time="2025-05-17T00:21:13.834758190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d9df4f78-vb7v8,Uid:7b05a098-fd89-437b-9657-38da60548e2f,Namespace:calico-system,Attempt:1,} returns sandbox id \"4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de\"" May 17 00:21:13.913132 systemd-networkd[1364]: cali3cc3b1be1a6: Link UP May 17 00:21:13.914535 systemd-networkd[1364]: cali3cc3b1be1a6: Gained carrier May 17 00:21:13.930710 systemd[1]: run-netns-cni\x2d900e607f\x2d38b1\x2d3348\x2dba86\x2df16fd0f0f4a2.mount: Deactivated successfully. May 17 00:21:13.938316 containerd[1460]: 2025-05-17 00:21:13.830 [INFO][4849] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--75lgr-eth0 coredns-7c65d6cfc9- kube-system c5389095-df96-41b4-8890-9e655dbc39b6 1111 0 2025-05-17 00:20:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-75lgr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3cc3b1be1a6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-75lgr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--75lgr-" May 17 00:21:13.938316 containerd[1460]: 2025-05-17 00:21:13.830 [INFO][4849] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-75lgr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--75lgr-eth0" May 17 00:21:13.938316 containerd[1460]: 2025-05-17 00:21:13.863 [INFO][4871] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e" HandleID="k8s-pod-network.3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e" Workload="localhost-k8s-coredns--7c65d6cfc9--75lgr-eth0" May 17 00:21:13.938316 containerd[1460]: 2025-05-17 00:21:13.863 [INFO][4871] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e" HandleID="k8s-pod-network.3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e" Workload="localhost-k8s-coredns--7c65d6cfc9--75lgr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001395b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-75lgr", "timestamp":"2025-05-17 00:21:13.863274457 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:21:13.938316 containerd[1460]: 2025-05-17 00:21:13.863 [INFO][4871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:13.938316 containerd[1460]: 2025-05-17 00:21:13.863 [INFO][4871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:21:13.938316 containerd[1460]: 2025-05-17 00:21:13.863 [INFO][4871] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 17 00:21:13.938316 containerd[1460]: 2025-05-17 00:21:13.869 [INFO][4871] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e" host="localhost" May 17 00:21:13.938316 containerd[1460]: 2025-05-17 00:21:13.879 [INFO][4871] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 17 00:21:13.938316 containerd[1460]: 2025-05-17 00:21:13.883 [INFO][4871] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 17 00:21:13.938316 containerd[1460]: 2025-05-17 00:21:13.884 [INFO][4871] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 17 00:21:13.938316 containerd[1460]: 2025-05-17 00:21:13.886 [INFO][4871] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 17 00:21:13.938316 containerd[1460]: 2025-05-17 00:21:13.887 [INFO][4871] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e" host="localhost" May 17 00:21:13.938316 containerd[1460]: 2025-05-17 00:21:13.888 [INFO][4871] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e May 17 00:21:13.938316 containerd[1460]: 2025-05-17 00:21:13.894 [INFO][4871] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e" host="localhost" May 17 00:21:13.938316 containerd[1460]: 2025-05-17 00:21:13.901 [INFO][4871] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e" host="localhost" May 17 00:21:13.938316 containerd[1460]: 2025-05-17 00:21:13.901 [INFO][4871] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e" host="localhost" May 17 00:21:13.938316 containerd[1460]: 2025-05-17 00:21:13.902 [INFO][4871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 00:21:13.938316 containerd[1460]: 2025-05-17 00:21:13.902 [INFO][4871] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e" HandleID="k8s-pod-network.3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e" Workload="localhost-k8s-coredns--7c65d6cfc9--75lgr-eth0" May 17 00:21:13.939597 containerd[1460]: 2025-05-17 00:21:13.905 [INFO][4849] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-75lgr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--75lgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--75lgr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c5389095-df96-41b4-8890-9e655dbc39b6", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-75lgr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3cc3b1be1a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:13.939597 containerd[1460]: 2025-05-17 00:21:13.906 [INFO][4849] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-75lgr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--75lgr-eth0" May 17 00:21:13.939597 containerd[1460]: 2025-05-17 00:21:13.906 [INFO][4849] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3cc3b1be1a6 ContainerID="3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-75lgr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--75lgr-eth0" May 17 00:21:13.939597 containerd[1460]: 2025-05-17 00:21:13.913 [INFO][4849] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-75lgr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--75lgr-eth0" May 17 00:21:13.939597 
containerd[1460]: 2025-05-17 00:21:13.914 [INFO][4849] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-75lgr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--75lgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--75lgr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c5389095-df96-41b4-8890-9e655dbc39b6", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e", Pod:"coredns-7c65d6cfc9-75lgr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3cc3b1be1a6", MAC:"da:18:79:45:9d:b7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:13.939597 containerd[1460]: 2025-05-17 00:21:13.927 [INFO][4849] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-75lgr" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--75lgr-eth0" May 17 00:21:13.977870 containerd[1460]: time="2025-05-17T00:21:13.975674361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:21:13.977870 containerd[1460]: time="2025-05-17T00:21:13.975739142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:21:13.977870 containerd[1460]: time="2025-05-17T00:21:13.975756825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:13.977870 containerd[1460]: time="2025-05-17T00:21:13.975892420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:21:13.998843 systemd[1]: run-containerd-runc-k8s.io-3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e-runc.rujc3o.mount: Deactivated successfully. 
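[Editor's note] The WorkloadEndpoint struct dumps above print ports in hex, so the coredns entries decode to the familiar values: Port:0x35 is 53 (the dns and dns-tcp entries) and Port:0x23c1 is 9153 (coredns metrics). A two-line check:

package main

import "fmt"

func main() {
	// Hex ports from the WorkloadEndpoint dumps, decoded to decimal.
	fmt.Println(0x35)   // 53   -> dns, dns-tcp
	fmt.Println(0x23c1) // 9153 -> coredns metrics
}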
May 17 00:21:14.006953 systemd[1]: Started cri-containerd-3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e.scope - libcontainer container 3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e. May 17 00:21:14.024415 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 17 00:21:14.028302 sshd[4831]: pam_unix(sshd:session): session closed for user core May 17 00:21:14.036417 systemd[1]: sshd@10-10.0.0.98:22-10.0.0.1:37696.service: Deactivated successfully. May 17 00:21:14.039265 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:21:14.042304 systemd-logind[1446]: Session 11 logged out. Waiting for processes to exit. May 17 00:21:14.050224 systemd[1]: Started sshd@11-10.0.0.98:22-10.0.0.1:37710.service - OpenSSH per-connection server daemon (10.0.0.1:37710). May 17 00:21:14.051099 systemd-logind[1446]: Removed session 11. May 17 00:21:14.065632 containerd[1460]: time="2025-05-17T00:21:14.065232843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-75lgr,Uid:c5389095-df96-41b4-8890-9e655dbc39b6,Namespace:kube-system,Attempt:1,} returns sandbox id \"3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e\"" May 17 00:21:14.068523 kubelet[2491]: E0517 00:21:14.068363 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:21:14.073197 containerd[1460]: time="2025-05-17T00:21:14.072791719Z" level=info msg="CreateContainer within sandbox \"3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:21:14.092219 containerd[1460]: time="2025-05-17T00:21:14.092147062Z" level=info msg="CreateContainer within sandbox \"3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b4aa50420c982e0074641ae325b870837aec300ffcd4306ef668e785df2e7ce5\"" May 17 00:21:14.093246 containerd[1460]: time="2025-05-17T00:21:14.093217811Z" level=info msg="StartContainer for \"b4aa50420c982e0074641ae325b870837aec300ffcd4306ef668e785df2e7ce5\"" May 17 00:21:14.095190 sshd[4935]: Accepted publickey for core from 10.0.0.1 port 37710 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:21:14.097579 sshd[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:14.103043 systemd-logind[1446]: New session 12 of user core. May 17 00:21:14.107382 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 00:21:14.126928 systemd[1]: Started cri-containerd-b4aa50420c982e0074641ae325b870837aec300ffcd4306ef668e785df2e7ce5.scope - libcontainer container b4aa50420c982e0074641ae325b870837aec300ffcd4306ef668e785df2e7ce5. May 17 00:21:14.201906 containerd[1460]: time="2025-05-17T00:21:14.201676502Z" level=info msg="StartContainer for \"b4aa50420c982e0074641ae325b870837aec300ffcd4306ef668e785df2e7ce5\" returns successfully" May 17 00:21:14.262696 sshd[4935]: pam_unix(sshd:session): session closed for user core May 17 00:21:14.267791 systemd[1]: sshd@11-10.0.0.98:22-10.0.0.1:37710.service: Deactivated successfully. May 17 00:21:14.271250 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:21:14.273895 systemd-logind[1446]: Session 12 logged out. Waiting for processes to exit. 
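[Editor's note] The coredns lines above show the CRI ordering containerd follows: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, and the returned container id feeds StartContainer. A minimal sketch of that contract with a hypothetical client interface (the real gRPC types live in k8s.io/cri-api; these are not them):

package main

import "fmt"

// runtimeClient is a hypothetical stand-in for a CRI runtime client.
type runtimeClient interface {
	RunPodSandbox(pod string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

// startPod follows the ordering visible in the log, failing fast at each step.
func startPod(rt runtimeClient, pod, ctr string) error {
	sb, err := rt.RunPodSandbox(pod)
	if err != nil {
		return fmt.Errorf("RunPodSandbox: %w", err)
	}
	cid, err := rt.CreateContainer(sb, ctr)
	if err != nil {
		return fmt.Errorf("CreateContainer in %s: %w", sb, err)
	}
	if err := rt.StartContainer(cid); err != nil {
		return fmt.Errorf("StartContainer %s: %w", cid, err)
	}
	return nil
}

type fakeRuntime struct{}

func (fakeRuntime) RunPodSandbox(pod string) (string, error)        { return "sandbox-" + pod, nil }
func (fakeRuntime) CreateContainer(sb, name string) (string, error) { return "ctr-" + name, nil }
func (fakeRuntime) StartContainer(id string) error                  { return nil }

func main() {
	if err := startPod(fakeRuntime{}, "coredns-7c65d6cfc9-75lgr", "coredns"); err != nil {
		fmt.Println("start failed:", err)
	}
}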
May 17 00:21:14.276261 systemd-logind[1446]: Removed session 12. May 17 00:21:14.627945 containerd[1460]: time="2025-05-17T00:21:14.627885841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:14.628923 containerd[1460]: time="2025-05-17T00:21:14.628874596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=47252431" May 17 00:21:14.630370 containerd[1460]: time="2025-05-17T00:21:14.630329747Z" level=info msg="ImageCreate event name:\"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:14.633007 containerd[1460]: time="2025-05-17T00:21:14.632943953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:14.633645 containerd[1460]: time="2025-05-17T00:21:14.633606046Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 4.112203745s" May 17 00:21:14.633713 containerd[1460]: time="2025-05-17T00:21:14.633643797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:21:14.635970 containerd[1460]: time="2025-05-17T00:21:14.635887648Z" level=info msg="CreateContainer within sandbox \"6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:21:14.648750 containerd[1460]: time="2025-05-17T00:21:14.648702427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:21:14.650359 containerd[1460]: time="2025-05-17T00:21:14.650322957Z" level=info msg="CreateContainer within sandbox \"6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ef57d9c619ea1e6156bbbb79ad369d3af305da7182a922e655ab647f14c75486\"" May 17 00:21:14.652295 containerd[1460]: time="2025-05-17T00:21:14.651155059Z" level=info msg="StartContainer for \"ef57d9c619ea1e6156bbbb79ad369d3af305da7182a922e655ab647f14c75486\"" May 17 00:21:14.692932 systemd[1]: Started cri-containerd-ef57d9c619ea1e6156bbbb79ad369d3af305da7182a922e655ab647f14c75486.scope - libcontainer container ef57d9c619ea1e6156bbbb79ad369d3af305da7182a922e655ab647f14c75486. 
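[Editor's note] containerd reports each pull's wall-clock cost (the "in 4.112203745s" below) by timing the whole resolve-and-unpack span; in plain Go the same measurement is just time.Since, as in this sketch, where pullImage is a hypothetical placeholder for the real work.

package main

import (
	"fmt"
	"time"
)

// pullImage is a hypothetical stand-in for a resolve-and-unpack image pull.
func pullImage(ref string) error {
	time.Sleep(50 * time.Millisecond) // simulate network + unpack work
	return nil
}

func main() {
	start := time.Now()
	if err := pullImage("ghcr.io/flatcar/calico/apiserver:v3.30.0"); err != nil {
		fmt.Println("pull failed:", err)
		return
	}
	// time.Duration prints in the same "4.112203745s" style seen in the log.
	fmt.Printf("pulled in %s\n", time.Since(start))
}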
May 17 00:21:14.854484 containerd[1460]: time="2025-05-17T00:21:14.854411503Z" level=info msg="StartContainer for \"ef57d9c619ea1e6156bbbb79ad369d3af305da7182a922e655ab647f14c75486\" returns successfully" May 17 00:21:14.858213 kubelet[2491]: E0517 00:21:14.858158 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:21:14.872749 kubelet[2491]: I0517 00:21:14.872678 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-75lgr" podStartSLOduration=40.872661862 podStartE2EDuration="40.872661862s" podCreationTimestamp="2025-05-17 00:20:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:21:14.872198061 +0000 UTC m=+46.724846530" watchObservedRunningTime="2025-05-17 00:21:14.872661862 +0000 UTC m=+46.725310331" May 17 00:21:14.957004 systemd-networkd[1364]: calie8bfa7df6f0: Gained IPv6LL May 17 00:21:15.149028 systemd-networkd[1364]: calia688318cc1b: Gained IPv6LL May 17 00:21:15.597670 systemd-networkd[1364]: calie7d0be5d8a6: Gained IPv6LL May 17 00:21:15.788911 systemd-networkd[1364]: cali3cc3b1be1a6: Gained IPv6LL May 17 00:21:15.870012 kubelet[2491]: E0517 00:21:15.869880 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:21:16.548951 kubelet[2491]: I0517 00:21:16.548811 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-dbc85d568-vmlk5" podStartSLOduration=29.435421722 podStartE2EDuration="33.548792246s" podCreationTimestamp="2025-05-17 00:20:43 +0000 UTC" firstStartedPulling="2025-05-17 00:21:10.521163653 +0000 UTC m=+42.373812122" lastFinishedPulling="2025-05-17 00:21:14.634534167 +0000 UTC m=+46.487182646" observedRunningTime="2025-05-17 00:21:14.922335903 +0000 UTC m=+46.774984372" watchObservedRunningTime="2025-05-17 00:21:16.548792246 +0000 UTC m=+48.401440715" May 17 00:21:16.809270 containerd[1460]: time="2025-05-17T00:21:16.809167272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:16.810154 containerd[1460]: time="2025-05-17T00:21:16.810107746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 17 00:21:16.811135 containerd[1460]: time="2025-05-17T00:21:16.811108734Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:16.815710 containerd[1460]: time="2025-05-17T00:21:16.813838507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:16.815710 containerd[1460]: time="2025-05-17T00:21:16.814550974Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 
2.165808071s" May 17 00:21:16.815710 containerd[1460]: time="2025-05-17T00:21:16.814588434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 17 00:21:16.816559 containerd[1460]: time="2025-05-17T00:21:16.816534416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 00:21:16.818308 containerd[1460]: time="2025-05-17T00:21:16.818276705Z" level=info msg="CreateContainer within sandbox \"5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:21:16.834403 containerd[1460]: time="2025-05-17T00:21:16.834375455Z" level=info msg="CreateContainer within sandbox \"5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3c7a290e7052156373e98cf4ac7c6c2f99ad4cd857ef86ba11d5dbe691bc643e\"" May 17 00:21:16.835048 containerd[1460]: time="2025-05-17T00:21:16.834931499Z" level=info msg="StartContainer for \"3c7a290e7052156373e98cf4ac7c6c2f99ad4cd857ef86ba11d5dbe691bc643e\"" May 17 00:21:16.867973 systemd[1]: Started cri-containerd-3c7a290e7052156373e98cf4ac7c6c2f99ad4cd857ef86ba11d5dbe691bc643e.scope - libcontainer container 3c7a290e7052156373e98cf4ac7c6c2f99ad4cd857ef86ba11d5dbe691bc643e. May 17 00:21:16.873175 kubelet[2491]: E0517 00:21:16.873141 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:21:16.896144 containerd[1460]: time="2025-05-17T00:21:16.896097961Z" level=info msg="StartContainer for \"3c7a290e7052156373e98cf4ac7c6c2f99ad4cd857ef86ba11d5dbe691bc643e\" returns successfully" May 17 00:21:17.195988 containerd[1460]: time="2025-05-17T00:21:17.195946660Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:17.196745 containerd[1460]: time="2025-05-17T00:21:17.196693652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=77" May 17 00:21:17.198599 containerd[1460]: time="2025-05-17T00:21:17.198564292Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"48745150\" in 381.997795ms" May 17 00:21:17.198599 containerd[1460]: time="2025-05-17T00:21:17.198593316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:5fa544b30bbe7e24458b21b80890f8834eebe8bcb99071f6caded1a39fc59082\"" May 17 00:21:17.199398 containerd[1460]: time="2025-05-17T00:21:17.199370425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 00:21:17.200307 containerd[1460]: time="2025-05-17T00:21:17.200283268Z" level=info msg="CreateContainer within sandbox \"ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 00:21:17.217582 containerd[1460]: time="2025-05-17T00:21:17.217535381Z" level=info msg="CreateContainer within sandbox 
\"ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f13366d33e22d6694d87ff1d0ff73ce14019650dcffa08de6d19686ee76ee675\"" May 17 00:21:17.218238 containerd[1460]: time="2025-05-17T00:21:17.218204427Z" level=info msg="StartContainer for \"f13366d33e22d6694d87ff1d0ff73ce14019650dcffa08de6d19686ee76ee675\"" May 17 00:21:17.247847 systemd[1]: Started cri-containerd-f13366d33e22d6694d87ff1d0ff73ce14019650dcffa08de6d19686ee76ee675.scope - libcontainer container f13366d33e22d6694d87ff1d0ff73ce14019650dcffa08de6d19686ee76ee675. May 17 00:21:17.292373 containerd[1460]: time="2025-05-17T00:21:17.292326567Z" level=info msg="StartContainer for \"f13366d33e22d6694d87ff1d0ff73ce14019650dcffa08de6d19686ee76ee675\" returns successfully" May 17 00:21:17.903353 kubelet[2491]: I0517 00:21:17.903259 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-dbc85d568-xxnxn" podStartSLOduration=31.339242725 podStartE2EDuration="34.903243978s" podCreationTimestamp="2025-05-17 00:20:43 +0000 UTC" firstStartedPulling="2025-05-17 00:21:13.635255268 +0000 UTC m=+45.487903737" lastFinishedPulling="2025-05-17 00:21:17.199256521 +0000 UTC m=+49.051904990" observedRunningTime="2025-05-17 00:21:17.902889353 +0000 UTC m=+49.755537822" watchObservedRunningTime="2025-05-17 00:21:17.903243978 +0000 UTC m=+49.755892447" May 17 00:21:18.895548 kubelet[2491]: I0517 00:21:18.895507 2491 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:21:19.285032 systemd[1]: Started sshd@12-10.0.0.98:22-10.0.0.1:50008.service - OpenSSH per-connection server daemon (10.0.0.1:50008). May 17 00:21:19.329309 sshd[5148]: Accepted publickey for core from 10.0.0.1 port 50008 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:21:19.330135 sshd[5148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:19.336662 systemd-logind[1446]: New session 13 of user core. May 17 00:21:19.343985 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 00:21:19.479445 sshd[5148]: pam_unix(sshd:session): session closed for user core May 17 00:21:19.482523 systemd[1]: sshd@12-10.0.0.98:22-10.0.0.1:50008.service: Deactivated successfully. May 17 00:21:19.484927 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:21:19.488482 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit. May 17 00:21:19.490132 systemd-logind[1446]: Removed session 13. 
May 17 00:21:19.913622 containerd[1460]: time="2025-05-17T00:21:19.913568803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:19.914449 containerd[1460]: time="2025-05-17T00:21:19.914410533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=51178512" May 17 00:21:19.915658 containerd[1460]: time="2025-05-17T00:21:19.915604753Z" level=info msg="ImageCreate event name:\"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:19.917680 containerd[1460]: time="2025-05-17T00:21:19.917651033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:19.918511 containerd[1460]: time="2025-05-17T00:21:19.918482925Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"52671183\" in 2.719083796s" May 17 00:21:19.918575 containerd[1460]: time="2025-05-17T00:21:19.918514805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:094053209304a3d20e6561c18d37ac2dc4c7fbb68c1579d9864c303edebffa50\"" May 17 00:21:19.919404 containerd[1460]: time="2025-05-17T00:21:19.919285079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:21:19.928017 containerd[1460]: time="2025-05-17T00:21:19.927962371Z" level=info msg="CreateContainer within sandbox \"4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 00:21:19.943807 containerd[1460]: time="2025-05-17T00:21:19.943760896Z" level=info msg="CreateContainer within sandbox \"4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"61769cc62497b1f3c016bd8e71406c8fae8a52067434d1e32f5c10afe724a29a\"" May 17 00:21:19.944269 containerd[1460]: time="2025-05-17T00:21:19.944245846Z" level=info msg="StartContainer for \"61769cc62497b1f3c016bd8e71406c8fae8a52067434d1e32f5c10afe724a29a\"" May 17 00:21:19.995910 systemd[1]: Started cri-containerd-61769cc62497b1f3c016bd8e71406c8fae8a52067434d1e32f5c10afe724a29a.scope - libcontainer container 61769cc62497b1f3c016bd8e71406c8fae8a52067434d1e32f5c10afe724a29a. 
May 17 00:21:20.036409 containerd[1460]: time="2025-05-17T00:21:20.036372300Z" level=info msg="StartContainer for \"61769cc62497b1f3c016bd8e71406c8fae8a52067434d1e32f5c10afe724a29a\" returns successfully" May 17 00:21:20.985122 kubelet[2491]: I0517 00:21:20.985037 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d9df4f78-vb7v8" podStartSLOduration=28.90263996 podStartE2EDuration="34.9844443s" podCreationTimestamp="2025-05-17 00:20:46 +0000 UTC" firstStartedPulling="2025-05-17 00:21:13.837317603 +0000 UTC m=+45.689966072" lastFinishedPulling="2025-05-17 00:21:19.919121943 +0000 UTC m=+51.771770412" observedRunningTime="2025-05-17 00:21:20.936594587 +0000 UTC m=+52.789243056" watchObservedRunningTime="2025-05-17 00:21:20.9844443 +0000 UTC m=+52.837092759" May 17 00:21:21.973974 containerd[1460]: time="2025-05-17T00:21:21.973905913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:21.975070 containerd[1460]: time="2025-05-17T00:21:21.975026737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 17 00:21:21.976292 containerd[1460]: time="2025-05-17T00:21:21.976247337Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:21.978541 containerd[1460]: time="2025-05-17T00:21:21.978507698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:21:21.979131 containerd[1460]: time="2025-05-17T00:21:21.979093918Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"16198294\" in 2.059781497s" May 17 00:21:21.979131 containerd[1460]: time="2025-05-17T00:21:21.979122702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 17 00:21:21.981371 containerd[1460]: time="2025-05-17T00:21:21.981328812Z" level=info msg="CreateContainer within sandbox \"5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:21:21.997390 containerd[1460]: time="2025-05-17T00:21:21.997350072Z" level=info msg="CreateContainer within sandbox \"5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4362ff333089016b501aa73a82aa69837a22bac842db0116bd091298e9da5047\"" May 17 00:21:21.997915 containerd[1460]: time="2025-05-17T00:21:21.997880758Z" level=info msg="StartContainer for \"4362ff333089016b501aa73a82aa69837a22bac842db0116bd091298e9da5047\"" May 17 00:21:22.029931 systemd[1]: Started cri-containerd-4362ff333089016b501aa73a82aa69837a22bac842db0116bd091298e9da5047.scope - libcontainer container 
4362ff333089016b501aa73a82aa69837a22bac842db0116bd091298e9da5047. May 17 00:21:22.060630 containerd[1460]: time="2025-05-17T00:21:22.060592462Z" level=info msg="StartContainer for \"4362ff333089016b501aa73a82aa69837a22bac842db0116bd091298e9da5047\" returns successfully" May 17 00:21:22.521202 kubelet[2491]: I0517 00:21:22.521172 2491 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:21:22.521202 kubelet[2491]: I0517 00:21:22.521204 2491 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:21:22.934698 kubelet[2491]: I0517 00:21:22.934574 2491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4zkcv" podStartSLOduration=27.994241513 podStartE2EDuration="36.934558607s" podCreationTimestamp="2025-05-17 00:20:46 +0000 UTC" firstStartedPulling="2025-05-17 00:21:13.039761542 +0000 UTC m=+44.892410011" lastFinishedPulling="2025-05-17 00:21:21.980078636 +0000 UTC m=+53.832727105" observedRunningTime="2025-05-17 00:21:22.933894951 +0000 UTC m=+54.786543450" watchObservedRunningTime="2025-05-17 00:21:22.934558607 +0000 UTC m=+54.787207086" May 17 00:21:24.235151 containerd[1460]: time="2025-05-17T00:21:24.235046714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:21:24.491258 systemd[1]: Started sshd@13-10.0.0.98:22-10.0.0.1:50018.service - OpenSSH per-connection server daemon (10.0.0.1:50018). May 17 00:21:24.499645 containerd[1460]: time="2025-05-17T00:21:24.499596034Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:21:24.500909 containerd[1460]: time="2025-05-17T00:21:24.500842503Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:21:24.500974 containerd[1460]: time="2025-05-17T00:21:24.500890623Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:21:24.501176 kubelet[2491]: E0517 00:21:24.501119 2491 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:21:24.501543 kubelet[2491]: E0517 00:21:24.501180 2491 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET 
request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:21:24.501543 kubelet[2491]: E0517 00:21:24.501295 2491 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9a6df141da824040bbade8336e585a48,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5w59d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54654dbf54-bv8kl_calico-system(2d7523c0-3762-409a-b3ff-19b0db89e578): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:21:24.503857 containerd[1460]: time="2025-05-17T00:21:24.503824347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:21:24.540320 sshd[5283]: Accepted publickey for core from 10.0.0.1 port 50018 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:21:24.542311 sshd[5283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:24.546477 systemd-logind[1446]: New session 14 of user core. May 17 00:21:24.555905 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 00:21:24.702790 sshd[5283]: pam_unix(sshd:session): session closed for user core May 17 00:21:24.706896 systemd[1]: sshd@13-10.0.0.98:22-10.0.0.1:50018.service: Deactivated successfully. May 17 00:21:24.709081 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:21:24.709691 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit. May 17 00:21:24.710509 systemd-logind[1446]: Removed session 14. 
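[Editor's note] Each of these pull failures bottoms out at the same step: the anonymous bearer-token request containerd issues to ghcr.io returns 403 Forbidden before any manifest or layer is fetched. Since the token endpoint is plain HTTPS, that step can be reproduced outside containerd for diagnosis; a sketch using the exact URL from the log:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Same anonymous token request containerd issues before pulling whisker.
	url := "https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io"
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(io.LimitReader(resp.Body, 512))
	// A 403 here matches the ErrImagePull chain in the log.
	fmt.Println("status:", resp.Status)
	fmt.Println("body:", string(body))
}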
May 17 00:21:24.736416 containerd[1460]: time="2025-05-17T00:21:24.736365052Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:21:24.737473 containerd[1460]: time="2025-05-17T00:21:24.737439708Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:21:24.737556 containerd[1460]: time="2025-05-17T00:21:24.737522934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:21:24.737811 kubelet[2491]: E0517 00:21:24.737732 2491 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:21:24.737920 kubelet[2491]: E0517 00:21:24.737819 2491 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:21:24.738028 kubelet[2491]: E0517 00:21:24.737974 2491 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5w59d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54654dbf54-bv8kl_calico-system(2d7523c0-3762-409a-b3ff-19b0db89e578): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:21:24.739204 kubelet[2491]: E0517 00:21:24.739156 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-54654dbf54-bv8kl" podUID="2d7523c0-3762-409a-b3ff-19b0db89e578" May 17 00:21:26.235510 containerd[1460]: time="2025-05-17T00:21:26.235234901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 
17 00:21:26.493082 containerd[1460]: time="2025-05-17T00:21:26.492919437Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:21:26.494396 containerd[1460]: time="2025-05-17T00:21:26.494348427Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:21:26.494396 containerd[1460]: time="2025-05-17T00:21:26.494371140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:21:26.494634 kubelet[2491]: E0517 00:21:26.494590 2491 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:21:26.495023 kubelet[2491]: E0517 00:21:26.494647 2491 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:21:26.495023 kubelet[2491]: E0517 00:21:26.494827 2491 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66zkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-qqvpp_calico-system(cb836777-e67d-4d21-a5e7-16ba9fc2ef39): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:21:26.496061 kubelet[2491]: E0517 00:21:26.496026 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-qqvpp" podUID="cb836777-e67d-4d21-a5e7-16ba9fc2ef39" May 17 00:21:28.224829 containerd[1460]: time="2025-05-17T00:21:28.224784525Z" level=info msg="StopPodSandbox for \"bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7\"" May 17 00:21:28.305731 containerd[1460]: 2025-05-17 00:21:28.266 [WARNING][5312] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dbc85d568--xxnxn-eth0", GenerateName:"calico-apiserver-dbc85d568-", Namespace:"calico-apiserver", SelfLink:"", UID:"dcbcb463-2034-44b0-98b4-0b1740b2500e", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dbc85d568", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651", Pod:"calico-apiserver-dbc85d568-xxnxn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia688318cc1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:28.305731 containerd[1460]: 2025-05-17 00:21:28.266 [INFO][5312] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" May 17 00:21:28.305731 containerd[1460]: 2025-05-17 00:21:28.266 [INFO][5312] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" iface="eth0" netns="" May 17 00:21:28.305731 containerd[1460]: 2025-05-17 00:21:28.266 [INFO][5312] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" May 17 00:21:28.305731 containerd[1460]: 2025-05-17 00:21:28.266 [INFO][5312] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" May 17 00:21:28.305731 containerd[1460]: 2025-05-17 00:21:28.292 [INFO][5323] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" HandleID="k8s-pod-network.bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" Workload="localhost-k8s-calico--apiserver--dbc85d568--xxnxn-eth0" May 17 00:21:28.305731 containerd[1460]: 2025-05-17 00:21:28.292 [INFO][5323] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:28.305731 containerd[1460]: 2025-05-17 00:21:28.292 [INFO][5323] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:28.305731 containerd[1460]: 2025-05-17 00:21:28.297 [WARNING][5323] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" HandleID="k8s-pod-network.bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" Workload="localhost-k8s-calico--apiserver--dbc85d568--xxnxn-eth0" May 17 00:21:28.305731 containerd[1460]: 2025-05-17 00:21:28.297 [INFO][5323] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" HandleID="k8s-pod-network.bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" Workload="localhost-k8s-calico--apiserver--dbc85d568--xxnxn-eth0" May 17 00:21:28.305731 containerd[1460]: 2025-05-17 00:21:28.299 [INFO][5323] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:28.305731 containerd[1460]: 2025-05-17 00:21:28.302 [INFO][5312] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" May 17 00:21:28.306349 containerd[1460]: time="2025-05-17T00:21:28.306301138Z" level=info msg="TearDown network for sandbox \"bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7\" successfully" May 17 00:21:28.306349 containerd[1460]: time="2025-05-17T00:21:28.306339089Z" level=info msg="StopPodSandbox for \"bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7\" returns successfully" May 17 00:21:28.307147 containerd[1460]: time="2025-05-17T00:21:28.307110486Z" level=info msg="RemovePodSandbox for \"bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7\"" May 17 00:21:28.309250 containerd[1460]: time="2025-05-17T00:21:28.309222028Z" level=info msg="Forcibly stopping sandbox \"bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7\"" May 17 00:21:28.374199 containerd[1460]: 2025-05-17 00:21:28.341 [WARNING][5341] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dbc85d568--xxnxn-eth0", GenerateName:"calico-apiserver-dbc85d568-", Namespace:"calico-apiserver", SelfLink:"", UID:"dcbcb463-2034-44b0-98b4-0b1740b2500e", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dbc85d568", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ab4b50a9f9a0eb09c8c14d26c00dde2d3b230f11a531a67187cdfba38bfb3651", Pod:"calico-apiserver-dbc85d568-xxnxn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia688318cc1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:28.374199 containerd[1460]: 2025-05-17 00:21:28.341 [INFO][5341] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" May 17 00:21:28.374199 containerd[1460]: 2025-05-17 00:21:28.341 [INFO][5341] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" iface="eth0" netns="" May 17 00:21:28.374199 containerd[1460]: 2025-05-17 00:21:28.341 [INFO][5341] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" May 17 00:21:28.374199 containerd[1460]: 2025-05-17 00:21:28.341 [INFO][5341] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" May 17 00:21:28.374199 containerd[1460]: 2025-05-17 00:21:28.360 [INFO][5349] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" HandleID="k8s-pod-network.bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" Workload="localhost-k8s-calico--apiserver--dbc85d568--xxnxn-eth0" May 17 00:21:28.374199 containerd[1460]: 2025-05-17 00:21:28.360 [INFO][5349] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:28.374199 containerd[1460]: 2025-05-17 00:21:28.360 [INFO][5349] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:28.374199 containerd[1460]: 2025-05-17 00:21:28.366 [WARNING][5349] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" HandleID="k8s-pod-network.bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" Workload="localhost-k8s-calico--apiserver--dbc85d568--xxnxn-eth0" May 17 00:21:28.374199 containerd[1460]: 2025-05-17 00:21:28.366 [INFO][5349] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" HandleID="k8s-pod-network.bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" Workload="localhost-k8s-calico--apiserver--dbc85d568--xxnxn-eth0" May 17 00:21:28.374199 containerd[1460]: 2025-05-17 00:21:28.368 [INFO][5349] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:28.374199 containerd[1460]: 2025-05-17 00:21:28.371 [INFO][5341] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7" May 17 00:21:28.374611 containerd[1460]: time="2025-05-17T00:21:28.374234828Z" level=info msg="TearDown network for sandbox \"bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7\" successfully" May 17 00:21:28.432641 containerd[1460]: time="2025-05-17T00:21:28.432560713Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:21:28.432641 containerd[1460]: time="2025-05-17T00:21:28.432636144Z" level=info msg="RemovePodSandbox \"bac5b7d3c140b06cd62b39550d6ec61821f5af784f3c58e9deeb6bc6eeb68eb7\" returns successfully" May 17 00:21:28.433164 containerd[1460]: time="2025-05-17T00:21:28.433138017Z" level=info msg="StopPodSandbox for \"4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94\"" May 17 00:21:28.502854 containerd[1460]: 2025-05-17 00:21:28.469 [WARNING][5367] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d9df4f78--vb7v8-eth0", GenerateName:"calico-kube-controllers-5d9df4f78-", Namespace:"calico-system", SelfLink:"", UID:"7b05a098-fd89-437b-9657-38da60548e2f", ResourceVersion:"1217", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d9df4f78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de", Pod:"calico-kube-controllers-5d9df4f78-vb7v8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie7d0be5d8a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:28.502854 containerd[1460]: 2025-05-17 00:21:28.469 [INFO][5367] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" May 17 00:21:28.502854 containerd[1460]: 2025-05-17 00:21:28.469 [INFO][5367] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" iface="eth0" netns="" May 17 00:21:28.502854 containerd[1460]: 2025-05-17 00:21:28.469 [INFO][5367] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" May 17 00:21:28.502854 containerd[1460]: 2025-05-17 00:21:28.469 [INFO][5367] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" May 17 00:21:28.502854 containerd[1460]: 2025-05-17 00:21:28.489 [INFO][5376] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" HandleID="k8s-pod-network.4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" Workload="localhost-k8s-calico--kube--controllers--5d9df4f78--vb7v8-eth0" May 17 00:21:28.502854 containerd[1460]: 2025-05-17 00:21:28.489 [INFO][5376] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:28.502854 containerd[1460]: 2025-05-17 00:21:28.489 [INFO][5376] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:28.502854 containerd[1460]: 2025-05-17 00:21:28.495 [WARNING][5376] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" HandleID="k8s-pod-network.4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" Workload="localhost-k8s-calico--kube--controllers--5d9df4f78--vb7v8-eth0" May 17 00:21:28.502854 containerd[1460]: 2025-05-17 00:21:28.495 [INFO][5376] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" HandleID="k8s-pod-network.4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" Workload="localhost-k8s-calico--kube--controllers--5d9df4f78--vb7v8-eth0" May 17 00:21:28.502854 containerd[1460]: 2025-05-17 00:21:28.496 [INFO][5376] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:28.502854 containerd[1460]: 2025-05-17 00:21:28.499 [INFO][5367] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" May 17 00:21:28.502854 containerd[1460]: time="2025-05-17T00:21:28.502802503Z" level=info msg="TearDown network for sandbox \"4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94\" successfully" May 17 00:21:28.502854 containerd[1460]: time="2025-05-17T00:21:28.502831197Z" level=info msg="StopPodSandbox for \"4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94\" returns successfully" May 17 00:21:28.504149 containerd[1460]: time="2025-05-17T00:21:28.503468503Z" level=info msg="RemovePodSandbox for \"4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94\"" May 17 00:21:28.504149 containerd[1460]: time="2025-05-17T00:21:28.503514269Z" level=info msg="Forcibly stopping sandbox \"4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94\"" May 17 00:21:28.636501 containerd[1460]: 2025-05-17 00:21:28.539 [WARNING][5394] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5d9df4f78--vb7v8-eth0", GenerateName:"calico-kube-controllers-5d9df4f78-", Namespace:"calico-system", SelfLink:"", UID:"7b05a098-fd89-437b-9657-38da60548e2f", ResourceVersion:"1217", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d9df4f78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4fa3d6e7a81aa9c43d4161b34e8493cff6b63c66a2d5b3d7f9023a68d635f6de", Pod:"calico-kube-controllers-5d9df4f78-vb7v8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie7d0be5d8a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:28.636501 containerd[1460]: 2025-05-17 00:21:28.540 [INFO][5394] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" May 17 00:21:28.636501 containerd[1460]: 2025-05-17 00:21:28.540 [INFO][5394] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" iface="eth0" netns="" May 17 00:21:28.636501 containerd[1460]: 2025-05-17 00:21:28.540 [INFO][5394] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" May 17 00:21:28.636501 containerd[1460]: 2025-05-17 00:21:28.540 [INFO][5394] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" May 17 00:21:28.636501 containerd[1460]: 2025-05-17 00:21:28.562 [INFO][5402] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" HandleID="k8s-pod-network.4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" Workload="localhost-k8s-calico--kube--controllers--5d9df4f78--vb7v8-eth0" May 17 00:21:28.636501 containerd[1460]: 2025-05-17 00:21:28.562 [INFO][5402] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:28.636501 containerd[1460]: 2025-05-17 00:21:28.562 [INFO][5402] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:28.636501 containerd[1460]: 2025-05-17 00:21:28.628 [WARNING][5402] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" HandleID="k8s-pod-network.4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" Workload="localhost-k8s-calico--kube--controllers--5d9df4f78--vb7v8-eth0" May 17 00:21:28.636501 containerd[1460]: 2025-05-17 00:21:28.628 [INFO][5402] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" HandleID="k8s-pod-network.4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" Workload="localhost-k8s-calico--kube--controllers--5d9df4f78--vb7v8-eth0" May 17 00:21:28.636501 containerd[1460]: 2025-05-17 00:21:28.630 [INFO][5402] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:28.636501 containerd[1460]: 2025-05-17 00:21:28.633 [INFO][5394] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94" May 17 00:21:28.636983 containerd[1460]: time="2025-05-17T00:21:28.636541991Z" level=info msg="TearDown network for sandbox \"4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94\" successfully" May 17 00:21:28.641077 containerd[1460]: time="2025-05-17T00:21:28.641031875Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:21:28.641129 containerd[1460]: time="2025-05-17T00:21:28.641096626Z" level=info msg="RemovePodSandbox \"4bd2fc6f9e2d91d0cffee661023fa1c8718850bf98aa265eb5f1563b10e81a94\" returns successfully" May 17 00:21:28.641605 containerd[1460]: time="2025-05-17T00:21:28.641556699Z" level=info msg="StopPodSandbox for \"423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a\"" May 17 00:21:28.709968 containerd[1460]: 2025-05-17 00:21:28.675 [WARNING][5420] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--75lgr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c5389095-df96-41b4-8890-9e655dbc39b6", ResourceVersion:"1154", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e", Pod:"coredns-7c65d6cfc9-75lgr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3cc3b1be1a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:28.709968 containerd[1460]: 2025-05-17 00:21:28.676 [INFO][5420] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" May 17 00:21:28.709968 containerd[1460]: 2025-05-17 00:21:28.676 [INFO][5420] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" iface="eth0" netns="" May 17 00:21:28.709968 containerd[1460]: 2025-05-17 00:21:28.676 [INFO][5420] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" May 17 00:21:28.709968 containerd[1460]: 2025-05-17 00:21:28.676 [INFO][5420] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" May 17 00:21:28.709968 containerd[1460]: 2025-05-17 00:21:28.697 [INFO][5429] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" HandleID="k8s-pod-network.423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" Workload="localhost-k8s-coredns--7c65d6cfc9--75lgr-eth0" May 17 00:21:28.709968 containerd[1460]: 2025-05-17 00:21:28.697 [INFO][5429] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:28.709968 containerd[1460]: 2025-05-17 00:21:28.697 [INFO][5429] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:21:28.709968 containerd[1460]: 2025-05-17 00:21:28.702 [WARNING][5429] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" HandleID="k8s-pod-network.423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" Workload="localhost-k8s-coredns--7c65d6cfc9--75lgr-eth0" May 17 00:21:28.709968 containerd[1460]: 2025-05-17 00:21:28.702 [INFO][5429] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" HandleID="k8s-pod-network.423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" Workload="localhost-k8s-coredns--7c65d6cfc9--75lgr-eth0" May 17 00:21:28.709968 containerd[1460]: 2025-05-17 00:21:28.704 [INFO][5429] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:28.709968 containerd[1460]: 2025-05-17 00:21:28.707 [INFO][5420] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" May 17 00:21:28.710400 containerd[1460]: time="2025-05-17T00:21:28.709994817Z" level=info msg="TearDown network for sandbox \"423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a\" successfully" May 17 00:21:28.710400 containerd[1460]: time="2025-05-17T00:21:28.710019112Z" level=info msg="StopPodSandbox for \"423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a\" returns successfully" May 17 00:21:28.710631 containerd[1460]: time="2025-05-17T00:21:28.710575987Z" level=info msg="RemovePodSandbox for \"423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a\"" May 17 00:21:28.710675 containerd[1460]: time="2025-05-17T00:21:28.710630880Z" level=info msg="Forcibly stopping sandbox \"423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a\"" May 17 00:21:28.778055 containerd[1460]: 2025-05-17 00:21:28.744 [WARNING][5448] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--75lgr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c5389095-df96-41b4-8890-9e655dbc39b6", ResourceVersion:"1154", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3702ff6cccb2e4ba0d0b357b91a10d80edc94734a13b8331a2659d63a784334e", Pod:"coredns-7c65d6cfc9-75lgr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3cc3b1be1a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:28.778055 containerd[1460]: 2025-05-17 00:21:28.745 [INFO][5448] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" May 17 00:21:28.778055 containerd[1460]: 2025-05-17 00:21:28.745 [INFO][5448] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" iface="eth0" netns="" May 17 00:21:28.778055 containerd[1460]: 2025-05-17 00:21:28.745 [INFO][5448] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" May 17 00:21:28.778055 containerd[1460]: 2025-05-17 00:21:28.745 [INFO][5448] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" May 17 00:21:28.778055 containerd[1460]: 2025-05-17 00:21:28.764 [INFO][5457] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" HandleID="k8s-pod-network.423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" Workload="localhost-k8s-coredns--7c65d6cfc9--75lgr-eth0" May 17 00:21:28.778055 containerd[1460]: 2025-05-17 00:21:28.764 [INFO][5457] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:28.778055 containerd[1460]: 2025-05-17 00:21:28.764 [INFO][5457] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:21:28.778055 containerd[1460]: 2025-05-17 00:21:28.770 [WARNING][5457] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" HandleID="k8s-pod-network.423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" Workload="localhost-k8s-coredns--7c65d6cfc9--75lgr-eth0" May 17 00:21:28.778055 containerd[1460]: 2025-05-17 00:21:28.770 [INFO][5457] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" HandleID="k8s-pod-network.423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" Workload="localhost-k8s-coredns--7c65d6cfc9--75lgr-eth0" May 17 00:21:28.778055 containerd[1460]: 2025-05-17 00:21:28.772 [INFO][5457] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:28.778055 containerd[1460]: 2025-05-17 00:21:28.774 [INFO][5448] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a" May 17 00:21:28.778055 containerd[1460]: time="2025-05-17T00:21:28.778000111Z" level=info msg="TearDown network for sandbox \"423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a\" successfully" May 17 00:21:28.791876 containerd[1460]: time="2025-05-17T00:21:28.791828795Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:21:28.791876 containerd[1460]: time="2025-05-17T00:21:28.791889328Z" level=info msg="RemovePodSandbox \"423a6df39dbad4fa1c5026e12de48d90eac210cb41d9fda577d181404f13003a\" returns successfully" May 17 00:21:28.792307 containerd[1460]: time="2025-05-17T00:21:28.792280342Z" level=info msg="StopPodSandbox for \"8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824\"" May 17 00:21:28.860471 containerd[1460]: 2025-05-17 00:21:28.825 [WARNING][5475] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--f92hb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8b2899f4-79bc-4fef-b1ad-48d139bf5859", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0", Pod:"coredns-7c65d6cfc9-f92hb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95fbd00af49", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:28.860471 containerd[1460]: 2025-05-17 00:21:28.825 [INFO][5475] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" May 17 00:21:28.860471 containerd[1460]: 2025-05-17 00:21:28.825 [INFO][5475] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" iface="eth0" netns="" May 17 00:21:28.860471 containerd[1460]: 2025-05-17 00:21:28.825 [INFO][5475] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" May 17 00:21:28.860471 containerd[1460]: 2025-05-17 00:21:28.825 [INFO][5475] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" May 17 00:21:28.860471 containerd[1460]: 2025-05-17 00:21:28.847 [INFO][5484] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" HandleID="k8s-pod-network.8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" Workload="localhost-k8s-coredns--7c65d6cfc9--f92hb-eth0" May 17 00:21:28.860471 containerd[1460]: 2025-05-17 00:21:28.847 [INFO][5484] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:28.860471 containerd[1460]: 2025-05-17 00:21:28.847 [INFO][5484] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:21:28.860471 containerd[1460]: 2025-05-17 00:21:28.853 [WARNING][5484] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" HandleID="k8s-pod-network.8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" Workload="localhost-k8s-coredns--7c65d6cfc9--f92hb-eth0" May 17 00:21:28.860471 containerd[1460]: 2025-05-17 00:21:28.853 [INFO][5484] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" HandleID="k8s-pod-network.8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" Workload="localhost-k8s-coredns--7c65d6cfc9--f92hb-eth0" May 17 00:21:28.860471 containerd[1460]: 2025-05-17 00:21:28.854 [INFO][5484] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:28.860471 containerd[1460]: 2025-05-17 00:21:28.857 [INFO][5475] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" May 17 00:21:28.860958 containerd[1460]: time="2025-05-17T00:21:28.860508464Z" level=info msg="TearDown network for sandbox \"8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824\" successfully" May 17 00:21:28.860958 containerd[1460]: time="2025-05-17T00:21:28.860534453Z" level=info msg="StopPodSandbox for \"8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824\" returns successfully" May 17 00:21:28.861140 containerd[1460]: time="2025-05-17T00:21:28.861112989Z" level=info msg="RemovePodSandbox for \"8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824\"" May 17 00:21:28.861140 containerd[1460]: time="2025-05-17T00:21:28.861145801Z" level=info msg="Forcibly stopping sandbox \"8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824\"" May 17 00:21:28.923470 containerd[1460]: 2025-05-17 00:21:28.893 [WARNING][5502] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--f92hb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8b2899f4-79bc-4fef-b1ad-48d139bf5859", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"71056d33bb2e175cdd2e369ef42a716f820f236b17beffb7b90abcf8671a50d0", Pod:"coredns-7c65d6cfc9-f92hb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95fbd00af49", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:28.923470 containerd[1460]: 2025-05-17 00:21:28.893 [INFO][5502] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" May 17 00:21:28.923470 containerd[1460]: 2025-05-17 00:21:28.893 [INFO][5502] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" iface="eth0" netns="" May 17 00:21:28.923470 containerd[1460]: 2025-05-17 00:21:28.893 [INFO][5502] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" May 17 00:21:28.923470 containerd[1460]: 2025-05-17 00:21:28.893 [INFO][5502] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" May 17 00:21:28.923470 containerd[1460]: 2025-05-17 00:21:28.911 [INFO][5511] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" HandleID="k8s-pod-network.8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" Workload="localhost-k8s-coredns--7c65d6cfc9--f92hb-eth0" May 17 00:21:28.923470 containerd[1460]: 2025-05-17 00:21:28.912 [INFO][5511] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:28.923470 containerd[1460]: 2025-05-17 00:21:28.912 [INFO][5511] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:21:28.923470 containerd[1460]: 2025-05-17 00:21:28.917 [WARNING][5511] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" HandleID="k8s-pod-network.8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" Workload="localhost-k8s-coredns--7c65d6cfc9--f92hb-eth0" May 17 00:21:28.923470 containerd[1460]: 2025-05-17 00:21:28.917 [INFO][5511] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" HandleID="k8s-pod-network.8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" Workload="localhost-k8s-coredns--7c65d6cfc9--f92hb-eth0" May 17 00:21:28.923470 containerd[1460]: 2025-05-17 00:21:28.918 [INFO][5511] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:28.923470 containerd[1460]: 2025-05-17 00:21:28.920 [INFO][5502] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824" May 17 00:21:28.923892 containerd[1460]: time="2025-05-17T00:21:28.923516755Z" level=info msg="TearDown network for sandbox \"8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824\" successfully" May 17 00:21:28.928498 containerd[1460]: time="2025-05-17T00:21:28.928468465Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:21:28.928548 containerd[1460]: time="2025-05-17T00:21:28.928518178Z" level=info msg="RemovePodSandbox \"8a420be6b499a0ebb00eab7aff15834687ae4c293339df685fc0d82327994824\" returns successfully" May 17 00:21:28.929102 containerd[1460]: time="2025-05-17T00:21:28.928922386Z" level=info msg="StopPodSandbox for \"a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3\"" May 17 00:21:28.990868 containerd[1460]: 2025-05-17 00:21:28.959 [WARNING][5529] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" WorkloadEndpoint="localhost-k8s-whisker--579785955b--6wshv-eth0" May 17 00:21:28.990868 containerd[1460]: 2025-05-17 00:21:28.959 [INFO][5529] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" May 17 00:21:28.990868 containerd[1460]: 2025-05-17 00:21:28.959 [INFO][5529] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" iface="eth0" netns="" May 17 00:21:28.990868 containerd[1460]: 2025-05-17 00:21:28.959 [INFO][5529] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" May 17 00:21:28.990868 containerd[1460]: 2025-05-17 00:21:28.959 [INFO][5529] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" May 17 00:21:28.990868 containerd[1460]: 2025-05-17 00:21:28.979 [INFO][5537] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" HandleID="k8s-pod-network.a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" Workload="localhost-k8s-whisker--579785955b--6wshv-eth0" May 17 00:21:28.990868 containerd[1460]: 2025-05-17 00:21:28.979 [INFO][5537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:28.990868 containerd[1460]: 2025-05-17 00:21:28.979 [INFO][5537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:28.990868 containerd[1460]: 2025-05-17 00:21:28.984 [WARNING][5537] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" HandleID="k8s-pod-network.a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" Workload="localhost-k8s-whisker--579785955b--6wshv-eth0" May 17 00:21:28.990868 containerd[1460]: 2025-05-17 00:21:28.984 [INFO][5537] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" HandleID="k8s-pod-network.a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" Workload="localhost-k8s-whisker--579785955b--6wshv-eth0" May 17 00:21:28.990868 containerd[1460]: 2025-05-17 00:21:28.985 [INFO][5537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:28.990868 containerd[1460]: 2025-05-17 00:21:28.988 [INFO][5529] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" May 17 00:21:28.991388 containerd[1460]: time="2025-05-17T00:21:28.990904681Z" level=info msg="TearDown network for sandbox \"a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3\" successfully" May 17 00:21:28.991388 containerd[1460]: time="2025-05-17T00:21:28.990928596Z" level=info msg="StopPodSandbox for \"a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3\" returns successfully" May 17 00:21:28.991388 containerd[1460]: time="2025-05-17T00:21:28.991376877Z" level=info msg="RemovePodSandbox for \"a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3\"" May 17 00:21:28.991458 containerd[1460]: time="2025-05-17T00:21:28.991407665Z" level=info msg="Forcibly stopping sandbox \"a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3\"" May 17 00:21:29.058982 containerd[1460]: 2025-05-17 00:21:29.025 [WARNING][5554] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" WorkloadEndpoint="localhost-k8s-whisker--579785955b--6wshv-eth0" May 17 00:21:29.058982 containerd[1460]: 2025-05-17 00:21:29.026 [INFO][5554] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" May 17 00:21:29.058982 containerd[1460]: 2025-05-17 00:21:29.026 [INFO][5554] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" iface="eth0" netns="" May 17 00:21:29.058982 containerd[1460]: 2025-05-17 00:21:29.026 [INFO][5554] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" May 17 00:21:29.058982 containerd[1460]: 2025-05-17 00:21:29.026 [INFO][5554] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" May 17 00:21:29.058982 containerd[1460]: 2025-05-17 00:21:29.047 [INFO][5573] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" HandleID="k8s-pod-network.a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" Workload="localhost-k8s-whisker--579785955b--6wshv-eth0" May 17 00:21:29.058982 containerd[1460]: 2025-05-17 00:21:29.047 [INFO][5573] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:29.058982 containerd[1460]: 2025-05-17 00:21:29.047 [INFO][5573] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:29.058982 containerd[1460]: 2025-05-17 00:21:29.052 [WARNING][5573] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" HandleID="k8s-pod-network.a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" Workload="localhost-k8s-whisker--579785955b--6wshv-eth0" May 17 00:21:29.058982 containerd[1460]: 2025-05-17 00:21:29.052 [INFO][5573] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" HandleID="k8s-pod-network.a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" Workload="localhost-k8s-whisker--579785955b--6wshv-eth0" May 17 00:21:29.058982 containerd[1460]: 2025-05-17 00:21:29.053 [INFO][5573] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:29.058982 containerd[1460]: 2025-05-17 00:21:29.055 [INFO][5554] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3" May 17 00:21:29.058982 containerd[1460]: time="2025-05-17T00:21:29.058959717Z" level=info msg="TearDown network for sandbox \"a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3\" successfully" May 17 00:21:29.086105 containerd[1460]: time="2025-05-17T00:21:29.086044570Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:21:29.086105 containerd[1460]: time="2025-05-17T00:21:29.086121194Z" level=info msg="RemovePodSandbox \"a69b059a354bcc07ebe6970e612a846034ceee9099eb0519803860706dc3c3f3\" returns successfully" May 17 00:21:29.086582 containerd[1460]: time="2025-05-17T00:21:29.086559907Z" level=info msg="StopPodSandbox for \"47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c\"" May 17 00:21:29.156009 containerd[1460]: 2025-05-17 00:21:29.122 [WARNING][5603] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4zkcv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7495b3bd-a626-4600-9f8e-cc5963e6df5a", ResourceVersion:"1230", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94", Pod:"csi-node-driver-4zkcv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie8bfa7df6f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:29.156009 containerd[1460]: 2025-05-17 00:21:29.122 [INFO][5603] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" May 17 00:21:29.156009 containerd[1460]: 2025-05-17 00:21:29.122 [INFO][5603] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" iface="eth0" netns="" May 17 00:21:29.156009 containerd[1460]: 2025-05-17 00:21:29.122 [INFO][5603] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" May 17 00:21:29.156009 containerd[1460]: 2025-05-17 00:21:29.122 [INFO][5603] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" May 17 00:21:29.156009 containerd[1460]: 2025-05-17 00:21:29.144 [INFO][5612] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" HandleID="k8s-pod-network.47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" Workload="localhost-k8s-csi--node--driver--4zkcv-eth0" May 17 00:21:29.156009 containerd[1460]: 2025-05-17 00:21:29.145 [INFO][5612] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:29.156009 containerd[1460]: 2025-05-17 00:21:29.145 [INFO][5612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:29.156009 containerd[1460]: 2025-05-17 00:21:29.149 [WARNING][5612] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" HandleID="k8s-pod-network.47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" Workload="localhost-k8s-csi--node--driver--4zkcv-eth0" May 17 00:21:29.156009 containerd[1460]: 2025-05-17 00:21:29.149 [INFO][5612] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" HandleID="k8s-pod-network.47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" Workload="localhost-k8s-csi--node--driver--4zkcv-eth0" May 17 00:21:29.156009 containerd[1460]: 2025-05-17 00:21:29.150 [INFO][5612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:29.156009 containerd[1460]: 2025-05-17 00:21:29.153 [INFO][5603] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" May 17 00:21:29.156521 containerd[1460]: time="2025-05-17T00:21:29.156046569Z" level=info msg="TearDown network for sandbox \"47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c\" successfully" May 17 00:21:29.156521 containerd[1460]: time="2025-05-17T00:21:29.156070855Z" level=info msg="StopPodSandbox for \"47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c\" returns successfully" May 17 00:21:29.156585 containerd[1460]: time="2025-05-17T00:21:29.156535256Z" level=info msg="RemovePodSandbox for \"47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c\"" May 17 00:21:29.156585 containerd[1460]: time="2025-05-17T00:21:29.156562346Z" level=info msg="Forcibly stopping sandbox \"47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c\"" May 17 00:21:29.220719 containerd[1460]: 2025-05-17 00:21:29.188 [WARNING][5629] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4zkcv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7495b3bd-a626-4600-9f8e-cc5963e6df5a", ResourceVersion:"1230", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5770c75d83afe11b8f56cbac288218a08a96361ff8bf1eb291c501ad312c1d94", Pod:"csi-node-driver-4zkcv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie8bfa7df6f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:29.220719 containerd[1460]: 2025-05-17 00:21:29.188 [INFO][5629] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" May 17 00:21:29.220719 containerd[1460]: 2025-05-17 00:21:29.188 [INFO][5629] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" iface="eth0" netns="" May 17 00:21:29.220719 containerd[1460]: 2025-05-17 00:21:29.188 [INFO][5629] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" May 17 00:21:29.220719 containerd[1460]: 2025-05-17 00:21:29.188 [INFO][5629] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" May 17 00:21:29.220719 containerd[1460]: 2025-05-17 00:21:29.208 [INFO][5638] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" HandleID="k8s-pod-network.47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" Workload="localhost-k8s-csi--node--driver--4zkcv-eth0" May 17 00:21:29.220719 containerd[1460]: 2025-05-17 00:21:29.208 [INFO][5638] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:29.220719 containerd[1460]: 2025-05-17 00:21:29.208 [INFO][5638] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:29.220719 containerd[1460]: 2025-05-17 00:21:29.214 [WARNING][5638] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" HandleID="k8s-pod-network.47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" Workload="localhost-k8s-csi--node--driver--4zkcv-eth0" May 17 00:21:29.220719 containerd[1460]: 2025-05-17 00:21:29.214 [INFO][5638] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" HandleID="k8s-pod-network.47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" Workload="localhost-k8s-csi--node--driver--4zkcv-eth0" May 17 00:21:29.220719 containerd[1460]: 2025-05-17 00:21:29.215 [INFO][5638] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:29.220719 containerd[1460]: 2025-05-17 00:21:29.217 [INFO][5629] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c" May 17 00:21:29.221248 containerd[1460]: time="2025-05-17T00:21:29.220782358Z" level=info msg="TearDown network for sandbox \"47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c\" successfully" May 17 00:21:29.224888 containerd[1460]: time="2025-05-17T00:21:29.224859167Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:21:29.225191 containerd[1460]: time="2025-05-17T00:21:29.224908269Z" level=info msg="RemovePodSandbox \"47d8294dbab4ff0ef0c36d2dda749771d57f310ec49b6cefeec29f5b0398e51c\" returns successfully" May 17 00:21:29.225317 containerd[1460]: time="2025-05-17T00:21:29.225296257Z" level=info msg="StopPodSandbox for \"ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486\"" May 17 00:21:29.293330 containerd[1460]: 2025-05-17 00:21:29.256 [WARNING][5656] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dbc85d568--vmlk5-eth0", GenerateName:"calico-apiserver-dbc85d568-", Namespace:"calico-apiserver", SelfLink:"", UID:"592a7817-1a54-43e6-91e2-b61a4e065de1", ResourceVersion:"1167", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dbc85d568", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833", Pod:"calico-apiserver-dbc85d568-vmlk5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali845c7e16501", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:29.293330 containerd[1460]: 2025-05-17 00:21:29.256 [INFO][5656] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" May 17 00:21:29.293330 containerd[1460]: 2025-05-17 00:21:29.256 [INFO][5656] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" iface="eth0" netns="" May 17 00:21:29.293330 containerd[1460]: 2025-05-17 00:21:29.256 [INFO][5656] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" May 17 00:21:29.293330 containerd[1460]: 2025-05-17 00:21:29.256 [INFO][5656] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" May 17 00:21:29.293330 containerd[1460]: 2025-05-17 00:21:29.281 [INFO][5666] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" HandleID="k8s-pod-network.ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" Workload="localhost-k8s-calico--apiserver--dbc85d568--vmlk5-eth0" May 17 00:21:29.293330 containerd[1460]: 2025-05-17 00:21:29.281 [INFO][5666] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:29.293330 containerd[1460]: 2025-05-17 00:21:29.281 [INFO][5666] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:29.293330 containerd[1460]: 2025-05-17 00:21:29.286 [WARNING][5666] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" HandleID="k8s-pod-network.ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" Workload="localhost-k8s-calico--apiserver--dbc85d568--vmlk5-eth0" May 17 00:21:29.293330 containerd[1460]: 2025-05-17 00:21:29.286 [INFO][5666] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" HandleID="k8s-pod-network.ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" Workload="localhost-k8s-calico--apiserver--dbc85d568--vmlk5-eth0" May 17 00:21:29.293330 containerd[1460]: 2025-05-17 00:21:29.288 [INFO][5666] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:29.293330 containerd[1460]: 2025-05-17 00:21:29.290 [INFO][5656] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" May 17 00:21:29.293826 containerd[1460]: time="2025-05-17T00:21:29.293373615Z" level=info msg="TearDown network for sandbox \"ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486\" successfully" May 17 00:21:29.293826 containerd[1460]: time="2025-05-17T00:21:29.293405034Z" level=info msg="StopPodSandbox for \"ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486\" returns successfully" May 17 00:21:29.293990 containerd[1460]: time="2025-05-17T00:21:29.293953854Z" level=info msg="RemovePodSandbox for \"ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486\"" May 17 00:21:29.293990 containerd[1460]: time="2025-05-17T00:21:29.293983980Z" level=info msg="Forcibly stopping sandbox \"ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486\"" May 17 00:21:29.357146 containerd[1460]: 2025-05-17 00:21:29.324 [WARNING][5683] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--dbc85d568--vmlk5-eth0", GenerateName:"calico-apiserver-dbc85d568-", Namespace:"calico-apiserver", SelfLink:"", UID:"592a7817-1a54-43e6-91e2-b61a4e065de1", ResourceVersion:"1167", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dbc85d568", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6b7ea4369232e6737135d11a8068c920484f7215ec349da35816c1e1061cd833", Pod:"calico-apiserver-dbc85d568-vmlk5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali845c7e16501", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:29.357146 containerd[1460]: 2025-05-17 00:21:29.324 [INFO][5683] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" May 17 00:21:29.357146 containerd[1460]: 2025-05-17 00:21:29.324 [INFO][5683] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" iface="eth0" netns="" May 17 00:21:29.357146 containerd[1460]: 2025-05-17 00:21:29.324 [INFO][5683] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" May 17 00:21:29.357146 containerd[1460]: 2025-05-17 00:21:29.324 [INFO][5683] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" May 17 00:21:29.357146 containerd[1460]: 2025-05-17 00:21:29.346 [INFO][5692] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" HandleID="k8s-pod-network.ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" Workload="localhost-k8s-calico--apiserver--dbc85d568--vmlk5-eth0" May 17 00:21:29.357146 containerd[1460]: 2025-05-17 00:21:29.346 [INFO][5692] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:29.357146 containerd[1460]: 2025-05-17 00:21:29.346 [INFO][5692] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:29.357146 containerd[1460]: 2025-05-17 00:21:29.350 [WARNING][5692] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" HandleID="k8s-pod-network.ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" Workload="localhost-k8s-calico--apiserver--dbc85d568--vmlk5-eth0" May 17 00:21:29.357146 containerd[1460]: 2025-05-17 00:21:29.350 [INFO][5692] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" HandleID="k8s-pod-network.ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" Workload="localhost-k8s-calico--apiserver--dbc85d568--vmlk5-eth0" May 17 00:21:29.357146 containerd[1460]: 2025-05-17 00:21:29.351 [INFO][5692] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:29.357146 containerd[1460]: 2025-05-17 00:21:29.354 [INFO][5683] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486" May 17 00:21:29.357677 containerd[1460]: time="2025-05-17T00:21:29.357648640Z" level=info msg="TearDown network for sandbox \"ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486\" successfully" May 17 00:21:29.361771 containerd[1460]: time="2025-05-17T00:21:29.361741798Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:21:29.361879 containerd[1460]: time="2025-05-17T00:21:29.361855722Z" level=info msg="RemovePodSandbox \"ef265c3863c03851102bae3f71f654ef482f3d406dcf6fc467a317bfa299b486\" returns successfully" May 17 00:21:29.362311 containerd[1460]: time="2025-05-17T00:21:29.362283214Z" level=info msg="StopPodSandbox for \"21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13\"" May 17 00:21:29.424933 containerd[1460]: 2025-05-17 00:21:29.393 [WARNING][5710] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--qqvpp-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"cb836777-e67d-4d21-a5e7-16ba9fc2ef39", ResourceVersion:"1246", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa", Pod:"goldmane-8f77d7b6c-qqvpp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliaf2dcb77d97", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:29.424933 containerd[1460]: 2025-05-17 00:21:29.393 [INFO][5710] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" May 17 00:21:29.424933 containerd[1460]: 2025-05-17 00:21:29.393 [INFO][5710] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" iface="eth0" netns="" May 17 00:21:29.424933 containerd[1460]: 2025-05-17 00:21:29.394 [INFO][5710] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" May 17 00:21:29.424933 containerd[1460]: 2025-05-17 00:21:29.394 [INFO][5710] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" May 17 00:21:29.424933 containerd[1460]: 2025-05-17 00:21:29.413 [INFO][5720] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" HandleID="k8s-pod-network.21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" Workload="localhost-k8s-goldmane--8f77d7b6c--qqvpp-eth0" May 17 00:21:29.424933 containerd[1460]: 2025-05-17 00:21:29.413 [INFO][5720] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:29.424933 containerd[1460]: 2025-05-17 00:21:29.413 [INFO][5720] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:29.424933 containerd[1460]: 2025-05-17 00:21:29.418 [WARNING][5720] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" HandleID="k8s-pod-network.21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" Workload="localhost-k8s-goldmane--8f77d7b6c--qqvpp-eth0" May 17 00:21:29.424933 containerd[1460]: 2025-05-17 00:21:29.418 [INFO][5720] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" HandleID="k8s-pod-network.21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" Workload="localhost-k8s-goldmane--8f77d7b6c--qqvpp-eth0" May 17 00:21:29.424933 containerd[1460]: 2025-05-17 00:21:29.419 [INFO][5720] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:29.424933 containerd[1460]: 2025-05-17 00:21:29.422 [INFO][5710] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" May 17 00:21:29.425464 containerd[1460]: time="2025-05-17T00:21:29.424977333Z" level=info msg="TearDown network for sandbox \"21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13\" successfully" May 17 00:21:29.425464 containerd[1460]: time="2025-05-17T00:21:29.425005766Z" level=info msg="StopPodSandbox for \"21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13\" returns successfully" May 17 00:21:29.425521 containerd[1460]: time="2025-05-17T00:21:29.425457874Z" level=info msg="RemovePodSandbox for \"21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13\"" May 17 00:21:29.425521 containerd[1460]: time="2025-05-17T00:21:29.425485055Z" level=info msg="Forcibly stopping sandbox \"21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13\"" May 17 00:21:29.489755 containerd[1460]: 2025-05-17 00:21:29.457 [WARNING][5737] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--qqvpp-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"cb836777-e67d-4d21-a5e7-16ba9fc2ef39", ResourceVersion:"1246", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 20, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1b901cb996c4649f25241e57973fdff75d6ce1b009e4f595514967db927a34aa", Pod:"goldmane-8f77d7b6c-qqvpp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliaf2dcb77d97", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:21:29.489755 containerd[1460]: 2025-05-17 00:21:29.457 [INFO][5737] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" May 17 00:21:29.489755 containerd[1460]: 2025-05-17 00:21:29.457 [INFO][5737] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" iface="eth0" netns="" May 17 00:21:29.489755 containerd[1460]: 2025-05-17 00:21:29.457 [INFO][5737] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" May 17 00:21:29.489755 containerd[1460]: 2025-05-17 00:21:29.457 [INFO][5737] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" May 17 00:21:29.489755 containerd[1460]: 2025-05-17 00:21:29.477 [INFO][5745] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" HandleID="k8s-pod-network.21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" Workload="localhost-k8s-goldmane--8f77d7b6c--qqvpp-eth0" May 17 00:21:29.489755 containerd[1460]: 2025-05-17 00:21:29.477 [INFO][5745] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:21:29.489755 containerd[1460]: 2025-05-17 00:21:29.477 [INFO][5745] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:21:29.489755 containerd[1460]: 2025-05-17 00:21:29.483 [WARNING][5745] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" HandleID="k8s-pod-network.21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" Workload="localhost-k8s-goldmane--8f77d7b6c--qqvpp-eth0" May 17 00:21:29.489755 containerd[1460]: 2025-05-17 00:21:29.483 [INFO][5745] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" HandleID="k8s-pod-network.21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" Workload="localhost-k8s-goldmane--8f77d7b6c--qqvpp-eth0" May 17 00:21:29.489755 containerd[1460]: 2025-05-17 00:21:29.484 [INFO][5745] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:21:29.489755 containerd[1460]: 2025-05-17 00:21:29.486 [INFO][5737] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13" May 17 00:21:29.490214 containerd[1460]: time="2025-05-17T00:21:29.489812548Z" level=info msg="TearDown network for sandbox \"21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13\" successfully" May 17 00:21:29.498881 containerd[1460]: time="2025-05-17T00:21:29.498855694Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:21:29.498957 containerd[1460]: time="2025-05-17T00:21:29.498902822Z" level=info msg="RemovePodSandbox \"21e1b006df2ad533f7980f98b8fd55ee98c16723932b36582ba178d2c128ab13\" returns successfully" May 17 00:21:29.716606 systemd[1]: Started sshd@14-10.0.0.98:22-10.0.0.1:44068.service - OpenSSH per-connection server daemon (10.0.0.1:44068). May 17 00:21:29.758546 sshd[5754]: Accepted publickey for core from 10.0.0.1 port 44068 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:21:29.760185 sshd[5754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:29.764495 systemd-logind[1446]: New session 15 of user core. May 17 00:21:29.776999 systemd[1]: Started session-15.scope - Session 15 of User core. May 17 00:21:29.916349 sshd[5754]: pam_unix(sshd:session): session closed for user core May 17 00:21:29.920851 systemd[1]: sshd@14-10.0.0.98:22-10.0.0.1:44068.service: Deactivated successfully. May 17 00:21:29.923019 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:21:29.923725 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit. May 17 00:21:29.924674 systemd-logind[1446]: Removed session 15. May 17 00:21:34.932039 systemd[1]: Started sshd@15-10.0.0.98:22-10.0.0.1:44084.service - OpenSSH per-connection server daemon (10.0.0.1:44084). May 17 00:21:34.971792 sshd[5774]: Accepted publickey for core from 10.0.0.1 port 44084 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:21:34.973487 sshd[5774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:34.978380 systemd-logind[1446]: New session 16 of user core. May 17 00:21:34.987990 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 00:21:35.114603 sshd[5774]: pam_unix(sshd:session): session closed for user core May 17 00:21:35.123212 systemd[1]: sshd@15-10.0.0.98:22-10.0.0.1:44084.service: Deactivated successfully. May 17 00:21:35.125293 systemd[1]: session-16.scope: Deactivated successfully. 
May 17 00:21:35.126939 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit. May 17 00:21:35.136104 systemd[1]: Started sshd@16-10.0.0.98:22-10.0.0.1:44088.service - OpenSSH per-connection server daemon (10.0.0.1:44088). May 17 00:21:35.137357 systemd-logind[1446]: Removed session 16. May 17 00:21:35.175629 sshd[5789]: Accepted publickey for core from 10.0.0.1 port 44088 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:21:35.177819 sshd[5789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:35.183072 systemd-logind[1446]: New session 17 of user core. May 17 00:21:35.189959 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 00:21:35.559315 sshd[5789]: pam_unix(sshd:session): session closed for user core May 17 00:21:35.568717 systemd[1]: sshd@16-10.0.0.98:22-10.0.0.1:44088.service: Deactivated successfully. May 17 00:21:35.570523 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:21:35.572296 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit. May 17 00:21:35.582019 systemd[1]: Started sshd@17-10.0.0.98:22-10.0.0.1:44100.service - OpenSSH per-connection server daemon (10.0.0.1:44100). May 17 00:21:35.582980 systemd-logind[1446]: Removed session 17. May 17 00:21:35.617926 sshd[5801]: Accepted publickey for core from 10.0.0.1 port 44100 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:21:35.619323 sshd[5801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:35.623240 systemd-logind[1446]: New session 18 of user core. May 17 00:21:35.632915 systemd[1]: Started session-18.scope - Session 18 of User core. May 17 00:21:37.238380 kubelet[2491]: E0517 00:21:37.238266 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-54654dbf54-bv8kl" podUID="2d7523c0-3762-409a-b3ff-19b0db89e578" May 17 00:21:37.579678 sshd[5801]: pam_unix(sshd:session): session closed for user core May 17 00:21:37.590849 systemd[1]: sshd@17-10.0.0.98:22-10.0.0.1:44100.service: Deactivated successfully. May 17 00:21:37.593209 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:21:37.596881 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit. May 17 00:21:37.606082 systemd[1]: Started sshd@18-10.0.0.98:22-10.0.0.1:44104.service - OpenSSH per-connection server daemon (10.0.0.1:44104). May 17 00:21:37.609225 systemd-logind[1446]: Removed session 18. May 17 00:21:37.661479 sshd[5821]: Accepted publickey for core from 10.0.0.1 port 44104 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:21:37.663364 sshd[5821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:37.668411 systemd-logind[1446]: New session 19 of user core. May 17 00:21:37.682015 systemd[1]: Started session-19.scope - Session 19 of User core. May 17 00:21:38.145031 sshd[5821]: pam_unix(sshd:session): session closed for user core May 17 00:21:38.153467 systemd[1]: sshd@18-10.0.0.98:22-10.0.0.1:44104.service: Deactivated successfully. May 17 00:21:38.155301 systemd[1]: session-19.scope: Deactivated successfully. 
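The "Error syncing pod, skipping ... ImagePullBackOff: Back-off pulling image" entry above is the kubelet refusing to retry a failed pull immediately: it waits an exponentially growing delay between attempts. A minimal sketch of that back-off shape — the 10s base and 5m cap here are illustrative assumptions, not the kubelet's exact parameters:

```go
package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the previous delay up to a ceiling — the general
// shape of the kubelet's "Back-off pulling image" behaviour. Base and
// maxDelay are assumed values for illustration only.
func nextBackoff(prev time.Duration) time.Duration {
	const (
		base     = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	if prev == 0 {
		return base
	}
	next := prev * 2
	if next > maxDelay {
		return maxDelay
	}
	return next
}

func main() {
	var d time.Duration
	for attempt := 1; attempt <= 6; attempt++ {
		d = nextBackoff(d)
		fmt.Printf("attempt %d: wait %s before retrying pull\n", attempt, d)
	}
}
```

While the back-off window is open, each pod sync logs ImagePullBackOff and skips the pull, which is why the same error repeats at intervals rather than continuously.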
May 17 00:21:38.156348 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit. May 17 00:21:38.157982 systemd-logind[1446]: Removed session 19. May 17 00:21:38.166917 systemd[1]: Started sshd@19-10.0.0.98:22-10.0.0.1:41466.service - OpenSSH per-connection server daemon (10.0.0.1:41466). May 17 00:21:38.200390 sshd[5856]: Accepted publickey for core from 10.0.0.1 port 41466 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:21:38.202510 sshd[5856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:38.206879 systemd-logind[1446]: New session 20 of user core. May 17 00:21:38.213978 systemd[1]: Started session-20.scope - Session 20 of User core. May 17 00:21:38.329845 sshd[5856]: pam_unix(sshd:session): session closed for user core May 17 00:21:38.334214 systemd[1]: sshd@19-10.0.0.98:22-10.0.0.1:41466.service: Deactivated successfully. May 17 00:21:38.336404 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:21:38.337388 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit. May 17 00:21:38.338509 systemd-logind[1446]: Removed session 20. May 17 00:21:41.234495 kubelet[2491]: E0517 00:21:41.234451 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:21:42.235238 kubelet[2491]: E0517 00:21:42.235166 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-qqvpp" podUID="cb836777-e67d-4d21-a5e7-16ba9fc2ef39" May 17 00:21:42.686639 kubelet[2491]: I0517 00:21:42.686603 2491 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:21:43.345514 systemd[1]: Started sshd@20-10.0.0.98:22-10.0.0.1:41482.service - OpenSSH per-connection server daemon (10.0.0.1:41482). May 17 00:21:43.384956 sshd[5876]: Accepted publickey for core from 10.0.0.1 port 41482 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:21:43.386700 sshd[5876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:43.400612 systemd-logind[1446]: New session 21 of user core. May 17 00:21:43.405622 systemd[1]: Started session-21.scope - Session 21 of User core. May 17 00:21:43.524400 sshd[5876]: pam_unix(sshd:session): session closed for user core May 17 00:21:43.528195 systemd[1]: sshd@20-10.0.0.98:22-10.0.0.1:41482.service: Deactivated successfully. May 17 00:21:43.530301 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:21:43.530919 systemd-logind[1446]: Session 21 logged out. Waiting for processes to exit. May 17 00:21:43.531953 systemd-logind[1446]: Removed session 21. May 17 00:21:48.540724 systemd[1]: Started sshd@21-10.0.0.98:22-10.0.0.1:51352.service - OpenSSH per-connection server daemon (10.0.0.1:51352). May 17 00:21:48.578988 sshd[5897]: Accepted publickey for core from 10.0.0.1 port 51352 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:21:48.580853 sshd[5897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:48.585080 systemd-logind[1446]: New session 22 of user core. May 17 00:21:48.591038 systemd[1]: Started session-22.scope - Session 22 of User core. 
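The recurring dns.go:153 "Nameserver limits exceeded" error above means the node's resolv.conf lists more nameservers than the kubelet will propagate into a pod's resolv.conf; it keeps the first few and logs the rest as omitted. A minimal sketch of that truncation — the cap of three matches the applied line in the log (1.1.1.1 1.0.0.1 8.8.8.8), the classic resolver limit, though the constant here is assumed rather than taken from kubelet source:

```go
package main

import "fmt"

// maxNameservers mirrors the cap behind the "Nameserver limits exceeded"
// error; the applied line in the log keeps exactly three entries.
const maxNameservers = 3

// trimNameservers keeps the first maxNameservers entries and reports
// the omitted remainder.
func trimNameservers(all []string) (applied, omitted []string) {
	if len(all) <= maxNameservers {
		return all, nil
	}
	return all[:maxNameservers], all[maxNameservers:]
}

func main() {
	applied, omitted := trimNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
	fmt.Println("applied:", applied) // the "applied nameserver line" in the log
	fmt.Println("omitted:", omitted)
}
```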
May 17 00:21:48.708298 sshd[5897]: pam_unix(sshd:session): session closed for user core May 17 00:21:48.712457 systemd[1]: sshd@21-10.0.0.98:22-10.0.0.1:51352.service: Deactivated successfully. May 17 00:21:48.714796 systemd[1]: session-22.scope: Deactivated successfully. May 17 00:21:48.715511 systemd-logind[1446]: Session 22 logged out. Waiting for processes to exit. May 17 00:21:48.716617 systemd-logind[1446]: Removed session 22. May 17 00:21:49.235502 containerd[1460]: time="2025-05-17T00:21:49.235458370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 00:21:49.457813 containerd[1460]: time="2025-05-17T00:21:49.457733750Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:21:49.476607 containerd[1460]: time="2025-05-17T00:21:49.476552015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 17 00:21:49.476675 containerd[1460]: time="2025-05-17T00:21:49.476553578Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:21:49.476888 kubelet[2491]: E0517 00:21:49.476828 2491 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:21:49.477252 kubelet[2491]: E0517 00:21:49.476894 2491 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 00:21:49.477252 kubelet[2491]: E0517 00:21:49.477058 2491 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9a6df141da824040bbade8336e585a48,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5w59d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54654dbf54-bv8kl_calico-system(2d7523c0-3762-409a-b3ff-19b0db89e578): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:21:49.479106 containerd[1460]: time="2025-05-17T00:21:49.479065524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 00:21:49.930278 containerd[1460]: time="2025-05-17T00:21:49.930218563Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:21:49.931369 containerd[1460]: time="2025-05-17T00:21:49.931336307Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:21:49.931450 containerd[1460]: time="2025-05-17T00:21:49.931370482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 17 00:21:49.931660 kubelet[2491]: E0517 00:21:49.931602 2491 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected 
status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:21:49.931787 kubelet[2491]: E0517 00:21:49.931672 2491 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 00:21:49.931866 kubelet[2491]: E0517 00:21:49.931821 2491 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5w59d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54654dbf54-bv8kl_calico-system(2d7523c0-3762-409a-b3ff-19b0db89e578): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:21:49.933137 kubelet[2491]: E0517 00:21:49.933085 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-54654dbf54-bv8kl" podUID="2d7523c0-3762-409a-b3ff-19b0db89e578" May 17 00:21:53.725860 systemd[1]: Started sshd@22-10.0.0.98:22-10.0.0.1:51366.service - OpenSSH per-connection server daemon (10.0.0.1:51366). May 17 00:21:53.763285 sshd[5912]: Accepted publickey for core from 10.0.0.1 port 51366 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:21:53.764871 sshd[5912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:53.768684 systemd-logind[1446]: New session 23 of user core. May 17 00:21:53.777888 systemd[1]: Started session-23.scope - Session 23 of User core. May 17 00:21:53.881138 sshd[5912]: pam_unix(sshd:session): session closed for user core May 17 00:21:53.885443 systemd[1]: sshd@22-10.0.0.98:22-10.0.0.1:51366.service: Deactivated successfully. May 17 00:21:53.887604 systemd[1]: session-23.scope: Deactivated successfully. May 17 00:21:53.888237 systemd-logind[1446]: Session 23 logged out. Waiting for processes to exit. May 17 00:21:53.889151 systemd-logind[1446]: Removed session 23. May 17 00:21:54.233874 kubelet[2491]: E0517 00:21:54.233834 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:21:56.234889 containerd[1460]: time="2025-05-17T00:21:56.234831351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 00:21:56.466223 containerd[1460]: time="2025-05-17T00:21:56.466172209Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 17 00:21:56.467597 containerd[1460]: time="2025-05-17T00:21:56.467408713Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 17 00:21:56.467597 containerd[1460]: time="2025-05-17T00:21:56.467458118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 17 00:21:56.467749 kubelet[2491]: E0517 00:21:56.467699 2491 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:21:56.468136 kubelet[2491]: E0517 00:21:56.467760 2491 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 00:21:56.468136 kubelet[2491]: E0517 00:21:56.467899 2491 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66zkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-qqvpp_calico-system(cb836777-e67d-4d21-a5e7-16ba9fc2ef39): ErrImagePull: failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 17 00:21:56.469138 kubelet[2491]: E0517 00:21:56.469081 2491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-qqvpp" podUID="cb836777-e67d-4d21-a5e7-16ba9fc2ef39" May 17 00:21:57.234414 kubelet[2491]: E0517 00:21:57.234364 2491 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 17 00:21:58.899137 systemd[1]: Started sshd@23-10.0.0.98:22-10.0.0.1:40874.service - OpenSSH per-connection server daemon (10.0.0.1:40874). May 17 00:21:58.936950 sshd[5949]: Accepted publickey for core from 10.0.0.1 port 40874 ssh2: RSA SHA256:c3VV2VNpTq6yK4xIFAKH91htkzN8ZWEjxH2QxISCLFU May 17 00:21:58.938686 sshd[5949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:21:58.943294 systemd-logind[1446]: New session 24 of user core. May 17 00:21:58.953013 systemd[1]: Started session-24.scope - Session 24 of User core. May 17 00:21:59.069685 sshd[5949]: pam_unix(sshd:session): session closed for user core May 17 00:21:59.073694 systemd[1]: sshd@23-10.0.0.98:22-10.0.0.1:40874.service: Deactivated successfully. May 17 00:21:59.075643 systemd[1]: session-24.scope: Deactivated successfully. May 17 00:21:59.076276 systemd-logind[1446]: Session 24 logged out. Waiting for processes to exit. May 17 00:21:59.077154 systemd-logind[1446]: Removed session 24.