Jul 11 00:27:54.026723 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Jul 10 22:46:23 -00 2025
Jul 11 00:27:54.026752 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=836a88100946313ecc17069f63287f86e7d91f7c389df4bcd3f3e02beb9683e1
Jul 11 00:27:54.026767 kernel: BIOS-provided physical RAM map:
Jul 11 00:27:54.026775 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 11 00:27:54.026783 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 11 00:27:54.026792 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 11 00:27:54.026802 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 11 00:27:54.026810 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 11 00:27:54.026819 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jul 11 00:27:54.026827 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jul 11 00:27:54.026838 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jul 11 00:27:54.026847 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jul 11 00:27:54.026855 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jul 11 00:27:54.026864 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jul 11 00:27:54.026875 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jul 11 00:27:54.026884 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 11 00:27:54.026896 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jul 11 00:27:54.026905 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jul 11 00:27:54.026914 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 11 00:27:54.026923 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 11 00:27:54.026932 kernel: NX (Execute Disable) protection: active
Jul 11 00:27:54.026941 kernel: APIC: Static calls initialized
Jul 11 00:27:54.026951 kernel: efi: EFI v2.7 by EDK II
Jul 11 00:27:54.026965 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Jul 11 00:27:54.026975 kernel: SMBIOS 2.8 present.
Jul 11 00:27:54.026983 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jul 11 00:27:54.026992 kernel: Hypervisor detected: KVM
Jul 11 00:27:54.027011 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 11 00:27:54.027023 kernel: kvm-clock: using sched offset of 5282834120 cycles
Jul 11 00:27:54.027033 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 11 00:27:54.027043 kernel: tsc: Detected 2794.746 MHz processor
Jul 11 00:27:54.027052 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 11 00:27:54.027062 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 11 00:27:54.027072 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jul 11 00:27:54.027092 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 11 00:27:54.027101 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 11 00:27:54.027114 kernel: Using GB pages for direct mapping
Jul 11 00:27:54.027124 kernel: Secure boot disabled
Jul 11 00:27:54.027133 kernel: ACPI: Early table checksum verification disabled
Jul 11 00:27:54.027143 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jul 11 00:27:54.027157 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jul 11 00:27:54.027168 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:27:54.027177 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:27:54.027190 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jul 11 00:27:54.027200 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:27:54.027210 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:27:54.027220 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:27:54.027230 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:27:54.027240 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 11 00:27:54.027252 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jul 11 00:27:54.027265 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jul 11 00:27:54.027275 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jul 11 00:27:54.027285 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jul 11 00:27:54.027294 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jul 11 00:27:54.027304 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jul 11 00:27:54.027314 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jul 11 00:27:54.027324 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jul 11 00:27:54.027333 kernel: No NUMA configuration found
Jul 11 00:27:54.027343 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jul 11 00:27:54.027356 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jul 11 00:27:54.027366 kernel: Zone ranges:
Jul 11 00:27:54.027376 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 11 00:27:54.027385 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jul 11 00:27:54.027395 kernel: Normal empty
Jul 11 00:27:54.027405 kernel: Movable zone start for each node
Jul 11 00:27:54.027415 kernel: Early memory node ranges
Jul 11 00:27:54.027425 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 11 00:27:54.027434 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jul 11 00:27:54.027444 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jul 11 00:27:54.027457 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jul 11 00:27:54.027466 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jul 11 00:27:54.027476 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jul 11 00:27:54.027486 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jul 11 00:27:54.027496 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 11 00:27:54.027506 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 11 00:27:54.027515 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jul 11 00:27:54.027525 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 11 00:27:54.027535 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jul 11 00:27:54.027548 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jul 11 00:27:54.027558 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jul 11 00:27:54.027568 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 11 00:27:54.027578 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 11 00:27:54.027588 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 11 00:27:54.027597 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 11 00:27:54.027607 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 11 00:27:54.027617 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 11 00:27:54.027627 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 11 00:27:54.027640 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 11 00:27:54.027649 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 11 00:27:54.027659 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 11 00:27:54.027669 kernel: TSC deadline timer available
Jul 11 00:27:54.027694 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 11 00:27:54.027704 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 11 00:27:54.027714 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 11 00:27:54.027724 kernel: kvm-guest: setup PV sched yield
Jul 11 00:27:54.027733 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jul 11 00:27:54.027747 kernel: Booting paravirtualized kernel on KVM
Jul 11 00:27:54.027757 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 11 00:27:54.027767 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 11 00:27:54.027777 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Jul 11 00:27:54.027787 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Jul 11 00:27:54.027796 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 11 00:27:54.027806 kernel: kvm-guest: PV spinlocks enabled
Jul 11 00:27:54.027816 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 11 00:27:54.027827 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=836a88100946313ecc17069f63287f86e7d91f7c389df4bcd3f3e02beb9683e1
Jul 11 00:27:54.027841 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 11 00:27:54.027851 kernel: random: crng init done
Jul 11 00:27:54.027861 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 11 00:27:54.027871 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 11 00:27:54.027880 kernel: Fallback order for Node 0: 0
Jul 11 00:27:54.027890 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jul 11 00:27:54.027900 kernel: Policy zone: DMA32
Jul 11 00:27:54.027909 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 11 00:27:54.027920 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2295K rwdata, 22744K rodata, 42872K init, 2320K bss, 166140K reserved, 0K cma-reserved)
Jul 11 00:27:54.027933 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 11 00:27:54.027943 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 11 00:27:54.027952 kernel: ftrace: allocated 149 pages with 4 groups
Jul 11 00:27:54.027962 kernel: Dynamic Preempt: voluntary
Jul 11 00:27:54.027982 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 11 00:27:54.027996 kernel: rcu: RCU event tracing is enabled.
Jul 11 00:27:54.028007 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 11 00:27:54.028017 kernel: Trampoline variant of Tasks RCU enabled.
Jul 11 00:27:54.028027 kernel: Rude variant of Tasks RCU enabled.
Jul 11 00:27:54.028038 kernel: Tracing variant of Tasks RCU enabled.
Jul 11 00:27:54.028048 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 11 00:27:54.028062 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 11 00:27:54.028072 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 11 00:27:54.028092 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 11 00:27:54.028103 kernel: Console: colour dummy device 80x25
Jul 11 00:27:54.028113 kernel: printk: console [ttyS0] enabled
Jul 11 00:27:54.028126 kernel: ACPI: Core revision 20230628
Jul 11 00:27:54.028137 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 11 00:27:54.028147 kernel: APIC: Switch to symmetric I/O mode setup
Jul 11 00:27:54.028158 kernel: x2apic enabled
Jul 11 00:27:54.028168 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 11 00:27:54.028178 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 11 00:27:54.028189 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 11 00:27:54.028199 kernel: kvm-guest: setup PV IPIs
Jul 11 00:27:54.028209 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 11 00:27:54.028223 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 11 00:27:54.028233 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Jul 11 00:27:54.028244 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 11 00:27:54.028254 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 11 00:27:54.028265 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 11 00:27:54.028274 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 11 00:27:54.028285 kernel: Spectre V2 : Mitigation: Retpolines
Jul 11 00:27:54.028295 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 11 00:27:54.028305 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 11 00:27:54.028319 kernel: RETBleed: Mitigation: untrained return thunk
Jul 11 00:27:54.028329 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 11 00:27:54.028340 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 11 00:27:54.028350 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 11 00:27:54.028361 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 11 00:27:54.028371 kernel: x86/bugs: return thunk changed
Jul 11 00:27:54.028381 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 11 00:27:54.028392 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 11 00:27:54.028405 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 11 00:27:54.028416 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 11 00:27:54.028426 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 11 00:27:54.028436 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 11 00:27:54.028447 kernel: Freeing SMP alternatives memory: 32K
Jul 11 00:27:54.028457 kernel: pid_max: default: 32768 minimum: 301
Jul 11 00:27:54.028467 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 11 00:27:54.028478 kernel: landlock: Up and running.
Jul 11 00:27:54.028488 kernel: SELinux: Initializing.
Jul 11 00:27:54.028502 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:27:54.028512 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:27:54.028523 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 11 00:27:54.028533 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:27:54.028544 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:27:54.028554 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:27:54.028565 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 11 00:27:54.028575 kernel: ... version: 0
Jul 11 00:27:54.028585 kernel: ... bit width: 48
Jul 11 00:27:54.028598 kernel: ... generic registers: 6
Jul 11 00:27:54.028609 kernel: ... value mask: 0000ffffffffffff
Jul 11 00:27:54.028619 kernel: ... max period: 00007fffffffffff
Jul 11 00:27:54.028629 kernel: ... fixed-purpose events: 0
Jul 11 00:27:54.028639 kernel: ... event mask: 000000000000003f
Jul 11 00:27:54.028650 kernel: signal: max sigframe size: 1776
Jul 11 00:27:54.028660 kernel: rcu: Hierarchical SRCU implementation.
Jul 11 00:27:54.028693 kernel: rcu: Max phase no-delay instances is 400.
Jul 11 00:27:54.028703 kernel: smp: Bringing up secondary CPUs ...
Jul 11 00:27:54.028717 kernel: smpboot: x86: Booting SMP configuration:
Jul 11 00:27:54.028727 kernel: .... node #0, CPUs: #1 #2 #3
Jul 11 00:27:54.028737 kernel: smp: Brought up 1 node, 4 CPUs
Jul 11 00:27:54.028748 kernel: smpboot: Max logical packages: 1
Jul 11 00:27:54.028758 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Jul 11 00:27:54.028768 kernel: devtmpfs: initialized
Jul 11 00:27:54.028779 kernel: x86/mm: Memory block size: 128MB
Jul 11 00:27:54.028794 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jul 11 00:27:54.028804 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jul 11 00:27:54.028815 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jul 11 00:27:54.028829 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jul 11 00:27:54.028839 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jul 11 00:27:54.028850 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 11 00:27:54.028860 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 11 00:27:54.028870 kernel: pinctrl core: initialized pinctrl subsystem
Jul 11 00:27:54.028881 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 11 00:27:54.028891 kernel: audit: initializing netlink subsys (disabled)
Jul 11 00:27:54.028902 kernel: audit: type=2000 audit(1752193672.692:1): state=initialized audit_enabled=0 res=1
Jul 11 00:27:54.028915 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 11 00:27:54.028925 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 11 00:27:54.028936 kernel: cpuidle: using governor menu
Jul 11 00:27:54.028946 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 11 00:27:54.028956 kernel: dca service started, version 1.12.1
Jul 11 00:27:54.028967 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 11 00:27:54.028977 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 11 00:27:54.028988 kernel: PCI: Using configuration type 1 for base access
Jul 11 00:27:54.028998 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 11 00:27:54.029012 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 11 00:27:54.029022 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 11 00:27:54.029032 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 11 00:27:54.029043 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 11 00:27:54.029053 kernel: ACPI: Added _OSI(Module Device)
Jul 11 00:27:54.029063 kernel: ACPI: Added _OSI(Processor Device)
Jul 11 00:27:54.029073 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 11 00:27:54.029094 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 11 00:27:54.029104 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 11 00:27:54.029117 kernel: ACPI: Interpreter enabled
Jul 11 00:27:54.029127 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 11 00:27:54.029138 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 11 00:27:54.029148 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 11 00:27:54.029158 kernel: PCI: Using E820 reservations for host bridge windows
Jul 11 00:27:54.029169 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 11 00:27:54.029179 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 11 00:27:54.029424 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 11 00:27:54.029592 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 11 00:27:54.029765 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 11 00:27:54.029779 kernel: PCI host bridge to bus 0000:00
Jul 11 00:27:54.029926 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 11 00:27:54.030097 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 11 00:27:54.030239 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 11 00:27:54.030384 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 11 00:27:54.030524 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 11 00:27:54.030658 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jul 11 00:27:54.030836 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 11 00:27:54.031035 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 11 00:27:54.031234 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 11 00:27:54.031388 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jul 11 00:27:54.031600 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jul 11 00:27:54.031779 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jul 11 00:27:54.031924 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jul 11 00:27:54.032072 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 11 00:27:54.032258 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 11 00:27:54.032417 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jul 11 00:27:54.032566 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jul 11 00:27:54.032740 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jul 11 00:27:54.032904 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 11 00:27:54.033054 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jul 11 00:27:54.033326 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jul 11 00:27:54.033475 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jul 11 00:27:54.033633 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 11 00:27:54.033797 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jul 11 00:27:54.034110 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jul 11 00:27:54.034268 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jul 11 00:27:54.034418 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jul 11 00:27:54.034576 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 11 00:27:54.034788 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 11 00:27:54.034955 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 11 00:27:54.035117 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jul 11 00:27:54.035267 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jul 11 00:27:54.035432 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 11 00:27:54.035578 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jul 11 00:27:54.035593 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 11 00:27:54.035603 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 11 00:27:54.035614 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 11 00:27:54.035625 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 11 00:27:54.035640 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 11 00:27:54.035651 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 11 00:27:54.035662 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 11 00:27:54.035687 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 11 00:27:54.035698 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 11 00:27:54.035709 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 11 00:27:54.035719 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 11 00:27:54.035730 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 11 00:27:54.035741 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 11 00:27:54.035755 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 11 00:27:54.035766 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 11 00:27:54.035776 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 11 00:27:54.035787 kernel: iommu: Default domain type: Translated
Jul 11 00:27:54.035798 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 11 00:27:54.035808 kernel: efivars: Registered efivars operations
Jul 11 00:27:54.035819 kernel: PCI: Using ACPI for IRQ routing
Jul 11 00:27:54.035829 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 11 00:27:54.035840 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jul 11 00:27:54.035850 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jul 11 00:27:54.035864 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jul 11 00:27:54.035874 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jul 11 00:27:54.036022 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 11 00:27:54.036207 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 11 00:27:54.036378 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 11 00:27:54.036409 kernel: vgaarb: loaded
Jul 11 00:27:54.036435 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 11 00:27:54.036446 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 11 00:27:54.036462 kernel: clocksource: Switched to clocksource kvm-clock
Jul 11 00:27:54.036472 kernel: VFS: Disk quotas dquot_6.6.0
Jul 11 00:27:54.036483 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 11 00:27:54.036494 kernel: pnp: PnP ACPI init
Jul 11 00:27:54.036662 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 11 00:27:54.036709 kernel: pnp: PnP ACPI: found 6 devices
Jul 11 00:27:54.036720 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 11 00:27:54.036731 kernel: NET: Registered PF_INET protocol family
Jul 11 00:27:54.036746 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 11 00:27:54.036757 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 11 00:27:54.036768 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 11 00:27:54.036778 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 11 00:27:54.036789 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 11 00:27:54.036799 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 11 00:27:54.036810 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:27:54.036820 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:27:54.036831 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 11 00:27:54.036845 kernel: NET: Registered PF_XDP protocol family
Jul 11 00:27:54.036992 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jul 11 00:27:54.037149 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jul 11 00:27:54.037287 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 11 00:27:54.037747 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 11 00:27:54.037880 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 11 00:27:54.038036 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 11 00:27:54.038194 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 11 00:27:54.038340 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jul 11 00:27:54.038357 kernel: PCI: CLS 0 bytes, default 64
Jul 11 00:27:54.038369 kernel: Initialise system trusted keyrings
Jul 11 00:27:54.038382 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 11 00:27:54.038392 kernel: Key type asymmetric registered
Jul 11 00:27:54.038403 kernel: Asymmetric key parser 'x509' registered
Jul 11 00:27:54.038414 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 11 00:27:54.038424 kernel: io scheduler mq-deadline registered
Jul 11 00:27:54.038435 kernel: io scheduler kyber registered
Jul 11 00:27:54.038450 kernel: io scheduler bfq registered
Jul 11 00:27:54.038461 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 11 00:27:54.038472 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 11 00:27:54.038483 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 11 00:27:54.038494 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 11 00:27:54.038505 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 11 00:27:54.038516 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 11 00:27:54.038527 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 11 00:27:54.038537 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 11 00:27:54.038551 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 11 00:27:54.038562 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 11 00:27:54.038741 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 11 00:27:54.038883 kernel: rtc_cmos 00:04: registered as rtc0
Jul 11 00:27:54.039021 kernel: rtc_cmos 00:04: setting system clock to 2025-07-11T00:27:53 UTC (1752193673)
Jul 11 00:27:54.039198 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 11 00:27:54.039212 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 11 00:27:54.039223 kernel: efifb: probing for efifb
Jul 11 00:27:54.039239 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jul 11 00:27:54.039249 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jul 11 00:27:54.039260 kernel: efifb: scrolling: redraw
Jul 11 00:27:54.039270 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jul 11 00:27:54.039283 kernel: Console: switching to colour frame buffer device 100x37
Jul 11 00:27:54.039294 kernel: fb0: EFI VGA frame buffer device
Jul 11 00:27:54.039328 kernel: pstore: Using crash dump compression: deflate
Jul 11 00:27:54.039341 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 11 00:27:54.039353 kernel: NET: Registered PF_INET6 protocol family
Jul 11 00:27:54.039367 kernel: Segment Routing with IPv6
Jul 11 00:27:54.039377 kernel: In-situ OAM (IOAM) with IPv6
Jul 11 00:27:54.039388 kernel: NET: Registered PF_PACKET protocol family
Jul 11 00:27:54.039398 kernel: Key type dns_resolver registered
Jul 11 00:27:54.039409 kernel: IPI shorthand broadcast: enabled
Jul 11 00:27:54.039420 kernel: sched_clock: Marking stable (800003081, 135377400)->(965224761, -29844280)
Jul 11 00:27:54.039431 kernel: registered taskstats version 1
Jul 11 00:27:54.039442 kernel: Loading compiled-in X.509 certificates
Jul 11 00:27:54.039453 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: 5956f0842928c96096c398e9db55919cd236a39f'
Jul 11 00:27:54.039467 kernel: Key type .fscrypt registered
Jul 11 00:27:54.039479 kernel: Key type fscrypt-provisioning registered
Jul 11 00:27:54.039490 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 11 00:27:54.039501 kernel: ima: Allocated hash algorithm: sha1
Jul 11 00:27:54.039512 kernel: ima: No architecture policies found
Jul 11 00:27:54.039523 kernel: clk: Disabling unused clocks
Jul 11 00:27:54.039534 kernel: Freeing unused kernel image (initmem) memory: 42872K
Jul 11 00:27:54.039545 kernel: Write protecting the kernel read-only data: 36864k
Jul 11 00:27:54.039557 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Jul 11 00:27:54.039571 kernel: Run /init as init process
Jul 11 00:27:54.039582 kernel: with arguments:
Jul 11 00:27:54.039593 kernel: /init
Jul 11 00:27:54.039604 kernel: with environment:
Jul 11 00:27:54.039614 kernel: HOME=/
Jul 11 00:27:54.039625 kernel: TERM=linux
Jul 11 00:27:54.039636 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 11 00:27:54.039650 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 11 00:27:54.039667 systemd[1]: Detected virtualization kvm.
Jul 11 00:27:54.039766 systemd[1]: Detected architecture x86-64.
Jul 11 00:27:54.039777 systemd[1]: Running in initrd.
Jul 11 00:27:54.039789 systemd[1]: No hostname configured, using default hostname.
Jul 11 00:27:54.039800 systemd[1]: Hostname set to .
Jul 11 00:27:54.039816 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 00:27:54.039828 systemd[1]: Queued start job for default target initrd.target.
Jul 11 00:27:54.039840 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:27:54.039851 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:27:54.039864 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 11 00:27:54.039876 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 00:27:54.039888 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 11 00:27:54.039903 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 11 00:27:54.039917 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 11 00:27:54.039929 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 11 00:27:54.039941 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:27:54.039952 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:27:54.039971 systemd[1]: Reached target paths.target - Path Units.
Jul 11 00:27:54.039982 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 00:27:54.039999 systemd[1]: Reached target swap.target - Swaps.
Jul 11 00:27:54.040019 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 00:27:54.040031 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 00:27:54.040043 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 00:27:54.040054 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 11 00:27:54.040066 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 11 00:27:54.040089 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:27:54.040101 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:27:54.040112 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:27:54.040129 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 00:27:54.040141 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 11 00:27:54.040153 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 00:27:54.040164 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 11 00:27:54.040176 systemd[1]: Starting systemd-fsck-usr.service...
Jul 11 00:27:54.040188 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 00:27:54.040199 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 00:27:54.040211 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:27:54.040226 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 11 00:27:54.040237 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:27:54.040280 systemd-journald[192]: Collecting audit messages is disabled.
Jul 11 00:27:54.040311 systemd[1]: Finished systemd-fsck-usr.service.
Jul 11 00:27:54.040327 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 11 00:27:54.040340 systemd-journald[192]: Journal started
Jul 11 00:27:54.040365 systemd-journald[192]: Runtime Journal (/run/log/journal/374796f6d4ff4e20a338dcab938dc008) is 6.0M, max 48.3M, 42.2M free.
Jul 11 00:27:54.034938 systemd-modules-load[193]: Inserted module 'overlay'
Jul 11 00:27:54.048041 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:27:54.048205 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 00:27:54.052036 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 00:27:54.078721 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 11 00:27:54.082755 kernel: Bridge firewalling registered
Jul 11 00:27:54.082661 systemd-modules-load[193]: Inserted module 'br_netfilter'
Jul 11 00:27:54.088572 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:27:54.091013 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 00:27:54.095913 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 00:27:54.096983 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:27:54.104518 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 00:27:54.117066 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:27:54.122029 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:27:54.128692 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:27:54.146106 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 11 00:27:54.147441 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:27:54.152767 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 00:27:54.171568 dracut-cmdline[229]: dracut-dracut-053
Jul 11 00:27:54.178640 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=836a88100946313ecc17069f63287f86e7d91f7c389df4bcd3f3e02beb9683e1
Jul 11 00:27:54.214342 systemd-resolved[234]: Positive Trust Anchors:
Jul 11 00:27:54.214365 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 00:27:54.214402 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 00:27:54.217960 systemd-resolved[234]: Defaulting to hostname 'linux'.
Jul 11 00:27:54.219864 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 00:27:54.227770 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:27:54.328714 kernel: SCSI subsystem initialized
Jul 11 00:27:54.342376 kernel: Loading iSCSI transport class v2.0-870.
Jul 11 00:27:54.359713 kernel: iscsi: registered transport (tcp)
Jul 11 00:27:54.384804 kernel: iscsi: registered transport (qla4xxx)
Jul 11 00:27:54.384880 kernel: QLogic iSCSI HBA Driver
Jul 11 00:27:54.458487 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 11 00:27:54.470968 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 11 00:27:54.513728 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 11 00:27:54.514709 kernel: device-mapper: uevent: version 1.0.3
Jul 11 00:27:54.514730 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 11 00:27:54.589732 kernel: raid6: avx2x4 gen() 22615 MB/s
Jul 11 00:27:54.606724 kernel: raid6: avx2x2 gen() 21660 MB/s
Jul 11 00:27:54.625607 kernel: raid6: avx2x1 gen() 16946 MB/s
Jul 11 00:27:54.625813 kernel: raid6: using algorithm avx2x4 gen() 22615 MB/s
Jul 11 00:27:54.643121 kernel: raid6: .... xor() 6288 MB/s, rmw enabled
Jul 11 00:27:54.643214 kernel: raid6: using avx2x2 recovery algorithm
Jul 11 00:27:54.673340 kernel: xor: automatically using best checksumming function avx
Jul 11 00:27:54.898777 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 11 00:27:54.919072 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 00:27:54.932939 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:27:54.953989 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Jul 11 00:27:54.960425 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:27:54.968954 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 11 00:27:54.995464 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation
Jul 11 00:27:55.059252 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 00:27:55.079985 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 00:27:55.164125 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:27:55.183858 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 11 00:27:55.194830 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 11 00:27:55.200484 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 11 00:27:55.201059 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 11 00:27:55.204411 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 00:27:55.207613 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:27:55.241985 kernel: cryptd: max_cpu_qlen set to 1000
Jul 11 00:27:55.244215 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 11 00:27:55.244261 kernel: GPT:9289727 != 19775487
Jul 11 00:27:55.244275 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 11 00:27:55.244288 kernel: GPT:9289727 != 19775487
Jul 11 00:27:55.245268 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 11 00:27:55.245294 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:27:55.249898 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 00:27:55.258735 kernel: libata version 3.00 loaded.
Jul 11 00:27:55.266783 kernel: ahci 0000:00:1f.2: version 3.0
Jul 11 00:27:55.268697 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 11 00:27:55.268907 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 11 00:27:55.277653 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 11 00:27:55.277719 kernel: AES CTR mode by8 optimization enabled
Jul 11 00:27:55.277733 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jul 11 00:27:55.278004 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 11 00:27:55.284114 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 11 00:27:55.291471 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:27:55.333051 kernel: scsi host0: ahci
Jul 11 00:27:55.334282 kernel: scsi host1: ahci
Jul 11 00:27:55.334557 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:27:55.338069 kernel: scsi host2: ahci
Jul 11 00:27:55.338271 kernel: scsi host3: ahci
Jul 11 00:27:55.341700 kernel: scsi host4: ahci
Jul 11 00:27:55.341769 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 00:27:55.342055 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:27:55.343615 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:27:55.360267 kernel: scsi host5: ahci
Jul 11 00:27:55.360518 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jul 11 00:27:55.360543 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (462)
Jul 11 00:27:55.360557 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jul 11 00:27:55.360570 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jul 11 00:27:55.360583 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jul 11 00:27:55.360596 kernel: BTRFS: device fsid 54fb9359-b495-4b0c-b313-b0e2955e4a38 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (471)
Jul 11 00:27:55.360609 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jul 11 00:27:55.360622 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jul 11 00:27:55.371183 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:27:55.374491 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 00:27:55.394092 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:27:55.402515 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 11 00:27:55.411745 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 11 00:27:55.443669 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 00:27:55.452974 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 11 00:27:55.457241 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 11 00:27:55.471993 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 11 00:27:55.474514 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:27:55.495282 disk-uuid[557]: Primary Header is updated.
Jul 11 00:27:55.495282 disk-uuid[557]: Secondary Entries is updated.
Jul 11 00:27:55.495282 disk-uuid[557]: Secondary Header is updated.
Jul 11 00:27:55.495272 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:27:55.501987 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:27:55.504704 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:27:55.509708 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:27:55.666726 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 11 00:27:55.666806 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 11 00:27:55.669693 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 11 00:27:55.669720 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 11 00:27:55.669737 kernel: ata3.00: applying bridge limits
Jul 11 00:27:55.669774 kernel: ata3.00: configured for UDMA/100
Jul 11 00:27:55.670718 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 11 00:27:55.671710 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 11 00:27:55.672713 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 11 00:27:55.674713 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 11 00:27:55.727720 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 11 00:27:55.728836 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 11 00:27:55.746251 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 11 00:27:56.510785 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:27:56.512030 disk-uuid[566]: The operation has completed successfully.
Jul 11 00:27:56.544413 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 11 00:27:56.544569 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 11 00:27:56.574449 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 11 00:27:56.581947 sh[596]: Success
Jul 11 00:27:56.597727 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 11 00:27:56.640089 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 11 00:27:56.654648 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 11 00:27:56.657945 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 11 00:27:56.682945 kernel: BTRFS info (device dm-0): first mount of filesystem 54fb9359-b495-4b0c-b313-b0e2955e4a38
Jul 11 00:27:56.683018 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:27:56.683075 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 11 00:27:56.684072 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 11 00:27:56.684829 kernel: BTRFS info (device dm-0): using free space tree
Jul 11 00:27:56.691379 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 11 00:27:56.692425 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 11 00:27:56.706915 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 11 00:27:56.710453 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 11 00:27:56.721787 kernel: BTRFS info (device vda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe
Jul 11 00:27:56.721829 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:27:56.721840 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:27:56.726731 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:27:56.737067 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 11 00:27:56.767716 kernel: BTRFS info (device vda6): last unmount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe
Jul 11 00:27:56.875248 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 00:27:56.888252 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 00:27:56.913545 systemd-networkd[775]: lo: Link UP
Jul 11 00:27:56.913560 systemd-networkd[775]: lo: Gained carrier
Jul 11 00:27:56.915367 systemd-networkd[775]: Enumeration completed
Jul 11 00:27:56.915511 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 00:27:56.915836 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:27:56.915841 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 00:27:56.917015 systemd-networkd[775]: eth0: Link UP
Jul 11 00:27:56.917019 systemd-networkd[775]: eth0: Gained carrier
Jul 11 00:27:56.917026 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:27:56.918865 systemd[1]: Reached target network.target - Network.
Jul 11 00:27:56.941803 systemd-networkd[775]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 00:27:57.129586 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 11 00:27:57.136103 systemd-resolved[234]: Detected conflict on linux IN A 10.0.0.133
Jul 11 00:27:57.136126 systemd-resolved[234]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
Jul 11 00:27:57.147107 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 11 00:27:57.291943 ignition[780]: Ignition 2.19.0
Jul 11 00:27:57.291964 ignition[780]: Stage: fetch-offline
Jul 11 00:27:57.292069 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:27:57.292087 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:27:57.292218 ignition[780]: parsed url from cmdline: ""
Jul 11 00:27:57.292223 ignition[780]: no config URL provided
Jul 11 00:27:57.292230 ignition[780]: reading system config file "/usr/lib/ignition/user.ign"
Jul 11 00:27:57.292241 ignition[780]: no config at "/usr/lib/ignition/user.ign"
Jul 11 00:27:57.292277 ignition[780]: op(1): [started] loading QEMU firmware config module
Jul 11 00:27:57.292287 ignition[780]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 11 00:27:57.301833 ignition[780]: op(1): [finished] loading QEMU firmware config module
Jul 11 00:27:57.344181 ignition[780]: parsing config with SHA512: 9a351bbf856dbc792490fe053a387b608682e5ec84b273961e9440c93019ade4ecf51ed16245c9c1444aebeb173e0239e2b12e31ad6c9a48617b2a32b871f6c5
Jul 11 00:27:57.350563 unknown[780]: fetched base config from "system"
Jul 11 00:27:57.351048 unknown[780]: fetched user config from "qemu"
Jul 11 00:27:57.351619 ignition[780]: fetch-offline: fetch-offline passed
Jul 11 00:27:57.351724 ignition[780]: Ignition finished successfully
Jul 11 00:27:57.354049 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 00:27:57.356561 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 11 00:27:57.372018 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 11 00:27:57.411506 ignition[790]: Ignition 2.19.0
Jul 11 00:27:57.411516 ignition[790]: Stage: kargs
Jul 11 00:27:57.411700 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:27:57.415352 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 11 00:27:57.411712 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:27:57.412499 ignition[790]: kargs: kargs passed
Jul 11 00:27:57.412539 ignition[790]: Ignition finished successfully
Jul 11 00:27:57.427933 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 11 00:27:57.445411 ignition[799]: Ignition 2.19.0
Jul 11 00:27:57.445423 ignition[799]: Stage: disks
Jul 11 00:27:57.448865 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 11 00:27:57.445592 ignition[799]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:27:57.449634 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 11 00:27:57.445603 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:27:57.449898 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 11 00:27:57.446480 ignition[799]: disks: disks passed
Jul 11 00:27:57.450233 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 00:27:57.446528 ignition[799]: Ignition finished successfully
Jul 11 00:27:57.450562 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 00:27:57.450885 systemd[1]: Reached target basic.target - Basic System.
Jul 11 00:27:57.464056 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 11 00:27:57.483753 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 11 00:27:57.502778 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 11 00:27:57.518718 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 11 00:27:57.666733 kernel: EXT4-fs (vda9): mounted filesystem 66ba5133-8c5a-461b-b2c1-a823c72af79b r/w with ordered data mode. Quota mode: none.
Jul 11 00:27:57.669189 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 11 00:27:57.671719 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 11 00:27:57.684237 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 00:27:57.688234 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 11 00:27:57.691254 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 11 00:27:57.691325 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 11 00:27:57.693771 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 00:27:57.701052 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (817)
Jul 11 00:27:57.703110 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 11 00:27:57.706819 kernel: BTRFS info (device vda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe
Jul 11 00:27:57.706858 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:27:57.706872 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:27:57.709712 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:27:57.713036 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 11 00:27:57.717391 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 00:27:57.763892 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Jul 11 00:27:57.770751 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Jul 11 00:27:57.777689 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Jul 11 00:27:57.784269 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 11 00:27:57.900391 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 11 00:27:57.913909 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 11 00:27:57.917942 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 11 00:27:57.925465 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 11 00:27:57.925728 kernel: BTRFS info (device vda6): last unmount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe
Jul 11 00:27:57.950151 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 11 00:27:57.958481 ignition[930]: INFO : Ignition 2.19.0
Jul 11 00:27:57.958481 ignition[930]: INFO : Stage: mount
Jul 11 00:27:57.960559 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:27:57.960559 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:27:57.960559 ignition[930]: INFO : mount: mount passed
Jul 11 00:27:57.960559 ignition[930]: INFO : Ignition finished successfully
Jul 11 00:27:57.962176 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 11 00:27:58.010055 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 11 00:27:58.019620 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 00:27:58.033251 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (945)
Jul 11 00:27:58.033305 kernel: BTRFS info (device vda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe
Jul 11 00:27:58.033329 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 11 00:27:58.034994 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:27:58.038704 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:27:58.041342 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
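Aside: the four initrd-setup-root entries above are `cut` failing against account databases that do not exist yet on a first boot; the setup script then seeds them. A rough Python stand-in for that kind of field extraction (the field index is an assumption; the log does not show the actual cut flags):

    def first_field(path: str, sep: str = ":") -> list[str]:
        # Roughly `cut -d: -f1 <path>`; empty where the log prints
        # "No such file or directory" on first boot
        try:
            with open(path) as f:
                return [line.split(sep, 1)[0] for line in f if line.strip()]
        except FileNotFoundError:
            return []

    print(first_field("/sysroot/etc/passwd"))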
Jul 11 00:27:58.147987 ignition[962]: INFO : Ignition 2.19.0
Jul 11 00:27:58.147987 ignition[962]: INFO : Stage: files
Jul 11 00:27:58.150288 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:27:58.150288 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:27:58.150288 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Jul 11 00:27:58.154577 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 11 00:27:58.154577 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 11 00:27:58.154577 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 11 00:27:58.154577 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 11 00:27:58.155311 unknown[962]: wrote ssh authorized keys file for user: core
Jul 11 00:27:58.160798 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 11 00:27:58.160798 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 11 00:27:58.160798 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 11 00:27:58.160798 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 11 00:27:58.160798 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 11 00:27:58.197512 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 11 00:27:58.304995 systemd-networkd[775]: eth0: Gained IPv6LL
Jul 11 00:27:58.396399 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 11 00:27:58.399163 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 11 00:27:58.399163 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 11 00:27:58.399163 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 00:27:58.399163 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 00:27:58.399163 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 00:27:58.399163 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 00:27:58.399163 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 00:27:58.399163 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 00:27:58.399163 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 00:27:58.399163 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
"/sysroot/etc/flatcar/update.conf" Jul 11 00:27:58.399163 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 11 00:27:58.399163 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 11 00:27:58.399163 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 11 00:27:58.399163 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 11 00:27:59.024914 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 11 00:27:59.974397 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 11 00:27:59.974397 ignition[962]: INFO : files: op(c): [started] processing unit "containerd.service" Jul 11 00:27:59.978987 ignition[962]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 11 00:27:59.978987 ignition[962]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 11 00:27:59.978987 ignition[962]: INFO : files: op(c): [finished] processing unit "containerd.service" Jul 11 00:27:59.978987 ignition[962]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jul 11 00:27:59.978987 ignition[962]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 00:27:59.978987 ignition[962]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 00:27:59.978987 ignition[962]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jul 11 00:27:59.978987 ignition[962]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jul 11 00:27:59.978987 ignition[962]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:27:59.978987 ignition[962]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:27:59.978987 ignition[962]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jul 11 00:27:59.978987 ignition[962]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Jul 11 00:28:00.051204 ignition[962]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:28:00.096823 ignition[962]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:28:00.096823 ignition[962]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Jul 11 00:28:00.096823 ignition[962]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Jul 11 
Jul 11 00:28:00.096823 ignition[962]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Jul 11 00:28:00.096823 ignition[962]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 00:28:00.096823 ignition[962]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 00:28:00.096823 ignition[962]: INFO : files: files passed
Jul 11 00:28:00.096823 ignition[962]: INFO : Ignition finished successfully
Jul 11 00:28:00.121092 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 11 00:28:00.125041 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 11 00:28:00.127495 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 11 00:28:00.127658 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 11 00:28:00.142187 initrd-setup-root-after-ignition[991]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 11 00:28:00.146251 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:28:00.146251 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:28:00.150745 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:28:00.153924 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 00:28:00.155757 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 11 00:28:00.169037 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 11 00:28:00.206015 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 11 00:28:00.206191 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 11 00:28:00.208932 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 11 00:28:00.211471 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 11 00:28:00.213863 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 11 00:28:00.227072 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 11 00:28:00.255547 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 00:28:00.269970 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 11 00:28:00.283442 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:28:00.285017 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:28:00.287417 systemd[1]: Stopped target timers.target - Timer Units.
Jul 11 00:28:00.289651 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 11 00:28:00.289851 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 00:28:00.292228 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 11 00:28:00.294601 systemd[1]: Stopped target basic.target - Basic System.
Jul 11 00:28:00.296599 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
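Aside: both downloads in the files stage above ("GET ...: attempt #1") succeeded on the first try; Ignition numbers attempts because it retries failed fetches. A minimal retry loop in the same spirit (attempt count and backoff are illustrative, not Ignition's actual policy):

    import time
    import urllib.request

    def fetch(url: str, attempts: int = 5, base_delay: float = 1.0) -> bytes:
        for n in range(1, attempts + 1):
            print(f"GET {url}: attempt #{n}")
            try:
                with urllib.request.urlopen(url, timeout=30) as resp:
                    return resp.read()
            except OSError:
                if n == attempts:
                    raise
                time.sleep(base_delay * n)  # linear backoff, purely illustrative

    data = fetch("https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz")
    print(len(data), "bytes")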
Jul 11 00:28:00.299071 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 00:28:00.301503 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 11 00:28:00.303054 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 11 00:28:00.305438 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 00:28:00.308027 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 11 00:28:00.310246 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 11 00:28:00.312365 systemd[1]: Stopped target swap.target - Swaps.
Jul 11 00:28:00.314499 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 11 00:28:00.314706 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 00:28:00.317046 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:28:00.318970 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:28:00.321032 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 11 00:28:00.321210 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:28:00.323372 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 11 00:28:00.323533 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 11 00:28:00.326080 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 11 00:28:00.326227 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 00:28:00.328129 systemd[1]: Stopped target paths.target - Path Units.
Jul 11 00:28:00.330163 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 11 00:28:00.333735 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:28:00.335986 systemd[1]: Stopped target slices.target - Slice Units.
Jul 11 00:28:00.338057 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 11 00:28:00.340326 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 11 00:28:00.340460 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 00:28:00.342316 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 11 00:28:00.342432 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 00:28:00.344544 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 11 00:28:00.344721 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 00:28:00.347451 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 11 00:28:00.347606 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 11 00:28:00.357891 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 11 00:28:00.360563 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 11 00:28:00.362084 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 11 00:28:00.362303 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:28:00.364783 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 11 00:28:00.364989 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 00:28:00.371769 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 11 00:28:00.371953 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 11 00:28:00.391824 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 11 00:28:00.415069 ignition[1017]: INFO : Ignition 2.19.0
Jul 11 00:28:00.415069 ignition[1017]: INFO : Stage: umount
Jul 11 00:28:00.422018 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:28:00.422018 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:28:00.422018 ignition[1017]: INFO : umount: umount passed
Jul 11 00:28:00.422018 ignition[1017]: INFO : Ignition finished successfully
Jul 11 00:28:00.422606 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 11 00:28:00.422777 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 11 00:28:00.424963 systemd[1]: Stopped target network.target - Network.
Jul 11 00:28:00.426585 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 11 00:28:00.426662 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 11 00:28:00.428601 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 11 00:28:00.428653 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 11 00:28:00.430982 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 11 00:28:00.431042 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 11 00:28:00.433275 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 11 00:28:00.433328 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 11 00:28:00.436014 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 11 00:28:00.438364 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 11 00:28:00.442735 systemd-networkd[775]: eth0: DHCPv6 lease lost
Jul 11 00:28:00.472644 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 11 00:28:00.472826 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 11 00:28:00.475407 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 11 00:28:00.475545 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 11 00:28:00.479300 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 11 00:28:00.479368 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:28:00.489999 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 11 00:28:00.492330 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 11 00:28:00.492432 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 00:28:00.506834 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 11 00:28:00.506963 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:28:00.509479 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 11 00:28:00.509549 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:28:00.512349 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 11 00:28:00.512430 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:28:00.515112 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:28:00.543553 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 11 00:28:00.543804 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:28:00.547754 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 11 00:28:00.547842 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:28:00.549053 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 11 00:28:00.549103 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:28:00.551172 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 11 00:28:00.551235 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 00:28:00.553991 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 11 00:28:00.554056 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 11 00:28:00.555854 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 11 00:28:00.555932 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:28:00.568065 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 11 00:28:00.569300 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 11 00:28:00.569392 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:28:00.571868 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 11 00:28:00.571944 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 00:28:00.574253 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 11 00:28:00.574358 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:28:00.576878 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 00:28:00.576955 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:28:00.621769 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 11 00:28:00.621939 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 11 00:28:00.624407 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 11 00:28:00.624568 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 11 00:28:00.803519 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 11 00:28:00.803703 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 11 00:28:00.805880 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 11 00:28:00.807657 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 11 00:28:00.807729 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 11 00:28:00.820965 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 11 00:28:00.830237 systemd[1]: Switching root.
Jul 11 00:28:00.863570 systemd-journald[192]: Journal stopped
Jul 11 00:28:02.375242 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Jul 11 00:28:02.375331 kernel: SELinux: policy capability network_peer_controls=1
Jul 11 00:28:02.375355 kernel: SELinux: policy capability open_perms=1
Jul 11 00:28:02.375370 kernel: SELinux: policy capability extended_socket_class=1
Jul 11 00:28:02.375385 kernel: SELinux: policy capability always_check_network=0
Jul 11 00:28:02.375400 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 11 00:28:02.375422 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 11 00:28:02.375444 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 11 00:28:02.375459 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 11 00:28:02.375475 kernel: audit: type=1403 audit(1752193681.378:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 11 00:28:02.375491 systemd[1]: Successfully loaded SELinux policy in 46.784ms.
Jul 11 00:28:02.375525 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.991ms.
Jul 11 00:28:02.375543 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 11 00:28:02.375567 systemd[1]: Detected virtualization kvm.
Jul 11 00:28:02.375583 systemd[1]: Detected architecture x86-64.
Jul 11 00:28:02.375599 systemd[1]: Detected first boot.
Jul 11 00:28:02.375617 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 00:28:02.375633 zram_generator::config[1078]: No configuration found.
Jul 11 00:28:02.375654 systemd[1]: Populated /etc with preset unit settings.
Jul 11 00:28:02.375690 systemd[1]: Queued start job for default target multi-user.target.
Jul 11 00:28:02.375708 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 11 00:28:02.375725 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 11 00:28:02.375740 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 11 00:28:02.375756 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 11 00:28:02.375771 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 11 00:28:02.375790 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 11 00:28:02.375806 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 11 00:28:02.375822 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 11 00:28:02.375852 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 11 00:28:02.375868 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:28:02.375884 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:28:02.375899 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 11 00:28:02.375915 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 11 00:28:02.375930 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 11 00:28:02.375946 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 00:28:02.375961 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
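Aside: the audit record above carries its own epoch timestamp, audit(1752193681.378:2), which can be cross-checked against the journal's wall-clock stamps; converting it lands at 00:28:01.378 UTC, about a second before the journal resumed after the root switch. A one-line sketch of the conversion:

    from datetime import datetime, timezone

    # Epoch seconds taken from the audit(...) field above
    t = datetime.fromtimestamp(1752193681.378, tz=timezone.utc)
    print(t.isoformat())  # 2025-07-11T00:28:01.378000+00:00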
Jul 11 00:28:02.375977 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:28:02.376000 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 11 00:28:02.376017 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:28:02.376033 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 00:28:02.376049 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 00:28:02.376064 systemd[1]: Reached target swap.target - Swaps.
Jul 11 00:28:02.376079 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 11 00:28:02.376094 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 11 00:28:02.376109 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 11 00:28:02.376128 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 11 00:28:02.376144 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:28:02.376159 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:28:02.376174 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:28:02.376190 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 11 00:28:02.376205 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 11 00:28:02.376221 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 11 00:28:02.376237 systemd[1]: Mounting media.mount - External Media Directory...
Jul 11 00:28:02.376252 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:28:02.376273 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 11 00:28:02.376295 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 11 00:28:02.376310 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 11 00:28:02.376326 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 11 00:28:02.376344 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:28:02.376359 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 00:28:02.376375 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 11 00:28:02.376391 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:28:02.376407 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 00:28:02.376426 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:28:02.376442 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 11 00:28:02.376458 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:28:02.376474 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 11 00:28:02.376490 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 11 00:28:02.376506 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jul 11 00:28:02.376521 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 00:28:02.376537 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 00:28:02.376556 kernel: loop: module loaded
Jul 11 00:28:02.376572 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 11 00:28:02.376586 kernel: fuse: init (API version 7.39)
Jul 11 00:28:02.376601 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 11 00:28:02.376617 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 00:28:02.376640 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:28:02.376656 kernel: ACPI: bus type drm_connector registered
Jul 11 00:28:02.376672 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 11 00:28:02.378044 systemd-journald[1160]: Collecting audit messages is disabled.
Jul 11 00:28:02.378076 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 11 00:28:02.378094 systemd-journald[1160]: Journal started
Jul 11 00:28:02.378125 systemd-journald[1160]: Runtime Journal (/run/log/journal/374796f6d4ff4e20a338dcab938dc008) is 6.0M, max 48.3M, 42.2M free.
Jul 11 00:28:02.383702 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 00:28:02.385568 systemd[1]: Mounted media.mount - External Media Directory.
Jul 11 00:28:02.386898 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 11 00:28:02.388411 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 11 00:28:02.389858 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 11 00:28:02.391445 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:28:02.393328 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 11 00:28:02.393619 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 11 00:28:02.395734 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:28:02.396045 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:28:02.397827 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 00:28:02.398108 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 00:28:02.400008 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:28:02.400299 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:28:02.402184 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 11 00:28:02.402455 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 11 00:28:02.404205 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:28:02.404595 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:28:02.406605 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 11 00:28:02.408509 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:28:02.410612 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 00:28:02.413098 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
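Aside: the Runtime Journal status above is internally consistent: 6.0M in use against a 48.3M cap with 42.2M free, i.e. roughly 12% of the cap used. A quick check with those numbers:

    cur_mb, cap_mb, free_mb = 6.0, 48.3, 42.2  # from the Runtime Journal line above
    print(f"{cur_mb / cap_mb:.0%} of cap used")        # 12%
    print(f"{cap_mb - free_mb:.1f} MB accounted for")  # 6.1 MB, matching current usage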
Jul 11 00:28:02.431313 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 11 00:28:02.440825 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 11 00:28:02.444223 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 11 00:28:02.445857 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 11 00:28:02.457998 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 11 00:28:02.463874 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 11 00:28:02.465444 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 00:28:02.470903 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 11 00:28:02.472303 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 00:28:02.475346 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 00:28:02.482424 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 11 00:28:02.487133 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 11 00:28:02.487554 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 11 00:28:02.491201 systemd-journald[1160]: Time spent on flushing to /var/log/journal/374796f6d4ff4e20a338dcab938dc008 is 23.405ms for 985 entries.
Jul 11 00:28:02.491201 systemd-journald[1160]: System Journal (/var/log/journal/374796f6d4ff4e20a338dcab938dc008) is 8.0M, max 195.6M, 187.6M free.
Jul 11 00:28:02.495212 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:28:02.507102 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 11 00:28:02.520514 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:28:02.525146 udevadm[1221]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 11 00:28:02.531890 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Jul 11 00:28:02.531904 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Jul 11 00:28:02.538719 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 00:28:02.570205 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 11 00:28:02.575501 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 11 00:28:02.577903 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 11 00:28:02.618654 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 11 00:28:02.628211 systemd-journald[1160]: Received client request to flush runtime journal.
Jul 11 00:28:02.630119 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 00:28:02.632587 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 11 00:28:02.649654 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Jul 11 00:28:02.649695 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
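Aside: the flush statistics above give a per-entry cost directly; with 23.405 ms spent on 985 entries:

    flush_ms, entries = 23.405, 985  # from the systemd-journald line above
    print(f"{flush_ms / entries * 1000:.1f} us per entry")  # ~23.8 us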
Jul 11 00:28:02.658061 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:28:03.268496 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 11 00:28:03.282954 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:28:03.316485 systemd-udevd[1243]: Using default interface naming scheme 'v255'.
Jul 11 00:28:03.347039 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:28:03.358087 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 00:28:03.379095 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 11 00:28:03.405476 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jul 11 00:28:03.408137 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1263)
Jul 11 00:28:03.466279 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 11 00:28:03.518711 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 11 00:28:03.526905 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 00:28:03.529150 kernel: ACPI: button: Power Button [PWRF]
Jul 11 00:28:03.546928 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jul 11 00:28:03.547393 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 11 00:28:03.547630 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jul 11 00:28:03.548107 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 11 00:28:03.618783 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 11 00:28:03.629163 systemd-networkd[1247]: lo: Link UP
Jul 11 00:28:03.629391 systemd-networkd[1247]: lo: Gained carrier
Jul 11 00:28:03.632201 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:28:03.633137 systemd-networkd[1247]: Enumeration completed
Jul 11 00:28:03.633622 systemd-networkd[1247]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:28:03.633627 systemd-networkd[1247]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 00:28:03.634235 kernel: mousedev: PS/2 mouse device common for all mice
Jul 11 00:28:03.636167 systemd-networkd[1247]: eth0: Link UP
Jul 11 00:28:03.636229 systemd-networkd[1247]: eth0: Gained carrier
Jul 11 00:28:03.636301 systemd-networkd[1247]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:28:03.636872 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 00:28:03.655210 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 11 00:28:03.667988 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 00:28:03.668464 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:28:03.686943 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:28:03.705785 systemd-networkd[1247]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 00:28:03.736172 kernel: kvm_amd: TSC scaling supported
Jul 11 00:28:03.736254 kernel: kvm_amd: Nested Virtualization enabled
Jul 11 00:28:03.736299 kernel: kvm_amd: Nested Paging enabled
Jul 11 00:28:03.737105 kernel: kvm_amd: LBR virtualization supported
Jul 11 00:28:03.737156 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 11 00:28:03.738088 kernel: kvm_amd: Virtual GIF supported
Jul 11 00:28:03.770790 kernel: EDAC MC: Ver: 3.0.0
Jul 11 00:28:03.798849 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 11 00:28:03.812911 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 11 00:28:03.815848 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:28:03.826472 lvm[1291]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 11 00:28:03.875323 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 11 00:28:03.909177 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:28:03.920040 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 11 00:28:03.927349 lvm[1297]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 11 00:28:03.968200 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 11 00:28:03.970349 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 11 00:28:03.971832 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 11 00:28:03.971864 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 00:28:03.994043 systemd[1]: Reached target machines.target - Containers.
Jul 11 00:28:03.996788 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 11 00:28:04.012140 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 11 00:28:04.127097 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 11 00:28:04.198960 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:28:04.203518 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 11 00:28:04.219463 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 11 00:28:04.260400 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 11 00:28:04.275164 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 11 00:28:04.297005 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 11 00:28:04.321184 kernel: loop0: detected capacity change from 0 to 142488
Jul 11 00:28:04.619233 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 11 00:28:04.626562 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
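Aside: the DHCPv4 lease logged above (10.0.0.133/16, gateway 10.0.0.1) can be sanity-checked with the standard-library ipaddress module, for example confirming that the gateway is on-link:

    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.133/16")
    gateway = ipaddress.ip_address("10.0.0.1")
    print(iface.network)                # 10.0.0.0/16
    print(gateway in iface.network)     # True: gateway is on-link
    print(iface.network.num_addresses)  # 65536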
Jul 11 00:28:04.647865 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 11 00:28:04.704436 kernel: loop1: detected capacity change from 0 to 221472
Jul 11 00:28:04.763039 kernel: loop2: detected capacity change from 0 to 140768
Jul 11 00:28:04.887451 kernel: loop3: detected capacity change from 0 to 142488
Jul 11 00:28:04.936987 kernel: loop4: detected capacity change from 0 to 221472
Jul 11 00:28:04.962836 systemd-networkd[1247]: eth0: Gained IPv6LL
Jul 11 00:28:04.981380 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 11 00:28:05.004154 kernel: loop5: detected capacity change from 0 to 140768
Jul 11 00:28:05.083044 (sd-merge)[1317]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 11 00:28:05.087826 (sd-merge)[1317]: Merged extensions into '/usr'.
Jul 11 00:28:05.096973 systemd[1]: Reloading requested from client PID 1305 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 11 00:28:05.096994 systemd[1]: Reloading...
Jul 11 00:28:05.200708 zram_generator::config[1344]: No configuration found.
Jul 11 00:28:05.558086 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:28:05.651949 ldconfig[1302]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 11 00:28:05.701526 systemd[1]: Reloading finished in 603 ms.
Jul 11 00:28:05.737832 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 11 00:28:05.745094 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 11 00:28:05.774087 systemd[1]: Starting ensure-sysext.service...
Jul 11 00:28:05.782994 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 00:28:05.818212 systemd[1]: Reloading requested from client PID 1390 ('systemctl') (unit ensure-sysext.service)...
Jul 11 00:28:05.819804 systemd[1]: Reloading...
Jul 11 00:28:05.874408 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 11 00:28:05.875985 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 11 00:28:05.877351 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 11 00:28:05.880272 systemd-tmpfiles[1391]: ACLs are not supported, ignoring.
Jul 11 00:28:05.881031 systemd-tmpfiles[1391]: ACLs are not supported, ignoring.
Jul 11 00:28:05.891085 systemd-tmpfiles[1391]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 00:28:05.891108 systemd-tmpfiles[1391]: Skipping /boot
Jul 11 00:28:05.911664 systemd-tmpfiles[1391]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 00:28:05.911702 systemd-tmpfiles[1391]: Skipping /boot
Jul 11 00:28:05.986772 zram_generator::config[1418]: No configuration found.
Jul 11 00:28:06.173460 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:28:06.281648 systemd[1]: Reloading finished in 460 ms.
Jul 11 00:28:06.304788 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:28:06.341056 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 11 00:28:06.367051 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 11 00:28:06.390177 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 11 00:28:06.409045 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 00:28:06.417966 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 11 00:28:06.428170 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:28:06.428768 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:28:06.439465 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:28:06.445097 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:28:06.451067 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:28:06.454104 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:28:06.457029 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:28:06.458506 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:28:06.458960 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:28:06.470438 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:28:06.483379 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:28:06.489090 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:28:06.489435 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:28:06.502005 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 11 00:28:06.504470 augenrules[1490]: No rules
Jul 11 00:28:06.507520 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 11 00:28:06.526504 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 11 00:28:06.547228 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:28:06.547533 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 00:28:06.569119 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 00:28:06.588044 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 00:28:06.592627 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 00:28:06.604489 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 00:28:06.615091 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 00:28:06.618133 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 11 00:28:06.620789 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 11 00:28:06.622850 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 00:28:06.623144 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 00:28:06.627471 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 00:28:06.628640 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 00:28:06.636948 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 00:28:06.637219 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 00:28:06.639426 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 00:28:06.639696 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 00:28:06.649449 systemd[1]: Finished ensure-sysext.service.
Jul 11 00:28:06.659593 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 11 00:28:06.662067 systemd-resolved[1473]: Positive Trust Anchors:
Jul 11 00:28:06.662089 systemd-resolved[1473]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 00:28:06.662127 systemd-resolved[1473]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 00:28:06.668024 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 00:28:06.668150 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 00:28:06.668618 systemd-resolved[1473]: Defaulting to hostname 'linux'.
Jul 11 00:28:06.678012 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 11 00:28:06.681122 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 00:28:06.684701 systemd[1]: Reached target network.target - Network.
Jul 11 00:28:06.686347 systemd[1]: Reached target network-online.target - Network is Online.
Jul 11 00:28:06.688988 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:28:06.710660 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 11 00:28:06.712597 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 11 00:28:06.804099 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 11 00:28:06.809535 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 00:28:06.811107 systemd-timesyncd[1523]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 11 00:28:06.811178 systemd-timesyncd[1523]: Initial clock synchronization to Fri 2025-07-11 00:28:06.808816 UTC.
Jul 11 00:28:06.817791 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
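Aside: the timesyncd entries above report an initial synchronization to 00:28:06.808816 UTC, while the journal stamps that very entry at 00:28:06.811178. Assuming both stamps are on the same post-sync clock, the log line landed about 2.4 ms after the adjustment it describes:

    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f"
    synced_to = datetime.strptime("2025-07-11 00:28:06.808816", fmt)
    logged_at = datetime.strptime("2025-07-11 00:28:06.811178", fmt)
    print(f"{(logged_at - synced_to).total_seconds() * 1000:.3f} ms")  # 2.362 ms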
Jul 11 00:28:06.821988 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 11 00:28:06.831053 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 11 00:28:06.832700 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 11 00:28:06.832751 systemd[1]: Reached target paths.target - Path Units.
Jul 11 00:28:06.836168 systemd[1]: Reached target time-set.target - System Time Set.
Jul 11 00:28:06.837906 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 11 00:28:06.845085 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 11 00:28:06.846610 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 00:28:06.858136 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 11 00:28:06.869414 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 11 00:28:06.872873 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 11 00:28:06.881879 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 11 00:28:06.892271 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 00:28:06.893506 systemd[1]: Reached target basic.target - Basic System.
Jul 11 00:28:06.900093 systemd[1]: System is tainted: cgroupsv1
Jul 11 00:28:06.900185 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 11 00:28:06.900219 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 11 00:28:06.903453 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 11 00:28:06.913012 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 11 00:28:06.917441 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 11 00:28:06.922918 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 11 00:28:06.935764 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 11 00:28:06.937474 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 11 00:28:06.950884 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:28:06.957113 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 11 00:28:06.965733 jq[1533]: false
Jul 11 00:28:06.976630 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 11 00:28:06.983805 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 11 00:28:07.000698 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 11 00:28:07.023396 extend-filesystems[1534]: Found loop3 Jul 11 00:28:07.037391 extend-filesystems[1534]: Found loop4 Jul 11 00:28:07.037391 extend-filesystems[1534]: Found loop5 Jul 11 00:28:07.037391 extend-filesystems[1534]: Found sr0 Jul 11 00:28:07.037391 extend-filesystems[1534]: Found vda Jul 11 00:28:07.037391 extend-filesystems[1534]: Found vda1 Jul 11 00:28:07.037391 extend-filesystems[1534]: Found vda2 Jul 11 00:28:07.037391 extend-filesystems[1534]: Found vda3 Jul 11 00:28:07.037391 extend-filesystems[1534]: Found usr Jul 11 00:28:07.037391 extend-filesystems[1534]: Found vda4 Jul 11 00:28:07.037391 extend-filesystems[1534]: Found vda6 Jul 11 00:28:07.037391 extend-filesystems[1534]: Found vda7 Jul 11 00:28:07.037391 extend-filesystems[1534]: Found vda9 Jul 11 00:28:07.037391 extend-filesystems[1534]: Checking size of /dev/vda9 Jul 11 00:28:07.030964 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 11 00:28:07.053054 dbus-daemon[1531]: [system] SELinux support is enabled Jul 11 00:28:07.092179 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 11 00:28:07.092214 extend-filesystems[1534]: Resized partition /dev/vda9 Jul 11 00:28:07.068738 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 11 00:28:07.099305 extend-filesystems[1566]: resize2fs 1.47.1 (20-May-2024) Jul 11 00:28:07.070626 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 11 00:28:07.111086 systemd[1]: Starting update-engine.service - Update Engine... Jul 11 00:28:07.130040 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 11 00:28:07.133067 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 11 00:28:07.152238 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 11 00:28:07.152653 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 11 00:28:07.159391 systemd[1]: motdgen.service: Deactivated successfully. Jul 11 00:28:07.163267 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 11 00:28:07.169749 update_engine[1568]: I20250711 00:28:07.163465 1568 main.cc:92] Flatcar Update Engine starting Jul 11 00:28:07.169749 update_engine[1568]: I20250711 00:28:07.168123 1568 update_check_scheduler.cc:74] Next update check in 3m33s Jul 11 00:28:07.164161 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 11 00:28:07.169178 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 11 00:28:07.169574 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 11 00:28:07.177101 jq[1569]: true Jul 11 00:28:07.204814 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 11 00:28:07.205291 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 11 00:28:07.207160 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 11 00:28:07.211715 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1577) Jul 11 00:28:07.260805 jq[1578]: true Jul 11 00:28:07.260994 extend-filesystems[1566]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 11 00:28:07.260994 extend-filesystems[1566]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 11 00:28:07.260994 extend-filesystems[1566]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Jul 11 00:28:07.212544 (ntainerd)[1580]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 11 00:28:07.285464 extend-filesystems[1534]: Resized filesystem in /dev/vda9 Jul 11 00:28:07.240317 systemd[1]: Started update-engine.service - Update Engine. Jul 11 00:28:07.243544 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 11 00:28:07.243643 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 11 00:28:07.243669 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 11 00:28:07.246881 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 11 00:28:07.246913 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 11 00:28:07.253744 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 11 00:28:07.263751 systemd-logind[1559]: Watching system buttons on /dev/input/event1 (Power Button) Jul 11 00:28:07.263854 systemd-logind[1559]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 11 00:28:07.268190 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 11 00:28:07.269263 systemd-logind[1559]: New seat seat0. Jul 11 00:28:07.292370 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 11 00:28:07.293882 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 11 00:28:07.297093 systemd[1]: Started systemd-logind.service - User Login Management. Jul 11 00:28:07.312771 tar[1574]: linux-amd64/helm Jul 11 00:28:07.334412 bash[1621]: Updated "/home/core/.ssh/authorized_keys" Jul 11 00:28:07.342742 sshd_keygen[1567]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 11 00:28:07.348830 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 11 00:28:07.352588 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 11 00:28:07.381244 locksmithd[1606]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 11 00:28:07.382195 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 11 00:28:07.395383 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 11 00:28:07.409926 systemd[1]: issuegen.service: Deactivated successfully. Jul 11 00:28:07.410282 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 11 00:28:07.421181 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 11 00:28:07.437069 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 11 00:28:07.450274 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 11 00:28:07.467225 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 11 00:28:07.469348 systemd[1]: Reached target getty.target - Login Prompts. Jul 11 00:28:07.488228 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
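For scale, the extend-filesystems resize above grew the root filesystem online while mounted on /: with the 4 KiB block size the kernel reports, /dev/vda9 went from 553472 × 4096 bytes (about 2.1 GiB) to 1864699 × 4096 bytes (about 7.1 GiB).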
Jul 11 00:28:07.495119 systemd[1]: Started sshd@0-10.0.0.133:22-10.0.0.1:41542.service - OpenSSH per-connection server daemon (10.0.0.1:41542).
Jul 11 00:28:07.525263 containerd[1580]: time="2025-07-11T00:28:07.525142572Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jul 11 00:28:07.551975 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 41542 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0
Jul 11 00:28:07.553746 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:28:07.563790 containerd[1580]: time="2025-07-11T00:28:07.563741091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 11 00:28:07.567345 systemd-logind[1559]: New session 1 of user core.
Jul 11 00:28:07.568425 containerd[1580]: time="2025-07-11T00:28:07.568362915Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 11 00:28:07.568477 containerd[1580]: time="2025-07-11T00:28:07.568429410Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 11 00:28:07.568477 containerd[1580]: time="2025-07-11T00:28:07.568453412Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 11 00:28:07.568776 containerd[1580]: time="2025-07-11T00:28:07.568743938Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 11 00:28:07.568817 containerd[1580]: time="2025-07-11T00:28:07.568775272Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 11 00:28:07.569913 containerd[1580]: time="2025-07-11T00:28:07.568869747Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 11 00:28:07.569913 containerd[1580]: time="2025-07-11T00:28:07.568889762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 11 00:28:07.569913 containerd[1580]: time="2025-07-11T00:28:07.569240753Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 11 00:28:07.569913 containerd[1580]: time="2025-07-11T00:28:07.569261459Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 11 00:28:07.569913 containerd[1580]: time="2025-07-11T00:28:07.569279320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 11 00:28:07.569913 containerd[1580]: time="2025-07-11T00:28:07.569312698Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 11 00:28:07.569913 containerd[1580]: time="2025-07-11T00:28:07.569429461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 11 00:28:07.569913 containerd[1580]: time="2025-07-11T00:28:07.569750971Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 11 00:28:07.569547 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 11 00:28:07.570200 containerd[1580]: time="2025-07-11T00:28:07.569994405Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 11 00:28:07.570200 containerd[1580]: time="2025-07-11T00:28:07.570016613Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 11 00:28:07.570200 containerd[1580]: time="2025-07-11T00:28:07.570149405Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 11 00:28:07.570276 containerd[1580]: time="2025-07-11T00:28:07.570223954Z" level=info msg="metadata content store policy set" policy=shared
Jul 11 00:28:07.576102 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 11 00:28:07.589003 containerd[1580]: time="2025-07-11T00:28:07.588950847Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 11 00:28:07.589293 containerd[1580]: time="2025-07-11T00:28:07.589178884Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 11 00:28:07.589554 containerd[1580]: time="2025-07-11T00:28:07.589494744Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 11 00:28:07.589554 containerd[1580]: time="2025-07-11T00:28:07.589527231Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 11 00:28:07.589692 containerd[1580]: time="2025-07-11T00:28:07.589642431Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 11 00:28:07.591805 containerd[1580]: time="2025-07-11T00:28:07.589994253Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 11 00:28:07.591805 containerd[1580]: time="2025-07-11T00:28:07.590404887Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 11 00:28:07.591805 containerd[1580]: time="2025-07-11T00:28:07.590532390Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 11 00:28:07.591805 containerd[1580]: time="2025-07-11T00:28:07.590549650Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 11 00:28:07.591805 containerd[1580]: time="2025-07-11T00:28:07.590571859Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 11 00:28:07.591805 containerd[1580]: time="2025-07-11T00:28:07.590588828Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 11 00:28:07.591805 containerd[1580]: time="2025-07-11T00:28:07.590611187Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 11 00:28:07.591805 containerd[1580]: time="2025-07-11T00:28:07.590713045Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 11 00:28:07.591805 containerd[1580]: time="2025-07-11T00:28:07.590737657Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 11 00:28:07.591805 containerd[1580]: time="2025-07-11T00:28:07.590757402Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 11 00:28:07.591805 containerd[1580]: time="2025-07-11T00:28:07.590777206Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 11 00:28:07.591805 containerd[1580]: time="2025-07-11T00:28:07.590792583Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 11 00:28:07.591805 containerd[1580]: time="2025-07-11T00:28:07.590807018Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 11 00:28:07.591805 containerd[1580]: time="2025-07-11T00:28:07.590830279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 11 00:28:07.592252 containerd[1580]: time="2025-07-11T00:28:07.590847638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 11 00:28:07.592252 containerd[1580]: time="2025-07-11T00:28:07.590863386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 11 00:28:07.592252 containerd[1580]: time="2025-07-11T00:28:07.590880255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 11 00:28:07.592252 containerd[1580]: time="2025-07-11T00:28:07.590894871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 11 00:28:07.592252 containerd[1580]: time="2025-07-11T00:28:07.590911300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 11 00:28:07.592252 containerd[1580]: time="2025-07-11T00:28:07.590926626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 11 00:28:07.592252 containerd[1580]: time="2025-07-11T00:28:07.590942273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 11 00:28:07.592252 containerd[1580]: time="2025-07-11T00:28:07.590957800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 11 00:28:07.592252 containerd[1580]: time="2025-07-11T00:28:07.590975241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 11 00:28:07.592252 containerd[1580]: time="2025-07-11T00:28:07.590990227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 11 00:28:07.592252 containerd[1580]: time="2025-07-11T00:28:07.591007056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 11 00:28:07.592252 containerd[1580]: time="2025-07-11T00:28:07.591026029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 11 00:28:07.592252 containerd[1580]: time="2025-07-11T00:28:07.591050913Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 11 00:28:07.592252 containerd[1580]: time="2025-07-11T00:28:07.591077569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 11 00:28:07.592252 containerd[1580]: time="2025-07-11T00:28:07.591093947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 11 00:28:07.592625 containerd[1580]: time="2025-07-11T00:28:07.591108954Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 11 00:28:07.592625 containerd[1580]: time="2025-07-11T00:28:07.591163949Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 11 00:28:07.592625 containerd[1580]: time="2025-07-11T00:28:07.591184505Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 11 00:28:07.592625 containerd[1580]: time="2025-07-11T00:28:07.591197698Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 11 00:28:07.592625 containerd[1580]: time="2025-07-11T00:28:07.591214037Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 11 00:28:07.592625 containerd[1580]: time="2025-07-11T00:28:07.591227360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 11 00:28:07.592625 containerd[1580]: time="2025-07-11T00:28:07.591244128Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 11 00:28:07.592625 containerd[1580]: time="2025-07-11T00:28:07.591258934Z" level=info msg="NRI interface is disabled by configuration."
Jul 11 00:28:07.592625 containerd[1580]: time="2025-07-11T00:28:07.591278879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 11 00:28:07.592974 containerd[1580]: time="2025-07-11T00:28:07.591655876Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 11 00:28:07.593342 containerd[1580]: time="2025-07-11T00:28:07.593325291Z" level=info msg="Connect containerd service"
Jul 11 00:28:07.593457 containerd[1580]: time="2025-07-11T00:28:07.593442615Z" level=info msg="using legacy CRI server"
Jul 11 00:28:07.593524 containerd[1580]: time="2025-07-11T00:28:07.593511755Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 11 00:28:07.593959 containerd[1580]: time="2025-07-11T00:28:07.593938768Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 11 00:28:07.595224 containerd[1580]: time="2025-07-11T00:28:07.595199142Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 11 00:28:07.595477 containerd[1580]: time="2025-07-11T00:28:07.595412704Z" level=info msg="Start subscribing containerd event"
Jul 11 00:28:07.595573 containerd[1580]: time="2025-07-11T00:28:07.595560221Z" level=info msg="Start recovering state"
Jul 11 00:28:07.595782 containerd[1580]: time="2025-07-11T00:28:07.595752075Z" level=info msg="Start event monitor"
Jul 11 00:28:07.595886 containerd[1580]: time="2025-07-11T00:28:07.595872454Z" level=info msg="Start snapshots syncer"
Jul 11 00:28:07.595980 containerd[1580]: time="2025-07-11T00:28:07.595967931Z" level=info msg="Start cni network conf syncer for default"
Jul 11 00:28:07.596136 containerd[1580]: time="2025-07-11T00:28:07.596122480Z" level=info msg="Start streaming server"
Jul 11 00:28:07.596826 containerd[1580]: time="2025-07-11T00:28:07.596798105Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 11 00:28:07.597006 containerd[1580]: time="2025-07-11T00:28:07.596991232Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 11 00:28:07.597127 containerd[1580]: time="2025-07-11T00:28:07.597114786Z" level=info msg="containerd successfully booted in 0.074150s"
Jul 11 00:28:07.600325 systemd[1]: Started containerd.service - containerd container runtime.
Jul 11 00:28:07.603197 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 11 00:28:07.617824 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 11 00:28:07.631628 (systemd)[1666]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 11 00:28:07.941269 systemd[1666]: Queued start job for default target default.target.
Jul 11 00:28:07.943040 systemd[1666]: Created slice app.slice - User Application Slice.
Jul 11 00:28:07.943067 systemd[1666]: Reached target paths.target - Paths.
Jul 11 00:28:07.943083 systemd[1666]: Reached target timers.target - Timers.
Jul 11 00:28:07.949879 systemd[1666]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 11 00:28:07.964198 systemd[1666]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 11 00:28:07.964278 systemd[1666]: Reached target sockets.target - Sockets.
Jul 11 00:28:07.964294 systemd[1666]: Reached target basic.target - Basic System.
Jul 11 00:28:07.964337 systemd[1666]: Reached target default.target - Main User Target.
Jul 11 00:28:07.964373 systemd[1666]: Startup finished in 319ms.
Jul 11 00:28:07.965137 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 11 00:28:07.974184 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 11 00:28:08.055740 systemd[1]: Started sshd@1-10.0.0.133:22-10.0.0.1:41552.service - OpenSSH per-connection server daemon (10.0.0.1:41552).
Jul 11 00:28:08.102449 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 41552 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0
Jul 11 00:28:08.105363 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:28:08.143693 tar[1574]: linux-amd64/LICENSE
Jul 11 00:28:08.143693 tar[1574]: linux-amd64/README.md
Jul 11 00:28:08.161932 systemd-logind[1559]: New session 2 of user core.
Jul 11 00:28:08.163518 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 11 00:28:08.166945 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
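At this point containerd is serving its gRPC API on /run/containerd/containerd.sock (plus the ttrpc socket), as the serving... lines show. A minimal sketch of probing that socket with the official Go client, assuming the github.com/containerd/containerd module is available and the caller can read the socket, would be:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Dial the same UNIX socket containerd reports it is serving on.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatalf("connect: %v", err)
        }
        defer client.Close()

        // containerd scopes resources by namespace; the CRI plugin uses
        // "k8s.io", plain clients default to "default".
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        version, err := client.Version(ctx)
        if err != nil {
            log.Fatalf("version: %v", err)
        }
        // Against this boot it should report v1.7.21 and the revision
        // printed in the "starting containerd" line above.
        fmt.Println("containerd", version.Version, version.Revision)
    }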
Jul 11 00:28:08.235087 sshd[1678]: pam_unix(sshd:session): session closed for user core Jul 11 00:28:08.245160 systemd[1]: Started sshd@2-10.0.0.133:22-10.0.0.1:41568.service - OpenSSH per-connection server daemon (10.0.0.1:41568). Jul 11 00:28:08.248759 systemd[1]: sshd@1-10.0.0.133:22-10.0.0.1:41552.service: Deactivated successfully. Jul 11 00:28:08.251902 systemd[1]: session-2.scope: Deactivated successfully. Jul 11 00:28:08.254367 systemd-logind[1559]: Session 2 logged out. Waiting for processes to exit. Jul 11 00:28:08.256192 systemd-logind[1559]: Removed session 2. Jul 11 00:28:08.288298 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 41568 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:28:08.291067 sshd[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:28:08.296097 systemd-logind[1559]: New session 3 of user core. Jul 11 00:28:08.330220 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 11 00:28:08.391466 sshd[1688]: pam_unix(sshd:session): session closed for user core Jul 11 00:28:08.399277 systemd[1]: sshd@2-10.0.0.133:22-10.0.0.1:41568.service: Deactivated successfully. Jul 11 00:28:08.401910 systemd-logind[1559]: Session 3 logged out. Waiting for processes to exit. Jul 11 00:28:08.402042 systemd[1]: session-3.scope: Deactivated successfully. Jul 11 00:28:08.403005 systemd-logind[1559]: Removed session 3. Jul 11 00:28:09.479891 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:28:09.481943 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 11 00:28:09.484826 systemd[1]: Startup finished in 8.591s (kernel) + 8.152s (userspace) = 16.743s. Jul 11 00:28:09.486864 (kubelet)[1707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:28:10.299693 kubelet[1707]: E0711 00:28:10.299614 1707 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:28:10.303050 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:28:10.303433 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:28:18.405027 systemd[1]: Started sshd@3-10.0.0.133:22-10.0.0.1:45188.service - OpenSSH per-connection server daemon (10.0.0.1:45188). Jul 11 00:28:18.440904 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 45188 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:28:18.442712 sshd[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:28:18.447440 systemd-logind[1559]: New session 4 of user core. Jul 11 00:28:18.457995 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 11 00:28:18.515015 sshd[1720]: pam_unix(sshd:session): session closed for user core Jul 11 00:28:18.530139 systemd[1]: Started sshd@4-10.0.0.133:22-10.0.0.1:45202.service - OpenSSH per-connection server daemon (10.0.0.1:45202). Jul 11 00:28:18.530734 systemd[1]: sshd@3-10.0.0.133:22-10.0.0.1:45188.service: Deactivated successfully. Jul 11 00:28:18.533763 systemd-logind[1559]: Session 4 logged out. Waiting for processes to exit. Jul 11 00:28:18.535762 systemd[1]: session-4.scope: Deactivated successfully. 
Jul 11 00:28:18.536965 systemd-logind[1559]: Removed session 4. Jul 11 00:28:18.564431 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 45202 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:28:18.566124 sshd[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:28:18.570353 systemd-logind[1559]: New session 5 of user core. Jul 11 00:28:18.579923 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 11 00:28:18.631770 sshd[1725]: pam_unix(sshd:session): session closed for user core Jul 11 00:28:18.645988 systemd[1]: Started sshd@5-10.0.0.133:22-10.0.0.1:45212.service - OpenSSH per-connection server daemon (10.0.0.1:45212). Jul 11 00:28:18.646490 systemd[1]: sshd@4-10.0.0.133:22-10.0.0.1:45202.service: Deactivated successfully. Jul 11 00:28:18.650251 systemd-logind[1559]: Session 5 logged out. Waiting for processes to exit. Jul 11 00:28:18.651020 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 00:28:18.651959 systemd-logind[1559]: Removed session 5. Jul 11 00:28:18.682738 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 45212 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:28:18.684584 sshd[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:28:18.690267 systemd-logind[1559]: New session 6 of user core. Jul 11 00:28:18.709201 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 11 00:28:18.766792 sshd[1733]: pam_unix(sshd:session): session closed for user core Jul 11 00:28:18.777257 systemd[1]: Started sshd@6-10.0.0.133:22-10.0.0.1:45222.service - OpenSSH per-connection server daemon (10.0.0.1:45222). Jul 11 00:28:18.777948 systemd[1]: sshd@5-10.0.0.133:22-10.0.0.1:45212.service: Deactivated successfully. Jul 11 00:28:18.782054 systemd[1]: session-6.scope: Deactivated successfully. Jul 11 00:28:18.783084 systemd-logind[1559]: Session 6 logged out. Waiting for processes to exit. Jul 11 00:28:18.784392 systemd-logind[1559]: Removed session 6. Jul 11 00:28:18.811055 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 45222 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:28:18.813348 sshd[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:28:18.817884 systemd-logind[1559]: New session 7 of user core. Jul 11 00:28:18.827957 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 11 00:28:18.886302 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 11 00:28:18.886642 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:28:18.901583 sudo[1748]: pam_unix(sudo:session): session closed for user root Jul 11 00:28:18.904067 sshd[1741]: pam_unix(sshd:session): session closed for user core Jul 11 00:28:18.913001 systemd[1]: Started sshd@7-10.0.0.133:22-10.0.0.1:45224.service - OpenSSH per-connection server daemon (10.0.0.1:45224). Jul 11 00:28:18.913620 systemd[1]: sshd@6-10.0.0.133:22-10.0.0.1:45222.service: Deactivated successfully. Jul 11 00:28:18.916868 systemd-logind[1559]: Session 7 logged out. Waiting for processes to exit. Jul 11 00:28:18.917668 systemd[1]: session-7.scope: Deactivated successfully. Jul 11 00:28:18.918818 systemd-logind[1559]: Removed session 7. 
Jul 11 00:28:18.949980 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 45224 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:28:18.951803 sshd[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:28:18.956486 systemd-logind[1559]: New session 8 of user core. Jul 11 00:28:18.965926 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 11 00:28:19.022866 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 11 00:28:19.023311 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:28:19.029111 sudo[1758]: pam_unix(sudo:session): session closed for user root Jul 11 00:28:19.036275 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 11 00:28:19.036640 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:28:19.058106 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 11 00:28:19.060423 auditctl[1761]: No rules Jul 11 00:28:19.060979 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 00:28:19.061327 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 11 00:28:19.064515 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 11 00:28:19.100068 augenrules[1780]: No rules Jul 11 00:28:19.102499 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 11 00:28:19.103936 sudo[1757]: pam_unix(sudo:session): session closed for user root Jul 11 00:28:19.106080 sshd[1750]: pam_unix(sshd:session): session closed for user core Jul 11 00:28:19.118039 systemd[1]: Started sshd@8-10.0.0.133:22-10.0.0.1:45230.service - OpenSSH per-connection server daemon (10.0.0.1:45230). Jul 11 00:28:19.118554 systemd[1]: sshd@7-10.0.0.133:22-10.0.0.1:45224.service: Deactivated successfully. Jul 11 00:28:19.121809 systemd-logind[1559]: Session 8 logged out. Waiting for processes to exit. Jul 11 00:28:19.123277 systemd[1]: session-8.scope: Deactivated successfully. Jul 11 00:28:19.124636 systemd-logind[1559]: Removed session 8. Jul 11 00:28:19.151369 sshd[1786]: Accepted publickey for core from 10.0.0.1 port 45230 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:28:19.153236 sshd[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:28:19.157868 systemd-logind[1559]: New session 9 of user core. Jul 11 00:28:19.168980 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 11 00:28:19.223814 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 00:28:19.224250 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:28:19.835986 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 11 00:28:19.837012 (dockerd)[1812]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 11 00:28:20.210512 dockerd[1812]: time="2025-07-11T00:28:20.210337005Z" level=info msg="Starting up" Jul 11 00:28:20.470451 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 11 00:28:20.485139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 11 00:28:21.612903 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:28:21.619096 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:28:21.719754 kubelet[1847]: E0711 00:28:21.719644 1847 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:28:21.727581 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:28:21.727944 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:28:23.047132 dockerd[1812]: time="2025-07-11T00:28:23.047070879Z" level=info msg="Loading containers: start." Jul 11 00:28:23.917699 kernel: Initializing XFRM netlink socket Jul 11 00:28:24.009629 systemd-networkd[1247]: docker0: Link UP Jul 11 00:28:24.298386 dockerd[1812]: time="2025-07-11T00:28:24.298249029Z" level=info msg="Loading containers: done." Jul 11 00:28:24.315081 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1295920467-merged.mount: Deactivated successfully. Jul 11 00:28:24.997966 dockerd[1812]: time="2025-07-11T00:28:24.997868415Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 11 00:28:24.998167 dockerd[1812]: time="2025-07-11T00:28:24.998014973Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 11 00:28:24.998203 dockerd[1812]: time="2025-07-11T00:28:24.998185745Z" level=info msg="Daemon has completed initialization" Jul 11 00:28:27.188921 dockerd[1812]: time="2025-07-11T00:28:27.188759455Z" level=info msg="API listen on /run/docker.sock" Jul 11 00:28:27.189175 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 11 00:28:28.192118 containerd[1580]: time="2025-07-11T00:28:28.192068418Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 11 00:28:31.978129 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 11 00:28:31.993854 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:28:32.166472 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:28:32.171984 (kubelet)[1995]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:28:32.290519 kubelet[1995]: E0711 00:28:32.290365 1995 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:28:32.294412 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:28:32.294728 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:28:35.726995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2897981230.mount: Deactivated successfully. 
Jul 11 00:28:41.555160 containerd[1580]: time="2025-07-11T00:28:41.555060023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:28:41.558698 containerd[1580]: time="2025-07-11T00:28:41.558637350Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 11 00:28:41.564265 containerd[1580]: time="2025-07-11T00:28:41.564203068Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:28:41.569236 containerd[1580]: time="2025-07-11T00:28:41.569168188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:28:41.570695 containerd[1580]: time="2025-07-11T00:28:41.570639736Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 13.378522007s" Jul 11 00:28:41.570749 containerd[1580]: time="2025-07-11T00:28:41.570702583Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 11 00:28:41.571379 containerd[1580]: time="2025-07-11T00:28:41.571351792Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 11 00:28:42.461910 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 11 00:28:42.471882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:28:42.658108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:28:42.662825 (kubelet)[2070]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:28:43.028751 kubelet[2070]: E0711 00:28:43.028651 2070 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:28:43.033154 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:28:43.033445 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
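Back-of-the-envelope from the pull above: 28077744 bytes read in 13.378522007s works out to roughly 2.1 MB/s from registry.k8s.io, all while the kubelet keeps crash-looping in the background.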
Jul 11 00:28:45.030862 containerd[1580]: time="2025-07-11T00:28:45.030777907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:28:45.052976 containerd[1580]: time="2025-07-11T00:28:45.052882890Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294"
Jul 11 00:28:45.069600 containerd[1580]: time="2025-07-11T00:28:45.069538993Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:28:45.092711 containerd[1580]: time="2025-07-11T00:28:45.092621319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:28:45.094541 containerd[1580]: time="2025-07-11T00:28:45.094476858Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 3.523079663s"
Jul 11 00:28:45.094624 containerd[1580]: time="2025-07-11T00:28:45.094550034Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\""
Jul 11 00:28:45.095250 containerd[1580]: time="2025-07-11T00:28:45.095220985Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 11 00:28:47.296665 containerd[1580]: time="2025-07-11T00:28:47.296550124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:28:47.298461 containerd[1580]: time="2025-07-11T00:28:47.297807019Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671"
Jul 11 00:28:47.299755 containerd[1580]: time="2025-07-11T00:28:47.299691846Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:28:47.306794 containerd[1580]: time="2025-07-11T00:28:47.306724156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:28:47.308025 containerd[1580]: time="2025-07-11T00:28:47.307973248Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 2.212712698s"
Jul 11 00:28:47.308093 containerd[1580]: time="2025-07-11T00:28:47.308028630Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\""
Jul 11 00:28:47.308613 containerd[1580]: time="2025-07-11T00:28:47.308547950Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 11 00:28:50.706919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1131540474.mount: Deactivated successfully.
Jul 11 00:28:52.771947 update_engine[1568]: I20250711 00:28:52.771818 1568 update_attempter.cc:509] Updating boot flags...
Jul 11 00:28:52.886103 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2098)
Jul 11 00:28:52.938712 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2099)
Jul 11 00:28:52.980502 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2099)
Jul 11 00:28:53.211773 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 11 00:28:53.219946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:28:53.394088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:28:53.401437 (kubelet)[2123]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 11 00:28:54.388178 kubelet[2123]: E0711 00:28:54.388102 2123 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 11 00:28:54.392555 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 11 00:28:54.392929 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 11 00:28:54.905270 containerd[1580]: time="2025-07-11T00:28:54.905181586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:28:54.911785 containerd[1580]: time="2025-07-11T00:28:54.911701476Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 11 00:28:54.919468 containerd[1580]: time="2025-07-11T00:28:54.919364532Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:28:54.929417 containerd[1580]: time="2025-07-11T00:28:54.929332257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:28:54.930326 containerd[1580]: time="2025-07-11T00:28:54.930252938Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 7.621654545s" Jul 11 00:28:54.930326 containerd[1580]: time="2025-07-11T00:28:54.930310606Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 11 00:28:54.930997 containerd[1580]: time="2025-07-11T00:28:54.930940593Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 11 00:28:56.356004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2034383886.mount: Deactivated successfully. 
Jul 11 00:28:58.000995 containerd[1580]: time="2025-07-11T00:28:58.000927678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:28:58.001940 containerd[1580]: time="2025-07-11T00:28:58.001867266Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 11 00:28:58.004116 containerd[1580]: time="2025-07-11T00:28:58.004001210Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:28:58.008104 containerd[1580]: time="2025-07-11T00:28:58.008067207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:28:58.009922 containerd[1580]: time="2025-07-11T00:28:58.009874158Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.078897957s" Jul 11 00:28:58.009922 containerd[1580]: time="2025-07-11T00:28:58.009915004Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 11 00:28:58.011004 containerd[1580]: time="2025-07-11T00:28:58.010818194Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 11 00:28:58.524739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2686456463.mount: Deactivated successfully. 
Jul 11 00:28:58.672120 containerd[1580]: time="2025-07-11T00:28:58.672020882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:28:58.679134 containerd[1580]: time="2025-07-11T00:28:58.679015930Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 11 00:28:58.681697 containerd[1580]: time="2025-07-11T00:28:58.681582302Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:28:58.685116 containerd[1580]: time="2025-07-11T00:28:58.684981412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:28:58.685826 containerd[1580]: time="2025-07-11T00:28:58.685767061Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 674.91824ms" Jul 11 00:28:58.685826 containerd[1580]: time="2025-07-11T00:28:58.685813508Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 11 00:28:58.686572 containerd[1580]: time="2025-07-11T00:28:58.686518277Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 11 00:28:59.582037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount109327459.mount: Deactivated successfully. Jul 11 00:29:04.461705 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 11 00:29:04.474922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:29:04.737151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:29:04.742949 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:29:04.979832 kubelet[2252]: E0711 00:29:04.979705 2252 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:29:04.983782 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:29:04.984068 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
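The kubelet messages in this log carry klog-style headers of the form Lmmdd hh:mm:ss.uuuuuu threadid file:line] msg, so "E0711 00:29:04.979705 2252 run.go:72]" above decodes as an Error line from July 11, PID 2252, emitted at run.go line 72. A small Go sketch (illustrative tooling, not something this host runs) that splits such a header:

    package main

    import (
        "fmt"
        "regexp"
    )

    // klog header: severity letter, month, day, wall time, PID, source file:line.
    var klogHeader = regexp.MustCompile(
        `^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

    func main() {
        line := `E0711 00:29:04.979705 2252 run.go:72] "command failed" err="failed to load kubelet config file"`
        m := klogHeader.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("not a klog line")
            return
        }
        fmt.Printf("severity=%s month=%s day=%s time=%s pid=%s source=%s\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
        fmt.Printf("message=%s\n", m[7])
    }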
Jul 11 00:29:06.178397 containerd[1580]: time="2025-07-11T00:29:06.178315758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:29:06.190741 containerd[1580]: time="2025-07-11T00:29:06.190635494Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 11 00:29:06.197553 containerd[1580]: time="2025-07-11T00:29:06.197459305Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:29:06.204107 containerd[1580]: time="2025-07-11T00:29:06.203990940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:29:06.205716 containerd[1580]: time="2025-07-11T00:29:06.205635420Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 7.519081126s" Jul 11 00:29:06.206220 containerd[1580]: time="2025-07-11T00:29:06.205894936Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 11 00:29:08.175501 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:29:08.209971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:29:08.236227 systemd[1]: Reloading requested from client PID 2294 ('systemctl') (unit session-9.scope)... Jul 11 00:29:08.236248 systemd[1]: Reloading... Jul 11 00:29:08.328716 zram_generator::config[2339]: No configuration found. Jul 11 00:29:09.726201 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:29:09.804247 systemd[1]: Reloading finished in 1567 ms. Jul 11 00:29:09.845546 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 11 00:29:09.845652 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 11 00:29:09.846173 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:29:09.847914 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:29:10.031523 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:29:10.037740 (kubelet)[2393]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:29:10.244228 kubelet[2393]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:29:10.244228 kubelet[2393]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 11 00:29:10.244228 kubelet[2393]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:29:10.244774 kubelet[2393]: I0711 00:29:10.244253 2393 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:29:10.622456 kubelet[2393]: I0711 00:29:10.622389 2393 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 11 00:29:10.622456 kubelet[2393]: I0711 00:29:10.622436 2393 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:29:10.622792 kubelet[2393]: I0711 00:29:10.622760 2393 server.go:934] "Client rotation is on, will bootstrap in background" Jul 11 00:29:10.721280 kubelet[2393]: E0711 00:29:10.721228 2393 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:29:10.722139 kubelet[2393]: I0711 00:29:10.722119 2393 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:29:10.801041 kubelet[2393]: E0711 00:29:10.800999 2393 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:29:10.801041 kubelet[2393]: I0711 00:29:10.801033 2393 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 11 00:29:10.808401 kubelet[2393]: I0711 00:29:10.808352 2393 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 00:29:10.808686 kubelet[2393]: I0711 00:29:10.808652 2393 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 11 00:29:10.808841 kubelet[2393]: I0711 00:29:10.808795 2393 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:29:10.808990 kubelet[2393]: I0711 00:29:10.808827 2393 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 11 00:29:10.809147 kubelet[2393]: I0711 00:29:10.808995 2393 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:29:10.809147 kubelet[2393]: I0711 00:29:10.809004 2393 container_manager_linux.go:300] "Creating device plugin manager" Jul 11 00:29:10.809147 kubelet[2393]: I0711 00:29:10.809120 2393 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:29:10.813671 kubelet[2393]: W0711 00:29:10.813604 2393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jul 11 00:29:10.813728 kubelet[2393]: E0711 00:29:10.813693 2393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:29:10.814132 kubelet[2393]: I0711 00:29:10.814092 2393 kubelet.go:408] "Attempting to sync node with API server" Jul 11 00:29:10.814173 kubelet[2393]: I0711 00:29:10.814134 2393 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:29:10.814198 kubelet[2393]: I0711 00:29:10.814180 2393 kubelet.go:314] "Adding apiserver pod source" Jul 11 00:29:10.814221 kubelet[2393]: I0711 
00:29:10.814205 2393 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:29:10.815988 kubelet[2393]: W0711 00:29:10.815945 2393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jul 11 00:29:10.816035 kubelet[2393]: E0711 00:29:10.815998 2393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:29:10.826905 kubelet[2393]: I0711 00:29:10.826872 2393 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 11 00:29:10.827567 kubelet[2393]: I0711 00:29:10.827547 2393 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:29:10.827642 kubelet[2393]: W0711 00:29:10.827622 2393 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 11 00:29:10.837630 kubelet[2393]: I0711 00:29:10.837460 2393 server.go:1274] "Started kubelet" Jul 11 00:29:10.838078 kubelet[2393]: I0711 00:29:10.838003 2393 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:29:10.838127 kubelet[2393]: I0711 00:29:10.838090 2393 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:29:10.838690 kubelet[2393]: I0711 00:29:10.838436 2393 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:29:10.839095 kubelet[2393]: I0711 00:29:10.839057 2393 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:29:10.839142 kubelet[2393]: I0711 00:29:10.839119 2393 server.go:449] "Adding debug handlers to kubelet server" Jul 11 00:29:10.840702 kubelet[2393]: I0711 00:29:10.840490 2393 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:29:10.843372 kubelet[2393]: I0711 00:29:10.842491 2393 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 11 00:29:10.843372 kubelet[2393]: I0711 00:29:10.842611 2393 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 11 00:29:10.843372 kubelet[2393]: I0711 00:29:10.842666 2393 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:29:10.843372 kubelet[2393]: W0711 00:29:10.843002 2393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jul 11 00:29:10.843372 kubelet[2393]: E0711 00:29:10.843043 2393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:29:10.843372 kubelet[2393]: E0711 00:29:10.843091 2393 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:29:10.843372 kubelet[2393]: E0711 00:29:10.843162 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="200ms" Jul 11 00:29:10.843372 kubelet[2393]: I0711 00:29:10.843210 2393 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:29:10.843372 kubelet[2393]: I0711 00:29:10.843286 2393 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:29:10.843372 kubelet[2393]: E0711 00:29:10.843312 2393 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:29:10.844153 kubelet[2393]: I0711 00:29:10.844131 2393 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:29:10.862364 kubelet[2393]: I0711 00:29:10.862301 2393 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 00:29:10.863897 kubelet[2393]: I0711 00:29:10.863863 2393 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 11 00:29:10.863897 kubelet[2393]: I0711 00:29:10.863889 2393 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 11 00:29:10.864060 kubelet[2393]: I0711 00:29:10.863943 2393 kubelet.go:2321] "Starting kubelet main sync loop" Jul 11 00:29:10.864060 kubelet[2393]: E0711 00:29:10.864013 2393 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:29:10.869288 kubelet[2393]: W0711 00:29:10.869236 2393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jul 11 00:29:10.869412 kubelet[2393]: E0711 00:29:10.869293 2393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:29:10.871813 kubelet[2393]: I0711 00:29:10.871659 2393 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 11 00:29:10.871813 kubelet[2393]: I0711 00:29:10.871689 2393 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 11 00:29:10.871813 kubelet[2393]: I0711 00:29:10.871717 2393 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:29:10.943849 kubelet[2393]: E0711 00:29:10.943708 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:29:10.964929 kubelet[2393]: E0711 00:29:10.964885 2393 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 00:29:11.044336 kubelet[2393]: E0711 00:29:11.044241 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:29:11.044650 
kubelet[2393]: E0711 00:29:11.044617 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="400ms" Jul 11 00:29:11.144805 kubelet[2393]: E0711 00:29:11.144730 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:29:11.165981 kubelet[2393]: E0711 00:29:11.165940 2393 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 00:29:11.245709 kubelet[2393]: E0711 00:29:11.245616 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:29:11.346427 kubelet[2393]: E0711 00:29:11.346348 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:29:11.445144 kubelet[2393]: E0711 00:29:11.445078 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="800ms" Jul 11 00:29:11.447176 kubelet[2393]: E0711 00:29:11.447137 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:29:11.547885 kubelet[2393]: E0711 00:29:11.547665 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:29:11.567033 kubelet[2393]: E0711 00:29:11.566954 2393 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 00:29:11.648489 kubelet[2393]: E0711 00:29:11.648420 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:29:11.749082 kubelet[2393]: E0711 00:29:11.749039 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:29:11.794906 kubelet[2393]: W0711 00:29:11.794851 2393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jul 11 00:29:11.795067 kubelet[2393]: E0711 00:29:11.794915 2393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:29:11.849511 kubelet[2393]: E0711 00:29:11.849355 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:29:11.915347 kubelet[2393]: W0711 00:29:11.915258 2393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jul 11 00:29:11.915347 kubelet[2393]: E0711 00:29:11.915333 2393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed 
to list *v1.Service: Get \"https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:29:11.949974 kubelet[2393]: E0711 00:29:11.949915 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:29:11.978800 kubelet[2393]: W0711 00:29:11.978707 2393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jul 11 00:29:11.978800 kubelet[2393]: E0711 00:29:11.978779 2393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:29:12.050443 kubelet[2393]: E0711 00:29:12.050363 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:29:12.064099 kubelet[2393]: W0711 00:29:12.064034 2393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jul 11 00:29:12.064147 kubelet[2393]: E0711 00:29:12.064102 2393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:29:12.150871 kubelet[2393]: E0711 00:29:12.150716 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:29:12.245960 kubelet[2393]: E0711 00:29:12.245886 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="1.6s" Jul 11 00:29:12.250872 kubelet[2393]: E0711 00:29:12.250789 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:29:12.351487 kubelet[2393]: E0711 00:29:12.351431 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:29:12.367736 kubelet[2393]: E0711 00:29:12.367651 2393 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 00:29:12.452317 kubelet[2393]: E0711 00:29:12.452135 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:29:12.552766 kubelet[2393]: E0711 00:29:12.552708 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:29:12.653352 kubelet[2393]: E0711 00:29:12.653274 2393 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:29:12.700090 kubelet[2393]: E0711 
00:29:12.698664 2393 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.133:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.133:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18510aeed052d049 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:29:10.837416009 +0000 UTC m=+0.641349747,LastTimestamp:2025-07-11 00:29:10.837416009 +0000 UTC m=+0.641349747,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 00:29:12.711995 kubelet[2393]: I0711 00:29:12.711950 2393 policy_none.go:49] "None policy: Start" Jul 11 00:29:12.712740 kubelet[2393]: I0711 00:29:12.712722 2393 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 11 00:29:12.712794 kubelet[2393]: I0711 00:29:12.712749 2393 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:29:12.727249 kubelet[2393]: E0711 00:29:12.727180 2393 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:29:12.738292 kubelet[2393]: I0711 00:29:12.738254 2393 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:29:12.738542 kubelet[2393]: I0711 00:29:12.738517 2393 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:29:12.738610 kubelet[2393]: I0711 00:29:12.738542 2393 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:29:12.739530 kubelet[2393]: I0711 00:29:12.739436 2393 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:29:12.740382 kubelet[2393]: E0711 00:29:12.740340 2393 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 11 00:29:12.840561 kubelet[2393]: I0711 00:29:12.840513 2393 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:29:12.841112 kubelet[2393]: E0711 00:29:12.841060 2393 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Jul 11 00:29:13.043001 kubelet[2393]: I0711 00:29:13.042868 2393 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:29:13.043401 kubelet[2393]: E0711 00:29:13.043364 2393 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Jul 11 00:29:13.445363 kubelet[2393]: I0711 00:29:13.445215 2393 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:29:13.445979 kubelet[2393]: E0711 00:29:13.445843 2393 kubelet_node_status.go:95] "Unable to register node with API server" err="Post 
\"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Jul 11 00:29:13.590630 kubelet[2393]: W0711 00:29:13.590540 2393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jul 11 00:29:13.590630 kubelet[2393]: E0711 00:29:13.590628 2393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:29:13.847075 kubelet[2393]: E0711 00:29:13.847004 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="3.2s" Jul 11 00:29:14.061061 kubelet[2393]: I0711 00:29:14.060987 2393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:29:14.061061 kubelet[2393]: I0711 00:29:14.061064 2393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:29:14.061294 kubelet[2393]: I0711 00:29:14.061097 2393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:29:14.061294 kubelet[2393]: I0711 00:29:14.061124 2393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df732401e61fc7316615e7e6aea24b33-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"df732401e61fc7316615e7e6aea24b33\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:29:14.061294 kubelet[2393]: I0711 00:29:14.061162 2393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df732401e61fc7316615e7e6aea24b33-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"df732401e61fc7316615e7e6aea24b33\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:29:14.061294 kubelet[2393]: I0711 00:29:14.061221 2393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:29:14.061294 kubelet[2393]: I0711 00:29:14.061260 2393 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:29:14.061463 kubelet[2393]: I0711 00:29:14.061286 2393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:29:14.061463 kubelet[2393]: I0711 00:29:14.061316 2393 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df732401e61fc7316615e7e6aea24b33-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"df732401e61fc7316615e7e6aea24b33\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:29:14.247540 kubelet[2393]: I0711 00:29:14.247490 2393 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:29:14.247896 kubelet[2393]: E0711 00:29:14.247855 2393 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Jul 11 00:29:14.279999 kubelet[2393]: E0711 00:29:14.279943 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:14.280278 kubelet[2393]: E0711 00:29:14.280225 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:14.280278 kubelet[2393]: E0711 00:29:14.280262 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:14.280960 containerd[1580]: time="2025-07-11T00:29:14.280898788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 11 00:29:14.280960 containerd[1580]: time="2025-07-11T00:29:14.280944596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:df732401e61fc7316615e7e6aea24b33,Namespace:kube-system,Attempt:0,}" Jul 11 00:29:14.281498 containerd[1580]: time="2025-07-11T00:29:14.280923816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 11 00:29:14.342232 kubelet[2393]: W0711 00:29:14.342141 2393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jul 11 00:29:14.342232 kubelet[2393]: E0711 00:29:14.342237 2393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:29:14.450797 kubelet[2393]: W0711 00:29:14.450672 2393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jul 11 00:29:14.450797 kubelet[2393]: E0711 00:29:14.450798 2393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:29:14.482014 kubelet[2393]: W0711 00:29:14.481925 2393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jul 11 00:29:14.482151 kubelet[2393]: E0711 00:29:14.482015 2393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:29:15.849305 kubelet[2393]: I0711 00:29:15.849255 2393 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:29:15.849760 kubelet[2393]: E0711 00:29:15.849734 2393 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Jul 11 00:29:16.658283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2772170018.mount: Deactivated successfully. 
Jul 11 00:29:17.030480 kubelet[2393]: E0711 00:29:17.030429 2393 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:29:17.048493 kubelet[2393]: E0711 00:29:17.048396 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="6.4s" Jul 11 00:29:17.413363 containerd[1580]: time="2025-07-11T00:29:17.413204750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:29:17.440242 containerd[1580]: time="2025-07-11T00:29:17.439968334Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 11 00:29:17.481278 containerd[1580]: time="2025-07-11T00:29:17.481193303Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:29:17.511061 containerd[1580]: time="2025-07-11T00:29:17.510986055Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:29:17.596031 containerd[1580]: time="2025-07-11T00:29:17.595878095Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 11 00:29:17.670481 containerd[1580]: time="2025-07-11T00:29:17.670264737Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:29:17.699033 containerd[1580]: time="2025-07-11T00:29:17.698844119Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 11 00:29:18.015303 containerd[1580]: time="2025-07-11T00:29:18.015212546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:29:18.016731 containerd[1580]: time="2025-07-11T00:29:18.016644144Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.735598974s" Jul 11 00:29:18.018170 containerd[1580]: time="2025-07-11T00:29:18.018083216Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 
3.737067573s" Jul 11 00:29:18.071236 containerd[1580]: time="2025-07-11T00:29:18.071160636Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.789979594s" Jul 11 00:29:18.616861 kubelet[2393]: W0711 00:29:18.616785 2393 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jul 11 00:29:18.616861 kubelet[2393]: E0711 00:29:18.616857 2393 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:29:18.925259 containerd[1580]: time="2025-07-11T00:29:18.924515597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:29:18.925259 containerd[1580]: time="2025-07-11T00:29:18.924602945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:29:18.925259 containerd[1580]: time="2025-07-11T00:29:18.924617343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:29:18.925259 containerd[1580]: time="2025-07-11T00:29:18.924756791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:29:18.927973 containerd[1580]: time="2025-07-11T00:29:18.927877140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:29:18.927973 containerd[1580]: time="2025-07-11T00:29:18.927937526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:29:18.927973 containerd[1580]: time="2025-07-11T00:29:18.927951152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:29:18.929188 containerd[1580]: time="2025-07-11T00:29:18.928036125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:29:18.946217 containerd[1580]: time="2025-07-11T00:29:18.945990368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:29:18.946217 containerd[1580]: time="2025-07-11T00:29:18.946078227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:29:18.946217 containerd[1580]: time="2025-07-11T00:29:18.946091361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:29:18.946548 containerd[1580]: time="2025-07-11T00:29:18.946233434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:29:19.032275 containerd[1580]: time="2025-07-11T00:29:19.032232048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e7004784ac28d7142d7ce833e1199065a3ae6af51993e5039191372a5e91fe2\"" Jul 11 00:29:19.034115 kubelet[2393]: E0711 00:29:19.034087 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:19.039570 containerd[1580]: time="2025-07-11T00:29:19.039530418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:df732401e61fc7316615e7e6aea24b33,Namespace:kube-system,Attempt:0,} returns sandbox id \"729bef0ab1f18e469a22cd0de3e2fe2ba47794f6f0382a192797418a29580d8b\"" Jul 11 00:29:19.040406 containerd[1580]: time="2025-07-11T00:29:19.040354209Z" level=info msg="CreateContainer within sandbox \"0e7004784ac28d7142d7ce833e1199065a3ae6af51993e5039191372a5e91fe2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 11 00:29:19.040719 kubelet[2393]: E0711 00:29:19.040688 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:19.044297 containerd[1580]: time="2025-07-11T00:29:19.044069065Z" level=info msg="CreateContainer within sandbox \"729bef0ab1f18e469a22cd0de3e2fe2ba47794f6f0382a192797418a29580d8b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 11 00:29:19.052114 kubelet[2393]: I0711 00:29:19.052077 2393 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:29:19.052731 kubelet[2393]: E0711 00:29:19.052708 2393 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Jul 11 00:29:19.075903 containerd[1580]: time="2025-07-11T00:29:19.075793672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf3496b5810c6ce7ce4a729aedb531753a62dfd937b8076c5adfa53b9123a5fe\"" Jul 11 00:29:19.076732 kubelet[2393]: E0711 00:29:19.076704 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:19.078723 containerd[1580]: time="2025-07-11T00:29:19.078653548Z" level=info msg="CreateContainer within sandbox \"0e7004784ac28d7142d7ce833e1199065a3ae6af51993e5039191372a5e91fe2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d1046bbb62a7e877910566c44e7bb784befc31ee62ac86999de4715183fadd99\"" Jul 11 00:29:19.079157 containerd[1580]: time="2025-07-11T00:29:19.079100836Z" level=info msg="CreateContainer within sandbox \"cf3496b5810c6ce7ce4a729aedb531753a62dfd937b8076c5adfa53b9123a5fe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 11 00:29:19.079360 containerd[1580]: 
time="2025-07-11T00:29:19.079329084Z" level=info msg="StartContainer for \"d1046bbb62a7e877910566c44e7bb784befc31ee62ac86999de4715183fadd99\"" Jul 11 00:29:19.083642 containerd[1580]: time="2025-07-11T00:29:19.083462774Z" level=info msg="CreateContainer within sandbox \"729bef0ab1f18e469a22cd0de3e2fe2ba47794f6f0382a192797418a29580d8b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f8cf51425e4b66acda5937de35ccfb9c1f95687b54e738ad2fb71a5825d1f180\"" Jul 11 00:29:19.084241 containerd[1580]: time="2025-07-11T00:29:19.084194077Z" level=info msg="StartContainer for \"f8cf51425e4b66acda5937de35ccfb9c1f95687b54e738ad2fb71a5825d1f180\"" Jul 11 00:29:19.103075 containerd[1580]: time="2025-07-11T00:29:19.102918304Z" level=info msg="CreateContainer within sandbox \"cf3496b5810c6ce7ce4a729aedb531753a62dfd937b8076c5adfa53b9123a5fe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a0ffb1c2f10a2267929a248729b2bd75d177048d7b950c59e0a2a6c50446da2f\"" Jul 11 00:29:19.103834 containerd[1580]: time="2025-07-11T00:29:19.103662352Z" level=info msg="StartContainer for \"a0ffb1c2f10a2267929a248729b2bd75d177048d7b950c59e0a2a6c50446da2f\"" Jul 11 00:29:19.292489 containerd[1580]: time="2025-07-11T00:29:19.292415765Z" level=info msg="StartContainer for \"d1046bbb62a7e877910566c44e7bb784befc31ee62ac86999de4715183fadd99\" returns successfully" Jul 11 00:29:19.292663 containerd[1580]: time="2025-07-11T00:29:19.292433890Z" level=info msg="StartContainer for \"f8cf51425e4b66acda5937de35ccfb9c1f95687b54e738ad2fb71a5825d1f180\" returns successfully" Jul 11 00:29:19.292663 containerd[1580]: time="2025-07-11T00:29:19.292438859Z" level=info msg="StartContainer for \"a0ffb1c2f10a2267929a248729b2bd75d177048d7b950c59e0a2a6c50446da2f\" returns successfully" Jul 11 00:29:19.932602 kubelet[2393]: E0711 00:29:19.888917 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:19.932602 kubelet[2393]: E0711 00:29:19.891178 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:19.932602 kubelet[2393]: E0711 00:29:19.893408 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:20.914190 kubelet[2393]: E0711 00:29:20.911870 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:20.914190 kubelet[2393]: E0711 00:29:20.912213 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:20.917913 kubelet[2393]: E0711 00:29:20.914770 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:21.504271 kubelet[2393]: E0711 00:29:21.504138 2393 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18510aeed052d049 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:29:10.837416009 +0000 UTC m=+0.641349747,LastTimestamp:2025-07-11 00:29:10.837416009 +0000 UTC m=+0.641349747,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 00:29:21.736821 kubelet[2393]: E0711 00:29:21.736563 2393 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18510aeed06c2124 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:29:10.839075108 +0000 UTC m=+0.643008846,LastTimestamp:2025-07-11 00:29:10.839075108 +0000 UTC m=+0.643008846,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 00:29:21.821363 kubelet[2393]: I0711 00:29:21.821205 2393 apiserver.go:52] "Watching apiserver" Jul 11 00:29:21.843565 kubelet[2393]: I0711 00:29:21.843503 2393 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 11 00:29:22.740516 kubelet[2393]: E0711 00:29:22.740471 2393 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 11 00:29:22.826818 kubelet[2393]: E0711 00:29:22.826758 2393 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 11 00:29:23.081753 kubelet[2393]: E0711 00:29:23.081614 2393 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:23.944888 kubelet[2393]: E0711 00:29:23.944367 2393 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 11 00:29:23.972620 kubelet[2393]: E0711 00:29:23.972580 2393 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 11 00:29:25.437166 kubelet[2393]: E0711 00:29:25.437108 2393 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 11 00:29:25.454809 kubelet[2393]: I0711 00:29:25.454748 2393 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:29:25.540611 kubelet[2393]: I0711 00:29:25.540562 2393 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 11 00:29:28.040969 systemd[1]: Reloading requested from client PID 2674 ('systemctl') (unit session-9.scope)... Jul 11 00:29:28.040991 systemd[1]: Reloading... Jul 11 00:29:28.127713 zram_generator::config[2714]: No configuration found. 
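Editor's note: the recurring dns.go:153 "Nameserver limits exceeded" warnings mean the host's /etc/resolv.conf lists more nameservers than the kubelet will propagate to pods — it keeps the first three, here 1.1.1.1, 1.0.0.1, and 8.8.8.8, and drops the rest. A hypothetical standalone check of the same condition in Go; the three-server cap is inferred from the applied line in the log and matches the glibc-style resolver limit:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// kubelet keeps at most three nameservers from the host's
	// resolv.conf; the log shows it applying 1.1.1.1 1.0.0.1 8.8.8.8
	// and warning about any others.
	const limit = 3

	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > limit {
		fmt.Printf("Nameserver limits exceeded: applying %v, omitting %v\n",
			servers[:limit], servers[limit:])
	} else {
		fmt.Printf("nameservers within limit: %v\n", servers)
	}
}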
Jul 11 00:29:28.301102 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:29:28.423869 systemd[1]: Reloading finished in 382 ms. Jul 11 00:29:28.469230 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:29:28.485438 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 00:29:28.485943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:29:28.500283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:29:28.681893 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:29:28.688722 (kubelet)[2768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:29:28.731292 kubelet[2768]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:29:28.731292 kubelet[2768]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 11 00:29:28.731292 kubelet[2768]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:29:28.731937 kubelet[2768]: I0711 00:29:28.731409 2768 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:29:28.740857 kubelet[2768]: I0711 00:29:28.740814 2768 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 11 00:29:28.740857 kubelet[2768]: I0711 00:29:28.740842 2768 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:29:28.741164 kubelet[2768]: I0711 00:29:28.741144 2768 server.go:934] "Client rotation is on, will bootstrap in background" Jul 11 00:29:28.742421 kubelet[2768]: I0711 00:29:28.742396 2768 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 11 00:29:28.744457 kubelet[2768]: I0711 00:29:28.744405 2768 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:29:28.748462 kubelet[2768]: E0711 00:29:28.748414 2768 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:29:28.748462 kubelet[2768]: I0711 00:29:28.748458 2768 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 11 00:29:28.755644 kubelet[2768]: I0711 00:29:28.755486 2768 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 00:29:28.757967 kubelet[2768]: I0711 00:29:28.756367 2768 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 11 00:29:28.757967 kubelet[2768]: I0711 00:29:28.756533 2768 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:29:28.757967 kubelet[2768]: I0711 00:29:28.756567 2768 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 11 00:29:28.757967 kubelet[2768]: I0711 00:29:28.756822 2768 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:29:28.758214 kubelet[2768]: I0711 00:29:28.756837 2768 container_manager_linux.go:300] "Creating device plugin manager" Jul 11 00:29:28.758214 kubelet[2768]: I0711 00:29:28.756870 2768 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:29:28.758214 kubelet[2768]: I0711 00:29:28.757008 2768 kubelet.go:408] "Attempting to sync node with API server" Jul 11 00:29:28.758214 kubelet[2768]: I0711 00:29:28.757033 2768 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:29:28.758214 kubelet[2768]: I0711 00:29:28.757080 2768 kubelet.go:314] "Adding apiserver pod source" Jul 11 00:29:28.758214 kubelet[2768]: I0711 00:29:28.757094 2768 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:29:28.760478 kubelet[2768]: I0711 00:29:28.760058 2768 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 11 00:29:28.760559 kubelet[2768]: I0711 00:29:28.760532 2768 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:29:28.762452 kubelet[2768]: I0711 00:29:28.762373 2768 server.go:1274] "Started kubelet" Jul 11 00:29:28.765494 kubelet[2768]: I0711 00:29:28.765459 2768 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:29:28.773514 kubelet[2768]: I0711 
00:29:28.772882 2768 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:29:28.774611 kubelet[2768]: I0711 00:29:28.774548 2768 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:29:28.775366 kubelet[2768]: I0711 00:29:28.775048 2768 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:29:28.776353 kubelet[2768]: I0711 00:29:28.775988 2768 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 11 00:29:28.776737 kubelet[2768]: I0711 00:29:28.776713 2768 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 11 00:29:28.777129 kubelet[2768]: I0711 00:29:28.776990 2768 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:29:28.777129 kubelet[2768]: I0711 00:29:28.777091 2768 server.go:449] "Adding debug handlers to kubelet server" Jul 11 00:29:28.777329 kubelet[2768]: E0711 00:29:28.777299 2768 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:29:28.777968 kubelet[2768]: I0711 00:29:28.777937 2768 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:29:28.778229 kubelet[2768]: I0711 00:29:28.778102 2768 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:29:28.779261 kubelet[2768]: I0711 00:29:28.779217 2768 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:29:28.782498 kubelet[2768]: I0711 00:29:28.781959 2768 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:29:28.798713 kubelet[2768]: I0711 00:29:28.798632 2768 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 00:29:28.827088 kubelet[2768]: I0711 00:29:28.821333 2768 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 11 00:29:28.827088 kubelet[2768]: I0711 00:29:28.821371 2768 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 11 00:29:28.827261 kubelet[2768]: I0711 00:29:28.827190 2768 kubelet.go:2321] "Starting kubelet main sync loop" Jul 11 00:29:28.827327 kubelet[2768]: E0711 00:29:28.827307 2768 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:29:28.885867 kubelet[2768]: I0711 00:29:28.885826 2768 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 11 00:29:28.885867 kubelet[2768]: I0711 00:29:28.885856 2768 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 11 00:29:28.886019 kubelet[2768]: I0711 00:29:28.885888 2768 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:29:28.886132 kubelet[2768]: I0711 00:29:28.886110 2768 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 11 00:29:28.886154 kubelet[2768]: I0711 00:29:28.886130 2768 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 11 00:29:28.886175 kubelet[2768]: I0711 00:29:28.886156 2768 policy_none.go:49] "None policy: Start" Jul 11 00:29:28.887004 kubelet[2768]: I0711 00:29:28.886982 2768 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 11 00:29:28.887052 kubelet[2768]: I0711 00:29:28.887008 2768 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:29:28.887196 kubelet[2768]: I0711 00:29:28.887168 2768 state_mem.go:75] "Updated machine memory state" Jul 11 00:29:28.889220 kubelet[2768]: I0711 00:29:28.889072 2768 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:29:28.889359 kubelet[2768]: I0711 00:29:28.889301 2768 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:29:28.889359 kubelet[2768]: I0711 00:29:28.889316 2768 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:29:28.889710 kubelet[2768]: I0711 00:29:28.889543 2768 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:29:28.977458 kubelet[2768]: I0711 00:29:28.977406 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df732401e61fc7316615e7e6aea24b33-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"df732401e61fc7316615e7e6aea24b33\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:29:28.977592 kubelet[2768]: I0711 00:29:28.977470 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df732401e61fc7316615e7e6aea24b33-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"df732401e61fc7316615e7e6aea24b33\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:29:28.977592 kubelet[2768]: I0711 00:29:28.977549 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df732401e61fc7316615e7e6aea24b33-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"df732401e61fc7316615e7e6aea24b33\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:29:28.977696 kubelet[2768]: I0711 00:29:28.977601 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:29:28.977696 kubelet[2768]: I0711 00:29:28.977629 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:29:28.977696 kubelet[2768]: I0711 00:29:28.977651 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:29:28.977696 kubelet[2768]: I0711 00:29:28.977671 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:29:28.977827 kubelet[2768]: I0711 00:29:28.977729 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:29:28.977827 kubelet[2768]: I0711 00:29:28.977752 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:29:28.996795 kubelet[2768]: I0711 00:29:28.996759 2768 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:29:29.179504 kubelet[2768]: I0711 00:29:29.179449 2768 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 11 00:29:29.179706 kubelet[2768]: I0711 00:29:29.179606 2768 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 11 00:29:29.325445 kubelet[2768]: E0711 00:29:29.325241 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:29.325445 kubelet[2768]: E0711 00:29:29.325342 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:29.325445 kubelet[2768]: E0711 00:29:29.325401 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:29.758392 kubelet[2768]: I0711 00:29:29.758339 2768 apiserver.go:52] "Watching apiserver" Jul 11 00:29:29.777392 kubelet[2768]: I0711 00:29:29.777355 2768 
desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 11 00:29:29.846204 kubelet[2768]: E0711 00:29:29.846145 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:29.846204 kubelet[2768]: E0711 00:29:29.846180 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:29.846536 kubelet[2768]: E0711 00:29:29.846480 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:29.995311 kubelet[2768]: I0711 00:29:29.995237 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.995218307 podStartE2EDuration="1.995218307s" podCreationTimestamp="2025-07-11 00:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:29:29.995033895 +0000 UTC m=+1.300524040" watchObservedRunningTime="2025-07-11 00:29:29.995218307 +0000 UTC m=+1.300708452" Jul 11 00:29:30.157515 kubelet[2768]: I0711 00:29:30.157358 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.157334852 podStartE2EDuration="2.157334852s" podCreationTimestamp="2025-07-11 00:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:29:30.13104369 +0000 UTC m=+1.436533835" watchObservedRunningTime="2025-07-11 00:29:30.157334852 +0000 UTC m=+1.462824997" Jul 11 00:29:30.275194 kubelet[2768]: I0711 00:29:30.274947 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.274902841 podStartE2EDuration="2.274902841s" podCreationTimestamp="2025-07-11 00:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:29:30.157720297 +0000 UTC m=+1.463210462" watchObservedRunningTime="2025-07-11 00:29:30.274902841 +0000 UTC m=+1.580392986" Jul 11 00:29:30.847526 kubelet[2768]: E0711 00:29:30.847489 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:31.814105 kubelet[2768]: E0711 00:29:31.814004 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:31.850335 kubelet[2768]: E0711 00:29:31.849745 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:32.221917 kubelet[2768]: I0711 00:29:32.221744 2768 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 11 00:29:32.222441 containerd[1580]: time="2025-07-11T00:29:32.222273008Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
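
The recurring dns.go:153 "Nameserver limits exceeded" errors above reflect the kubelet capping a pod's resolv.conf at the classic glibc resolver limit of three nameservers; the node's resolver configuration evidently lists more, so everything after 1.1.1.1, 1.0.0.1 and 8.8.8.8 is omitted. A minimal stdlib-Go sketch of that cap (illustrative only, not the kubelet's actual dns.go):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the glibc resolver limit (MAXNS = 3) that the
// kubelet enforces when composing a pod's resolv.conf.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// Same shape as the kubelet's warning: surplus nameservers are
		// dropped and only the first three are applied.
		fmt.Printf("Nameserver limits exceeded, applied line: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}
```
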
Jul 11 00:29:32.222959 kubelet[2768]: I0711 00:29:32.222470 2768 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 11 00:29:32.400320 kubelet[2768]: I0711 00:29:32.400036 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4605b660-5179-40df-b8b0-7a98c3dea4ba-xtables-lock\") pod \"kube-proxy-vfqqz\" (UID: \"4605b660-5179-40df-b8b0-7a98c3dea4ba\") " pod="kube-system/kube-proxy-vfqqz" Jul 11 00:29:32.400320 kubelet[2768]: I0711 00:29:32.400081 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4605b660-5179-40df-b8b0-7a98c3dea4ba-lib-modules\") pod \"kube-proxy-vfqqz\" (UID: \"4605b660-5179-40df-b8b0-7a98c3dea4ba\") " pod="kube-system/kube-proxy-vfqqz" Jul 11 00:29:32.400320 kubelet[2768]: I0711 00:29:32.400135 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4605b660-5179-40df-b8b0-7a98c3dea4ba-kube-proxy\") pod \"kube-proxy-vfqqz\" (UID: \"4605b660-5179-40df-b8b0-7a98c3dea4ba\") " pod="kube-system/kube-proxy-vfqqz" Jul 11 00:29:32.400320 kubelet[2768]: I0711 00:29:32.400187 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkzw8\" (UniqueName: \"kubernetes.io/projected/4605b660-5179-40df-b8b0-7a98c3dea4ba-kube-api-access-hkzw8\") pod \"kube-proxy-vfqqz\" (UID: \"4605b660-5179-40df-b8b0-7a98c3dea4ba\") " pod="kube-system/kube-proxy-vfqqz" Jul 11 00:29:32.509059 kubelet[2768]: E0711 00:29:32.508562 2768 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 11 00:29:32.509059 kubelet[2768]: E0711 00:29:32.508606 2768 projected.go:194] Error preparing data for projected volume kube-api-access-hkzw8 for pod kube-system/kube-proxy-vfqqz: configmap "kube-root-ca.crt" not found Jul 11 00:29:32.509059 kubelet[2768]: E0711 00:29:32.508729 2768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4605b660-5179-40df-b8b0-7a98c3dea4ba-kube-api-access-hkzw8 podName:4605b660-5179-40df-b8b0-7a98c3dea4ba nodeName:}" failed. No retries permitted until 2025-07-11 00:29:33.008702221 +0000 UTC m=+4.314192366 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hkzw8" (UniqueName: "kubernetes.io/projected/4605b660-5179-40df-b8b0-7a98c3dea4ba-kube-api-access-hkzw8") pod "kube-proxy-vfqqz" (UID: "4605b660-5179-40df-b8b0-7a98c3dea4ba") : configmap "kube-root-ca.crt" not found Jul 11 00:29:33.297410 kubelet[2768]: E0711 00:29:33.297299 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:33.299619 containerd[1580]: time="2025-07-11T00:29:33.298122539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vfqqz,Uid:4605b660-5179-40df-b8b0-7a98c3dea4ba,Namespace:kube-system,Attempt:0,}" Jul 11 00:29:33.378039 containerd[1580]: time="2025-07-11T00:29:33.377809001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:29:33.378039 containerd[1580]: time="2025-07-11T00:29:33.377985106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:29:33.378278 containerd[1580]: time="2025-07-11T00:29:33.378042446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:29:33.378278 containerd[1580]: time="2025-07-11T00:29:33.378178535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:29:33.407665 kubelet[2768]: I0711 00:29:33.407601 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpf87\" (UniqueName: \"kubernetes.io/projected/2dd658c0-4c8d-40a9-b3e5-dd3f07043621-kube-api-access-hpf87\") pod \"tigera-operator-5bf8dfcb4-w764w\" (UID: \"2dd658c0-4c8d-40a9-b3e5-dd3f07043621\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-w764w" Jul 11 00:29:33.407665 kubelet[2768]: I0711 00:29:33.407660 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2dd658c0-4c8d-40a9-b3e5-dd3f07043621-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-w764w\" (UID: \"2dd658c0-4c8d-40a9-b3e5-dd3f07043621\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-w764w" Jul 11 00:29:33.427335 containerd[1580]: time="2025-07-11T00:29:33.427270497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vfqqz,Uid:4605b660-5179-40df-b8b0-7a98c3dea4ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e20a6c5df60a4b104842ad70e6e1dc07e311d4940a46cd351ae7c1dbf6728c9\"" Jul 11 00:29:33.428270 kubelet[2768]: E0711 00:29:33.428227 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:33.431974 containerd[1580]: time="2025-07-11T00:29:33.431895118Z" level=info msg="CreateContainer within sandbox \"0e20a6c5df60a4b104842ad70e6e1dc07e311d4940a46cd351ae7c1dbf6728c9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 11 00:29:33.451803 containerd[1580]: time="2025-07-11T00:29:33.451737483Z" level=info msg="CreateContainer within sandbox \"0e20a6c5df60a4b104842ad70e6e1dc07e311d4940a46cd351ae7c1dbf6728c9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1fbe470b740bc14da9dbb1763ed9ca215f34c1b7f320560cc66e6ef1b0e126d3\"" Jul 11 00:29:33.454273 containerd[1580]: time="2025-07-11T00:29:33.452499284Z" level=info msg="StartContainer for \"1fbe470b740bc14da9dbb1763ed9ca215f34c1b7f320560cc66e6ef1b0e126d3\"" Jul 11 00:29:33.525166 containerd[1580]: time="2025-07-11T00:29:33.525104364Z" level=info msg="StartContainer for \"1fbe470b740bc14da9dbb1763ed9ca215f34c1b7f320560cc66e6ef1b0e126d3\" returns successfully" Jul 11 00:29:33.655727 containerd[1580]: time="2025-07-11T00:29:33.655213683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-w764w,Uid:2dd658c0-4c8d-40a9-b3e5-dd3f07043621,Namespace:tigera-operator,Attempt:0,}" Jul 11 00:29:33.686630 containerd[1580]: time="2025-07-11T00:29:33.686172608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:29:33.686630 containerd[1580]: time="2025-07-11T00:29:33.686242341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:29:33.686630 containerd[1580]: time="2025-07-11T00:29:33.686286785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:29:33.686630 containerd[1580]: time="2025-07-11T00:29:33.686510792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:29:33.755088 containerd[1580]: time="2025-07-11T00:29:33.755043634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-w764w,Uid:2dd658c0-4c8d-40a9-b3e5-dd3f07043621,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d17b91321604cec10014887439c640fb2ceed59fbfc9182e5428164c6b15d605\"" Jul 11 00:29:33.757280 containerd[1580]: time="2025-07-11T00:29:33.757236863Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 11 00:29:33.858335 kubelet[2768]: E0711 00:29:33.858280 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:33.870634 kubelet[2768]: I0711 00:29:33.870560 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vfqqz" podStartSLOduration=1.8705393209999999 podStartE2EDuration="1.870539321s" podCreationTimestamp="2025-07-11 00:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:29:33.869553062 +0000 UTC m=+5.175043207" watchObservedRunningTime="2025-07-11 00:29:33.870539321 +0000 UTC m=+5.176029466" Jul 11 00:29:35.327715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2911676534.mount: Deactivated successfully. 
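
The kube-proxy lines above walk the standard CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox and returns a container id, and StartContainer reports success. A sketch of that ordering, using a hypothetical criRuntime interface as a simplified stand-in for the gRPC RuntimeService the kubelet actually calls on containerd (the pod name and UID are taken from the log; the fake runtime and image string are placeholders):

```go
package main

import (
	"context"
	"fmt"
	"log"
)

// criRuntime is a hypothetical stand-in for the CRI runtime service
// (the real interface is the RuntimeService gRPC API in k8s.io/cri-api).
type criRuntime interface {
	RunPodSandbox(ctx context.Context, name, uid, namespace string) (sandboxID string, err error)
	CreateContainer(ctx context.Context, sandboxID, name, image string) (containerID string, err error)
	StartContainer(ctx context.Context, containerID string) error
}

// startPod replays the ordering visible in the log: sandbox first, then
// the container is created inside it, then started.
func startPod(ctx context.Context, rt criRuntime, image string) error {
	sandboxID, err := rt.RunPodSandbox(ctx, "kube-proxy-vfqqz",
		"4605b660-5179-40df-b8b0-7a98c3dea4ba", "kube-system")
	if err != nil {
		return fmt.Errorf("RunPodSandbox: %w", err)
	}
	containerID, err := rt.CreateContainer(ctx, sandboxID, "kube-proxy", image)
	if err != nil {
		return fmt.Errorf("CreateContainer in sandbox %s: %w", sandboxID, err)
	}
	if err := rt.StartContainer(ctx, containerID); err != nil {
		return fmt.Errorf("StartContainer %s: %w", containerID, err)
	}
	log.Printf("StartContainer for %q returns successfully", containerID)
	return nil
}

// fakeRuntime satisfies criRuntime so the sketch runs standalone.
type fakeRuntime struct{}

func (fakeRuntime) RunPodSandbox(ctx context.Context, name, uid, ns string) (string, error) {
	return "sandbox-1", nil
}
func (fakeRuntime) CreateContainer(ctx context.Context, sandboxID, name, image string) (string, error) {
	return "container-1", nil
}
func (fakeRuntime) StartContainer(ctx context.Context, containerID string) error { return nil }

func main() {
	// Image reference is a placeholder; the kubelet supplies the real one.
	if err := startPod(context.Background(), fakeRuntime{}, "example.invalid/kube-proxy:tag"); err != nil {
		log.Fatal(err)
	}
}
```
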
Jul 11 00:29:35.698630 containerd[1580]: time="2025-07-11T00:29:35.698428905Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:29:35.700381 containerd[1580]: time="2025-07-11T00:29:35.700323602Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 11 00:29:35.702237 containerd[1580]: time="2025-07-11T00:29:35.702163685Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:29:35.705091 containerd[1580]: time="2025-07-11T00:29:35.704966681Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:29:35.705932 containerd[1580]: time="2025-07-11T00:29:35.705886272Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.948598433s" Jul 11 00:29:35.705932 containerd[1580]: time="2025-07-11T00:29:35.705940665Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 11 00:29:35.708644 containerd[1580]: time="2025-07-11T00:29:35.708603164Z" level=info msg="CreateContainer within sandbox \"d17b91321604cec10014887439c640fb2ceed59fbfc9182e5428164c6b15d605\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 11 00:29:35.726827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4012548831.mount: Deactivated successfully. 
Jul 11 00:29:35.734860 containerd[1580]: time="2025-07-11T00:29:35.734808285Z" level=info msg="CreateContainer within sandbox \"d17b91321604cec10014887439c640fb2ceed59fbfc9182e5428164c6b15d605\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7ffe451e26d9316266bbfd0ea339100a6ce78531764d977d81a87a297d1eb582\"" Jul 11 00:29:35.735300 containerd[1580]: time="2025-07-11T00:29:35.735244415Z" level=info msg="StartContainer for \"7ffe451e26d9316266bbfd0ea339100a6ce78531764d977d81a87a297d1eb582\"" Jul 11 00:29:35.807417 containerd[1580]: time="2025-07-11T00:29:35.807350029Z" level=info msg="StartContainer for \"7ffe451e26d9316266bbfd0ea339100a6ce78531764d977d81a87a297d1eb582\" returns successfully" Jul 11 00:29:36.459983 kubelet[2768]: E0711 00:29:36.459914 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:36.473449 kubelet[2768]: I0711 00:29:36.473374 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-w764w" podStartSLOduration=1.5225107530000002 podStartE2EDuration="3.47334849s" podCreationTimestamp="2025-07-11 00:29:33 +0000 UTC" firstStartedPulling="2025-07-11 00:29:33.756319755 +0000 UTC m=+5.061809900" lastFinishedPulling="2025-07-11 00:29:35.707157492 +0000 UTC m=+7.012647637" observedRunningTime="2025-07-11 00:29:35.879305379 +0000 UTC m=+7.184795524" watchObservedRunningTime="2025-07-11 00:29:36.47334849 +0000 UTC m=+7.778838655" Jul 11 00:29:36.864859 kubelet[2768]: E0711 00:29:36.864823 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:37.743252 kubelet[2768]: E0711 00:29:37.742981 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:37.866637 kubelet[2768]: E0711 00:29:37.866596 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:38.868630 kubelet[2768]: E0711 00:29:38.868543 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:42.132117 sudo[1793]: pam_unix(sudo:session): session closed for user root Jul 11 00:29:42.141163 sshd[1786]: pam_unix(sshd:session): session closed for user core Jul 11 00:29:42.174764 systemd[1]: sshd@8-10.0.0.133:22-10.0.0.1:45230.service: Deactivated successfully. Jul 11 00:29:42.178487 systemd-logind[1559]: Session 9 logged out. Waiting for processes to exit. Jul 11 00:29:42.178654 systemd[1]: session-9.scope: Deactivated successfully. Jul 11 00:29:42.179993 systemd-logind[1559]: Removed session 9. 
Jul 11 00:29:45.797220 kubelet[2768]: I0711 00:29:45.797133 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d039f457-a222-4605-8d03-455a34ed2830-typha-certs\") pod \"calico-typha-8fb7d6679-w2gds\" (UID: \"d039f457-a222-4605-8d03-455a34ed2830\") " pod="calico-system/calico-typha-8fb7d6679-w2gds" Jul 11 00:29:45.797220 kubelet[2768]: I0711 00:29:45.797202 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgz7g\" (UniqueName: \"kubernetes.io/projected/d039f457-a222-4605-8d03-455a34ed2830-kube-api-access-jgz7g\") pod \"calico-typha-8fb7d6679-w2gds\" (UID: \"d039f457-a222-4605-8d03-455a34ed2830\") " pod="calico-system/calico-typha-8fb7d6679-w2gds" Jul 11 00:29:45.797220 kubelet[2768]: I0711 00:29:45.797228 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d039f457-a222-4605-8d03-455a34ed2830-tigera-ca-bundle\") pod \"calico-typha-8fb7d6679-w2gds\" (UID: \"d039f457-a222-4605-8d03-455a34ed2830\") " pod="calico-system/calico-typha-8fb7d6679-w2gds" Jul 11 00:29:45.927761 kubelet[2768]: E0711 00:29:45.927373 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:45.931359 containerd[1580]: time="2025-07-11T00:29:45.931288104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8fb7d6679-w2gds,Uid:d039f457-a222-4605-8d03-455a34ed2830,Namespace:calico-system,Attempt:0,}" Jul 11 00:29:46.607450 kubelet[2768]: I0711 00:29:46.607400 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2703a1b8-a236-4eef-a7ab-11f7122d904c-policysync\") pod \"calico-node-rlr5j\" (UID: \"2703a1b8-a236-4eef-a7ab-11f7122d904c\") " pod="calico-system/calico-node-rlr5j" Jul 11 00:29:46.607450 kubelet[2768]: I0711 00:29:46.607451 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n7rz\" (UniqueName: \"kubernetes.io/projected/2703a1b8-a236-4eef-a7ab-11f7122d904c-kube-api-access-5n7rz\") pod \"calico-node-rlr5j\" (UID: \"2703a1b8-a236-4eef-a7ab-11f7122d904c\") " pod="calico-system/calico-node-rlr5j" Jul 11 00:29:46.607717 kubelet[2768]: I0711 00:29:46.607476 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2703a1b8-a236-4eef-a7ab-11f7122d904c-var-run-calico\") pod \"calico-node-rlr5j\" (UID: \"2703a1b8-a236-4eef-a7ab-11f7122d904c\") " pod="calico-system/calico-node-rlr5j" Jul 11 00:29:46.607717 kubelet[2768]: I0711 00:29:46.607503 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2703a1b8-a236-4eef-a7ab-11f7122d904c-cni-log-dir\") pod \"calico-node-rlr5j\" (UID: \"2703a1b8-a236-4eef-a7ab-11f7122d904c\") " pod="calico-system/calico-node-rlr5j" Jul 11 00:29:46.607717 kubelet[2768]: I0711 00:29:46.607525 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2703a1b8-a236-4eef-a7ab-11f7122d904c-tigera-ca-bundle\") pod 
\"calico-node-rlr5j\" (UID: \"2703a1b8-a236-4eef-a7ab-11f7122d904c\") " pod="calico-system/calico-node-rlr5j" Jul 11 00:29:46.607717 kubelet[2768]: I0711 00:29:46.607548 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2703a1b8-a236-4eef-a7ab-11f7122d904c-xtables-lock\") pod \"calico-node-rlr5j\" (UID: \"2703a1b8-a236-4eef-a7ab-11f7122d904c\") " pod="calico-system/calico-node-rlr5j" Jul 11 00:29:46.607717 kubelet[2768]: I0711 00:29:46.607572 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2703a1b8-a236-4eef-a7ab-11f7122d904c-cni-bin-dir\") pod \"calico-node-rlr5j\" (UID: \"2703a1b8-a236-4eef-a7ab-11f7122d904c\") " pod="calico-system/calico-node-rlr5j" Jul 11 00:29:46.607893 kubelet[2768]: I0711 00:29:46.607592 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2703a1b8-a236-4eef-a7ab-11f7122d904c-var-lib-calico\") pod \"calico-node-rlr5j\" (UID: \"2703a1b8-a236-4eef-a7ab-11f7122d904c\") " pod="calico-system/calico-node-rlr5j" Jul 11 00:29:46.607893 kubelet[2768]: I0711 00:29:46.607616 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2703a1b8-a236-4eef-a7ab-11f7122d904c-lib-modules\") pod \"calico-node-rlr5j\" (UID: \"2703a1b8-a236-4eef-a7ab-11f7122d904c\") " pod="calico-system/calico-node-rlr5j" Jul 11 00:29:46.607893 kubelet[2768]: I0711 00:29:46.607641 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2703a1b8-a236-4eef-a7ab-11f7122d904c-node-certs\") pod \"calico-node-rlr5j\" (UID: \"2703a1b8-a236-4eef-a7ab-11f7122d904c\") " pod="calico-system/calico-node-rlr5j" Jul 11 00:29:46.607893 kubelet[2768]: I0711 00:29:46.607667 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2703a1b8-a236-4eef-a7ab-11f7122d904c-cni-net-dir\") pod \"calico-node-rlr5j\" (UID: \"2703a1b8-a236-4eef-a7ab-11f7122d904c\") " pod="calico-system/calico-node-rlr5j" Jul 11 00:29:46.607893 kubelet[2768]: I0711 00:29:46.607717 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2703a1b8-a236-4eef-a7ab-11f7122d904c-flexvol-driver-host\") pod \"calico-node-rlr5j\" (UID: \"2703a1b8-a236-4eef-a7ab-11f7122d904c\") " pod="calico-system/calico-node-rlr5j" Jul 11 00:29:46.617336 containerd[1580]: time="2025-07-11T00:29:46.616258412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:29:46.621266 containerd[1580]: time="2025-07-11T00:29:46.620579190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:29:46.621266 containerd[1580]: time="2025-07-11T00:29:46.620658802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:29:46.621266 containerd[1580]: time="2025-07-11T00:29:46.621255513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:29:46.715899 kubelet[2768]: E0711 00:29:46.715714 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.715899 kubelet[2768]: W0711 00:29:46.715757 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.715899 kubelet[2768]: E0711 00:29:46.715791 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:46.722412 containerd[1580]: time="2025-07-11T00:29:46.722340668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8fb7d6679-w2gds,Uid:d039f457-a222-4605-8d03-455a34ed2830,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e921eb3650bf3484d8584ca72a14a0d1d0d95776007b550d84ec53bad1879a4\"" Jul 11 00:29:46.723201 kubelet[2768]: E0711 00:29:46.723149 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:29:46.725597 containerd[1580]: time="2025-07-11T00:29:46.725540770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 11 00:29:46.810216 kubelet[2768]: E0711 00:29:46.810161 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.810216 kubelet[2768]: W0711 00:29:46.810194 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.810216 kubelet[2768]: E0711 00:29:46.810220 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:46.849731 kubelet[2768]: E0711 00:29:46.849102 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.849731 kubelet[2768]: W0711 00:29:46.849150 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.849731 kubelet[2768]: E0711 00:29:46.849181 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:29:46.873828 kubelet[2768]: E0711 00:29:46.872191 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pd2rq" podUID="096ddfe1-2570-48e6-b110-9f8e8c0f803b" Jul 11 00:29:46.905765 kubelet[2768]: E0711 00:29:46.905720 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.905765 kubelet[2768]: W0711 00:29:46.905757 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.905765 kubelet[2768]: E0711 00:29:46.905785 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:46.907073 containerd[1580]: time="2025-07-11T00:29:46.907024971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rlr5j,Uid:2703a1b8-a236-4eef-a7ab-11f7122d904c,Namespace:calico-system,Attempt:0,}" Jul 11 00:29:46.908018 kubelet[2768]: E0711 00:29:46.907904 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.908018 kubelet[2768]: W0711 00:29:46.907930 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.908018 kubelet[2768]: E0711 00:29:46.907951 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:46.908807 kubelet[2768]: E0711 00:29:46.908326 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.908807 kubelet[2768]: W0711 00:29:46.908338 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.908807 kubelet[2768]: E0711 00:29:46.908350 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:46.910825 kubelet[2768]: E0711 00:29:46.910111 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.910825 kubelet[2768]: W0711 00:29:46.910132 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.910825 kubelet[2768]: E0711 00:29:46.910146 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:29:46.913409 kubelet[2768]: E0711 00:29:46.911887 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.913409 kubelet[2768]: W0711 00:29:46.911920 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.913409 kubelet[2768]: E0711 00:29:46.911952 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:46.913409 kubelet[2768]: E0711 00:29:46.912553 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.913409 kubelet[2768]: W0711 00:29:46.912565 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.913409 kubelet[2768]: E0711 00:29:46.912589 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:46.913409 kubelet[2768]: E0711 00:29:46.912868 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.913409 kubelet[2768]: W0711 00:29:46.912879 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.913409 kubelet[2768]: E0711 00:29:46.912955 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:46.913409 kubelet[2768]: E0711 00:29:46.913261 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.913704 kubelet[2768]: W0711 00:29:46.913271 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.913704 kubelet[2768]: E0711 00:29:46.913283 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:46.914010 kubelet[2768]: E0711 00:29:46.913963 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.914010 kubelet[2768]: W0711 00:29:46.913997 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.914010 kubelet[2768]: E0711 00:29:46.914014 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:29:46.914149 kubelet[2768]: I0711 00:29:46.914042 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/096ddfe1-2570-48e6-b110-9f8e8c0f803b-registration-dir\") pod \"csi-node-driver-pd2rq\" (UID: \"096ddfe1-2570-48e6-b110-9f8e8c0f803b\") " pod="calico-system/csi-node-driver-pd2rq" Jul 11 00:29:46.914507 kubelet[2768]: E0711 00:29:46.914475 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.914507 kubelet[2768]: W0711 00:29:46.914492 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.914597 kubelet[2768]: E0711 00:29:46.914510 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:46.914942 kubelet[2768]: E0711 00:29:46.914916 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.915078 kubelet[2768]: W0711 00:29:46.915028 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.915305 kubelet[2768]: E0711 00:29:46.915288 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:46.915535 kubelet[2768]: E0711 00:29:46.915520 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.915599 kubelet[2768]: W0711 00:29:46.915587 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.915699 kubelet[2768]: E0711 00:29:46.915660 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:46.915834 kubelet[2768]: I0711 00:29:46.915814 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/096ddfe1-2570-48e6-b110-9f8e8c0f803b-kubelet-dir\") pod \"csi-node-driver-pd2rq\" (UID: \"096ddfe1-2570-48e6-b110-9f8e8c0f803b\") " pod="calico-system/csi-node-driver-pd2rq" Jul 11 00:29:46.916143 kubelet[2768]: E0711 00:29:46.916130 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.916225 kubelet[2768]: W0711 00:29:46.916205 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.916332 kubelet[2768]: E0711 00:29:46.916284 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:29:46.916655 kubelet[2768]: E0711 00:29:46.916624 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.916655 kubelet[2768]: W0711 00:29:46.916638 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.916855 kubelet[2768]: E0711 00:29:46.916792 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:46.918143 kubelet[2768]: E0711 00:29:46.918015 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.918143 kubelet[2768]: W0711 00:29:46.918031 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.918143 kubelet[2768]: E0711 00:29:46.918049 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:46.920053 kubelet[2768]: E0711 00:29:46.919061 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.920053 kubelet[2768]: W0711 00:29:46.919072 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.920053 kubelet[2768]: E0711 00:29:46.919537 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:46.920997 kubelet[2768]: E0711 00:29:46.920925 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.920997 kubelet[2768]: W0711 00:29:46.920953 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.921096 kubelet[2768]: E0711 00:29:46.921020 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:46.923002 kubelet[2768]: E0711 00:29:46.922834 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.923002 kubelet[2768]: W0711 00:29:46.922857 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.923002 kubelet[2768]: E0711 00:29:46.922924 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:29:46.923925 kubelet[2768]: E0711 00:29:46.923907 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.923925 kubelet[2768]: W0711 00:29:46.923924 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.924031 kubelet[2768]: E0711 00:29:46.923943 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:46.924348 kubelet[2768]: E0711 00:29:46.924327 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.924348 kubelet[2768]: W0711 00:29:46.924341 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.924419 kubelet[2768]: E0711 00:29:46.924353 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:46.925699 kubelet[2768]: E0711 00:29:46.925665 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.925699 kubelet[2768]: W0711 00:29:46.925697 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.925808 kubelet[2768]: E0711 00:29:46.925711 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:46.926067 kubelet[2768]: E0711 00:29:46.926039 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.926067 kubelet[2768]: W0711 00:29:46.926054 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.926067 kubelet[2768]: E0711 00:29:46.926067 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:46.926454 kubelet[2768]: E0711 00:29:46.926430 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:46.926454 kubelet[2768]: W0711 00:29:46.926445 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:46.926523 kubelet[2768]: E0711 00:29:46.926456 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jul 11 00:29:46.927181 kubelet[2768]: E0711 00:29:46.927148 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 00:29:46.927181 kubelet[2768]: W0711 00:29:46.927162 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 00:29:46.927181 kubelet[2768]: E0711 00:29:46.927173 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 00:29:47.017950 kubelet[2768]: I0711 00:29:47.017872 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/096ddfe1-2570-48e6-b110-9f8e8c0f803b-varrun\") pod \"csi-node-driver-pd2rq\" (UID: \"096ddfe1-2570-48e6-b110-9f8e8c0f803b\") " pod="calico-system/csi-node-driver-pd2rq"
Jul 11 00:29:47.019653 kubelet[2768]: I0711 00:29:47.019632 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/096ddfe1-2570-48e6-b110-9f8e8c0f803b-socket-dir\") pod \"csi-node-driver-pd2rq\" (UID: \"096ddfe1-2570-48e6-b110-9f8e8c0f803b\") " pod="calico-system/csi-node-driver-pd2rq"
Jul 11 00:29:47.020375 kubelet[2768]: I0711 00:29:47.020294 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p54t\" (UniqueName: \"kubernetes.io/projected/096ddfe1-2570-48e6-b110-9f8e8c0f803b-kube-api-access-8p54t\") pod \"csi-node-driver-pd2rq\" (UID: \"096ddfe1-2570-48e6-b110-9f8e8c0f803b\") " pod="calico-system/csi-node-driver-pd2rq"
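The repeated driver-call.go / plugins.go triplet above (identical copies of which have been collapsed here) is kubelet's FlexVolume prober: it found a vendor~driver directory named nodeagent~uds under the volume plugin directory (/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ on this node), invoked the driver binary with the argument init, got an empty stdout to unmarshal as JSON, and skipped the plugin. Per the FlexVolume contract, a driver must print a JSON status object for every call. Below is a minimal sketch of a driver that would satisfy the init probe; the file name uds and its path are taken from the log, everything else is illustrative, not the real nodeagent~uds driver.

    #!/usr/bin/env python3
    # Minimal FlexVolume driver stub: answers kubelet's "init" probe with the
    # JSON status object the FlexVolume spec requires; all other operations
    # are reported as not supported. Illustrative sketch only.
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # kubelet unmarshals stdout as JSON; an empty reply produces the
            # "unexpected end of JSON input" errors seen in this log.
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
            return 0
        print(json.dumps({"status": "Not supported",
                          "message": "operation %r not implemented" % op}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())

Installed at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds and marked executable, a driver like this would be expected to stop the probe errors; the "executable file not found in $PATH" wording above indicates the binary at that path is absent or not executable.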
Jul 11 00:29:47.103062 containerd[1580]: time="2025-07-11T00:29:47.102619284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:29:47.103062 containerd[1580]: time="2025-07-11T00:29:47.102736866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:29:47.103062 containerd[1580]: time="2025-07-11T00:29:47.102758578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:29:47.103663 containerd[1580]: time="2025-07-11T00:29:47.103315023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:29:47.187346 containerd[1580]: time="2025-07-11T00:29:47.186839930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rlr5j,Uid:2703a1b8-a236-4eef-a7ab-11f7122d904c,Namespace:calico-system,Attempt:0,} returns sandbox id \"522e28d4402ab63be89af8ac7b434c2fece37351c1a9b90dd5a3fde05923a6fa\""
Jul 11 00:29:48.516555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2273338861.mount: Deactivated successfully.
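The calico-node sandbox comes up above, but the csi-node-driver-pd2rq pod, whose volumes were verified earlier, keeps failing to sync in the entries that follow with "cni plugin not initialized": kubelet cannot create its sandbox until a network plugin installs a CNI network config. A small sketch of that readiness condition; the config directory below is containerd's conventional default and an assumption, since it does not appear in this log.

    # Sketch: the "cni plugin not initialized" sync errors persist until a
    # network plugin (here Calico) writes a network config into the CNI
    # config directory. Path is the conventional default, not from this log.
    import os

    CNI_CONF_DIR = "/etc/cni/net.d"

    def cni_ready(conf_dir: str = CNI_CONF_DIR) -> bool:
        """True once at least one CNI network config has been installed."""
        try:
            return any(name.endswith((".conf", ".conflist", ".json"))
                       for name in os.listdir(conf_dir))
        except FileNotFoundError:
            return False

    if __name__ == "__main__":
        print("CNI initialized:", cni_ready())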
Jul 11 00:29:48.828665 kubelet[2768]: E0711 00:29:48.828085 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pd2rq" podUID="096ddfe1-2570-48e6-b110-9f8e8c0f803b"
Jul 11 00:29:49.211861 containerd[1580]: time="2025-07-11T00:29:49.211816990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:29:49.262698 containerd[1580]: time="2025-07-11T00:29:49.262598650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Jul 11 00:29:49.268784 containerd[1580]: time="2025-07-11T00:29:49.268709466Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:29:49.272549 containerd[1580]: time="2025-07-11T00:29:49.272499782Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:29:49.273794 containerd[1580]: time="2025-07-11T00:29:49.273724585Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.548132096s"
Jul 11 00:29:49.273794 containerd[1580]: time="2025-07-11T00:29:49.273787834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Jul 11 00:29:49.275549 containerd[1580]: time="2025-07-11T00:29:49.275499580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 11 00:29:49.287164 containerd[1580]: time="2025-07-11T00:29:49.287122265Z" level=info msg="CreateContainer within sandbox \"1e921eb3650bf3484d8584ca72a14a0d1d0d95776007b550d84ec53bad1879a4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 11 00:29:49.302126 containerd[1580]: time="2025-07-11T00:29:49.302046071Z" level=info msg="CreateContainer within sandbox \"1e921eb3650bf3484d8584ca72a14a0d1d0d95776007b550d84ec53bad1879a4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"70aea0bf5b35a77bac979d115c5fe839bcdbb11ca5731c5d99c3d06cd3608136\""
Jul 11 00:29:49.304633 containerd[1580]: time="2025-07-11T00:29:49.304367432Z" level=info msg="StartContainer for \"70aea0bf5b35a77bac979d115c5fe839bcdbb11ca5731c5d99c3d06cd3608136\""
Jul 11 00:29:49.491065 containerd[1580]: time="2025-07-11T00:29:49.490746448Z" level=info msg="StartContainer for \"70aea0bf5b35a77bac979d115c5fe839bcdbb11ca5731c5d99c3d06cd3608136\" returns successfully"
Jul 11 00:29:49.909657 kubelet[2768]: E0711 00:29:49.909510 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:29:49.951684 kubelet[2768]: E0711 00:29:49.951616 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 00:29:49.951684 kubelet[2768]: W0711 00:29:49.951651 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 00:29:49.951684 kubelet[2768]: E0711 00:29:49.951695 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
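The dns.go:153 "Nameserver limits exceeded" entries above mean the node's resolv.conf listed more than three nameservers; the glibc resolver honors only the first three, so kubelet trims the list (here to 1.1.1.1 1.0.0.1 8.8.8.8) and warns. A small sketch that reproduces the check; the resolv.conf path is the conventional default, not taken from this log.

    # Sketch: flag resolv.conf files that exceed the three-nameserver limit
    # kubelet warns about above.
    from pathlib import Path

    MAX_NAMESERVERS = 3  # glibc resolver limit, mirrored by kubelet's warning

    def nameservers(resolv_conf: str = "/etc/resolv.conf") -> list:
        servers = []
        for line in Path(resolv_conf).read_text().splitlines():
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                servers.append(fields[1])
        return servers

    if __name__ == "__main__":
        found = nameservers()
        if len(found) > MAX_NAMESERVERS:
            print("%d nameservers configured; only the first %d are applied: %s"
                  % (len(found), MAX_NAMESERVERS, found[:MAX_NAMESERVERS]))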
Jul 11 00:29:50.196609 kubelet[2768]: I0711 00:29:50.196372 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8fb7d6679-w2gds" podStartSLOduration=2.646262332 podStartE2EDuration="5.196353794s" podCreationTimestamp="2025-07-11 00:29:45 +0000 UTC" firstStartedPulling="2025-07-11 00:29:46.725046212 +0000 UTC m=+18.030536357" lastFinishedPulling="2025-07-11 00:29:49.275137654 +0000 UTC m=+20.580627819" observedRunningTime="2025-07-11 00:29:50.195484025 +0000 UTC m=+21.500974170" watchObservedRunningTime="2025-07-11 00:29:50.196353794 +0000 UTC m=+21.501843949"
Jul 11 00:29:50.828208 kubelet[2768]: E0711 00:29:50.828133 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pd2rq" podUID="096ddfe1-2570-48e6-b110-9f8e8c0f803b"
Jul 11 00:29:50.911473 kubelet[2768]: I0711 00:29:50.911416 2768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 11 00:29:50.912059 kubelet[2768]: E0711 00:29:50.911869 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:29:50.963615 kubelet[2768]: E0711 00:29:50.963558 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 00:29:50.963615 kubelet[2768]: W0711 00:29:50.963593 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 00:29:50.963615 kubelet[2768]: E0711 00:29:50.963624 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
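The pod_startup_latency_tracker entry above reports two durations for calico-typha-8fb7d6679-w2gds, and they are consistent with its own timestamps: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A sketch reproducing the arithmetic from the monotonic m=+ offsets in the entry; this is a reading of the logged fields, not kubelet's code.

    # Monotonic offsets (m=+...) copied from the log entry, in seconds.
    first_started_pulling = 18.030536357
    last_finished_pulling = 20.580627819
    pod_start_e2e = 5.196353794  # watchObservedRunningTime - podCreationTimestamp

    image_pull = last_finished_pulling - first_started_pulling  # 2.550091462 s
    slo_duration = pod_start_e2e - image_pull                   # 2.646262332 s
    print("image pull: %.9fs, SLO startup: %.9fs" % (image_pull, slo_duration))

    # The earlier "Pulled image" entry also gives the pull rate directly:
    # 35233218 bytes / 2.548132096 s, roughly 13.8 MB/s.
    print("pull rate: %.1f MB/s" % (35233218 / 2.548132096 / 1e6))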
Jul 11 00:29:50.983360 kubelet[2768]: E0711 00:29:50.983328 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 00:29:50.983360 kubelet[2768]: W0711 00:29:50.983343 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 00:29:50.983434 kubelet[2768]: E0711 00:29:50.983390 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:29:50.983606 kubelet[2768]: E0711 00:29:50.983586 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:50.983606 kubelet[2768]: W0711 00:29:50.983599 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:50.983711 kubelet[2768]: E0711 00:29:50.983617 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:50.984069 kubelet[2768]: E0711 00:29:50.984028 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:50.984069 kubelet[2768]: W0711 00:29:50.984050 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:50.984152 kubelet[2768]: E0711 00:29:50.984076 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:50.984333 kubelet[2768]: E0711 00:29:50.984301 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:50.984333 kubelet[2768]: W0711 00:29:50.984316 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:50.984333 kubelet[2768]: E0711 00:29:50.984334 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:50.984578 kubelet[2768]: E0711 00:29:50.984560 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:50.984578 kubelet[2768]: W0711 00:29:50.984573 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:50.984648 kubelet[2768]: E0711 00:29:50.984588 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:50.984863 kubelet[2768]: E0711 00:29:50.984844 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:50.984863 kubelet[2768]: W0711 00:29:50.984857 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:50.984938 kubelet[2768]: E0711 00:29:50.984873 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:29:50.985224 kubelet[2768]: E0711 00:29:50.985204 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:50.985224 kubelet[2768]: W0711 00:29:50.985219 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:50.985297 kubelet[2768]: E0711 00:29:50.985236 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:50.985472 kubelet[2768]: E0711 00:29:50.985455 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:50.985472 kubelet[2768]: W0711 00:29:50.985469 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:50.985537 kubelet[2768]: E0711 00:29:50.985487 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:50.985754 kubelet[2768]: E0711 00:29:50.985737 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:50.985754 kubelet[2768]: W0711 00:29:50.985750 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:50.985816 kubelet[2768]: E0711 00:29:50.985766 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:50.986160 kubelet[2768]: E0711 00:29:50.986138 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:50.986160 kubelet[2768]: W0711 00:29:50.986155 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:50.986261 kubelet[2768]: E0711 00:29:50.986175 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:29:50.986436 kubelet[2768]: E0711 00:29:50.986416 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:29:50.986436 kubelet[2768]: W0711 00:29:50.986430 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:29:50.986494 kubelet[2768]: E0711 00:29:50.986442 2768 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:29:52.828003 kubelet[2768]: E0711 00:29:52.827940 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pd2rq" podUID="096ddfe1-2570-48e6-b110-9f8e8c0f803b" Jul 11 00:29:53.132087 containerd[1580]: time="2025-07-11T00:29:53.131945721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:29:53.166063 containerd[1580]: time="2025-07-11T00:29:53.165941199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 11 00:29:53.222566 containerd[1580]: time="2025-07-11T00:29:53.222484287Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:29:53.291299 containerd[1580]: time="2025-07-11T00:29:53.291213881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:29:53.292174 containerd[1580]: time="2025-07-11T00:29:53.292131138Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 4.016576243s" Jul 11 00:29:53.292313 containerd[1580]: time="2025-07-11T00:29:53.292278988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 11 00:29:53.295851 containerd[1580]: time="2025-07-11T00:29:53.295801370Z" level=info msg="CreateContainer within sandbox \"522e28d4402ab63be89af8ac7b434c2fece37351c1a9b90dd5a3fde05923a6fa\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 11 00:29:53.470396 containerd[1580]: time="2025-07-11T00:29:53.469273647Z" level=info msg="CreateContainer within sandbox \"522e28d4402ab63be89af8ac7b434c2fece37351c1a9b90dd5a3fde05923a6fa\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"74ea65dc55f776cff0d2dd07b0055c8d8cc1e6f9c03597537bf53e8eb4d5439d\"" Jul 11 00:29:53.471178 containerd[1580]: time="2025-07-11T00:29:53.471002181Z" level=info msg="StartContainer for \"74ea65dc55f776cff0d2dd07b0055c8d8cc1e6f9c03597537bf53e8eb4d5439d\"" Jul 11 00:29:54.311444 containerd[1580]: time="2025-07-11T00:29:54.311389514Z" level=info msg="StartContainer for \"74ea65dc55f776cff0d2dd07b0055c8d8cc1e6f9c03597537bf53e8eb4d5439d\" returns successfully" Jul 11 00:29:54.333312 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74ea65dc55f776cff0d2dd07b0055c8d8cc1e6f9c03597537bf53e8eb4d5439d-rootfs.mount: Deactivated successfully. 
Jul 11 00:29:54.724985 containerd[1580]: time="2025-07-11T00:29:54.724898127Z" level=info msg="shim disconnected" id=74ea65dc55f776cff0d2dd07b0055c8d8cc1e6f9c03597537bf53e8eb4d5439d namespace=k8s.io Jul 11 00:29:54.724985 containerd[1580]: time="2025-07-11T00:29:54.724977366Z" level=warning msg="cleaning up after shim disconnected" id=74ea65dc55f776cff0d2dd07b0055c8d8cc1e6f9c03597537bf53e8eb4d5439d namespace=k8s.io Jul 11 00:29:54.724985 containerd[1580]: time="2025-07-11T00:29:54.724986583Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:29:54.828030 kubelet[2768]: E0711 00:29:54.827957 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pd2rq" podUID="096ddfe1-2570-48e6-b110-9f8e8c0f803b" Jul 11 00:29:55.317943 containerd[1580]: time="2025-07-11T00:29:55.317805976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 11 00:29:56.827979 kubelet[2768]: E0711 00:29:56.827868 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pd2rq" podUID="096ddfe1-2570-48e6-b110-9f8e8c0f803b" Jul 11 00:29:58.828162 kubelet[2768]: E0711 00:29:58.828086 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pd2rq" podUID="096ddfe1-2570-48e6-b110-9f8e8c0f803b" Jul 11 00:30:00.828109 kubelet[2768]: E0711 00:30:00.828042 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pd2rq" podUID="096ddfe1-2570-48e6-b110-9f8e8c0f803b" Jul 11 00:30:02.712306 containerd[1580]: time="2025-07-11T00:30:02.712203971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:30:02.713875 containerd[1580]: time="2025-07-11T00:30:02.713705741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 11 00:30:02.715929 containerd[1580]: time="2025-07-11T00:30:02.715811724Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:30:02.721764 containerd[1580]: time="2025-07-11T00:30:02.721698407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:30:02.722742 containerd[1580]: time="2025-07-11T00:30:02.722636100Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 7.404778225s" Jul 11 00:30:02.722742 containerd[1580]: time="2025-07-11T00:30:02.722716142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 11 00:30:02.730802 containerd[1580]: time="2025-07-11T00:30:02.729514078Z" level=info msg="CreateContainer within sandbox \"522e28d4402ab63be89af8ac7b434c2fece37351c1a9b90dd5a3fde05923a6fa\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 11 00:30:02.827409 containerd[1580]: time="2025-07-11T00:30:02.827331873Z" level=info msg="CreateContainer within sandbox \"522e28d4402ab63be89af8ac7b434c2fece37351c1a9b90dd5a3fde05923a6fa\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a05f6b5cd3c9eaf50de17bdb3dec03c510b2ed32d7ca48ad55539562f5a2b720\"" Jul 11 00:30:02.828592 containerd[1580]: time="2025-07-11T00:30:02.828428847Z" level=info msg="StartContainer for \"a05f6b5cd3c9eaf50de17bdb3dec03c510b2ed32d7ca48ad55539562f5a2b720\"" Jul 11 00:30:02.830380 kubelet[2768]: E0711 00:30:02.829862 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pd2rq" podUID="096ddfe1-2570-48e6-b110-9f8e8c0f803b" Jul 11 00:30:02.924083 containerd[1580]: time="2025-07-11T00:30:02.924035690Z" level=info msg="StartContainer for \"a05f6b5cd3c9eaf50de17bdb3dec03c510b2ed32d7ca48ad55539562f5a2b720\" returns successfully" Jul 11 00:30:04.828589 kubelet[2768]: E0711 00:30:04.828511 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pd2rq" podUID="096ddfe1-2570-48e6-b110-9f8e8c0f803b" Jul 11 00:30:05.233339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a05f6b5cd3c9eaf50de17bdb3dec03c510b2ed32d7ca48ad55539562f5a2b720-rootfs.mount: Deactivated successfully. 
Jul 11 00:30:05.240123 containerd[1580]: time="2025-07-11T00:30:05.240027963Z" level=info msg="shim disconnected" id=a05f6b5cd3c9eaf50de17bdb3dec03c510b2ed32d7ca48ad55539562f5a2b720 namespace=k8s.io Jul 11 00:30:05.240123 containerd[1580]: time="2025-07-11T00:30:05.240111071Z" level=warning msg="cleaning up after shim disconnected" id=a05f6b5cd3c9eaf50de17bdb3dec03c510b2ed32d7ca48ad55539562f5a2b720 namespace=k8s.io Jul 11 00:30:05.240123 containerd[1580]: time="2025-07-11T00:30:05.240124466Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:30:05.253691 kubelet[2768]: I0711 00:30:05.253641 2768 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 11 00:30:05.399401 containerd[1580]: time="2025-07-11T00:30:05.399357876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 11 00:30:05.429953 kubelet[2768]: I0711 00:30:05.429598 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a68c6842-d9b5-465c-a1bb-818f4874e778-calico-apiserver-certs\") pod \"calico-apiserver-9bdd5cc8b-h8l9c\" (UID: \"a68c6842-d9b5-465c-a1bb-818f4874e778\") " pod="calico-apiserver/calico-apiserver-9bdd5cc8b-h8l9c" Jul 11 00:30:05.429953 kubelet[2768]: I0711 00:30:05.429665 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f15d712e-6e3d-479f-9309-711c53706f83-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-lld94\" (UID: \"f15d712e-6e3d-479f-9309-711c53706f83\") " pod="calico-system/goldmane-58fd7646b9-lld94" Jul 11 00:30:05.429953 kubelet[2768]: I0711 00:30:05.429759 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f15d712e-6e3d-479f-9309-711c53706f83-goldmane-key-pair\") pod \"goldmane-58fd7646b9-lld94\" (UID: \"f15d712e-6e3d-479f-9309-711c53706f83\") " pod="calico-system/goldmane-58fd7646b9-lld94" Jul 11 00:30:05.429953 kubelet[2768]: I0711 00:30:05.429928 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rfsq\" (UniqueName: \"kubernetes.io/projected/f15d712e-6e3d-479f-9309-711c53706f83-kube-api-access-7rfsq\") pod \"goldmane-58fd7646b9-lld94\" (UID: \"f15d712e-6e3d-479f-9309-711c53706f83\") " pod="calico-system/goldmane-58fd7646b9-lld94" Jul 11 00:30:05.430257 kubelet[2768]: I0711 00:30:05.430015 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8669555-3bbe-4e3a-b6d1-dde636ebecce-config-volume\") pod \"coredns-7c65d6cfc9-58z5c\" (UID: \"d8669555-3bbe-4e3a-b6d1-dde636ebecce\") " pod="kube-system/coredns-7c65d6cfc9-58z5c" Jul 11 00:30:05.430257 kubelet[2768]: I0711 00:30:05.430096 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rtzr\" (UniqueName: \"kubernetes.io/projected/d8669555-3bbe-4e3a-b6d1-dde636ebecce-kube-api-access-8rtzr\") pod \"coredns-7c65d6cfc9-58z5c\" (UID: \"d8669555-3bbe-4e3a-b6d1-dde636ebecce\") " pod="kube-system/coredns-7c65d6cfc9-58z5c" Jul 11 00:30:05.430257 kubelet[2768]: I0711 00:30:05.430154 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvnr8\" (UniqueName: 
\"kubernetes.io/projected/02706169-3e5d-4d0e-89ad-f0621f887573-kube-api-access-wvnr8\") pod \"coredns-7c65d6cfc9-6zs7m\" (UID: \"02706169-3e5d-4d0e-89ad-f0621f887573\") " pod="kube-system/coredns-7c65d6cfc9-6zs7m" Jul 11 00:30:05.430257 kubelet[2768]: I0711 00:30:05.430178 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/22cfc3aa-428e-4766-b882-d4df37109c6b-whisker-backend-key-pair\") pod \"whisker-858c6d685b-4rt4c\" (UID: \"22cfc3aa-428e-4766-b882-d4df37109c6b\") " pod="calico-system/whisker-858c6d685b-4rt4c" Jul 11 00:30:05.430257 kubelet[2768]: I0711 00:30:05.430214 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f15d712e-6e3d-479f-9309-711c53706f83-config\") pod \"goldmane-58fd7646b9-lld94\" (UID: \"f15d712e-6e3d-479f-9309-711c53706f83\") " pod="calico-system/goldmane-58fd7646b9-lld94" Jul 11 00:30:05.430442 kubelet[2768]: I0711 00:30:05.430233 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22cfc3aa-428e-4766-b882-d4df37109c6b-whisker-ca-bundle\") pod \"whisker-858c6d685b-4rt4c\" (UID: \"22cfc3aa-428e-4766-b882-d4df37109c6b\") " pod="calico-system/whisker-858c6d685b-4rt4c" Jul 11 00:30:05.430442 kubelet[2768]: I0711 00:30:05.430252 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63fd5fca-6d0d-445a-b193-a8d57e21492f-tigera-ca-bundle\") pod \"calico-kube-controllers-755966d9f5-pjj4v\" (UID: \"63fd5fca-6d0d-445a-b193-a8d57e21492f\") " pod="calico-system/calico-kube-controllers-755966d9f5-pjj4v" Jul 11 00:30:05.430442 kubelet[2768]: I0711 00:30:05.430273 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xjgb\" (UniqueName: \"kubernetes.io/projected/63fd5fca-6d0d-445a-b193-a8d57e21492f-kube-api-access-8xjgb\") pod \"calico-kube-controllers-755966d9f5-pjj4v\" (UID: \"63fd5fca-6d0d-445a-b193-a8d57e21492f\") " pod="calico-system/calico-kube-controllers-755966d9f5-pjj4v" Jul 11 00:30:05.430442 kubelet[2768]: I0711 00:30:05.430340 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02706169-3e5d-4d0e-89ad-f0621f887573-config-volume\") pod \"coredns-7c65d6cfc9-6zs7m\" (UID: \"02706169-3e5d-4d0e-89ad-f0621f887573\") " pod="kube-system/coredns-7c65d6cfc9-6zs7m" Jul 11 00:30:05.430442 kubelet[2768]: I0711 00:30:05.430361 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj8sz\" (UniqueName: \"kubernetes.io/projected/a68c6842-d9b5-465c-a1bb-818f4874e778-kube-api-access-hj8sz\") pod \"calico-apiserver-9bdd5cc8b-h8l9c\" (UID: \"a68c6842-d9b5-465c-a1bb-818f4874e778\") " pod="calico-apiserver/calico-apiserver-9bdd5cc8b-h8l9c" Jul 11 00:30:05.430607 kubelet[2768]: I0711 00:30:05.430403 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7jrl\" (UniqueName: \"kubernetes.io/projected/22cfc3aa-428e-4766-b882-d4df37109c6b-kube-api-access-w7jrl\") pod \"whisker-858c6d685b-4rt4c\" (UID: \"22cfc3aa-428e-4766-b882-d4df37109c6b\") " 
pod="calico-system/whisker-858c6d685b-4rt4c" Jul 11 00:30:05.531871 kubelet[2768]: I0711 00:30:05.530835 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv7xz\" (UniqueName: \"kubernetes.io/projected/e88b327d-5464-447f-98d8-ca6429e58f91-kube-api-access-rv7xz\") pod \"calico-apiserver-9bdd5cc8b-s5k2t\" (UID: \"e88b327d-5464-447f-98d8-ca6429e58f91\") " pod="calico-apiserver/calico-apiserver-9bdd5cc8b-s5k2t" Jul 11 00:30:05.531871 kubelet[2768]: I0711 00:30:05.530895 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e88b327d-5464-447f-98d8-ca6429e58f91-calico-apiserver-certs\") pod \"calico-apiserver-9bdd5cc8b-s5k2t\" (UID: \"e88b327d-5464-447f-98d8-ca6429e58f91\") " pod="calico-apiserver/calico-apiserver-9bdd5cc8b-s5k2t" Jul 11 00:30:05.605617 kubelet[2768]: E0711 00:30:05.605564 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:30:05.606591 containerd[1580]: time="2025-07-11T00:30:05.606532373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6zs7m,Uid:02706169-3e5d-4d0e-89ad-f0621f887573,Namespace:kube-system,Attempt:0,}" Jul 11 00:30:05.611934 kubelet[2768]: E0711 00:30:05.611844 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:30:05.612535 containerd[1580]: time="2025-07-11T00:30:05.612479785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-58z5c,Uid:d8669555-3bbe-4e3a-b6d1-dde636ebecce,Namespace:kube-system,Attempt:0,}" Jul 11 00:30:05.626829 containerd[1580]: time="2025-07-11T00:30:05.626758808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bdd5cc8b-h8l9c,Uid:a68c6842-d9b5-465c-a1bb-818f4874e778,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:30:05.629823 containerd[1580]: time="2025-07-11T00:30:05.629747718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-755966d9f5-pjj4v,Uid:63fd5fca-6d0d-445a-b193-a8d57e21492f,Namespace:calico-system,Attempt:0,}" Jul 11 00:30:05.638034 containerd[1580]: time="2025-07-11T00:30:05.637975501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-858c6d685b-4rt4c,Uid:22cfc3aa-428e-4766-b882-d4df37109c6b,Namespace:calico-system,Attempt:0,}" Jul 11 00:30:05.638316 containerd[1580]: time="2025-07-11T00:30:05.638292421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-lld94,Uid:f15d712e-6e3d-479f-9309-711c53706f83,Namespace:calico-system,Attempt:0,}" Jul 11 00:30:05.951945 containerd[1580]: time="2025-07-11T00:30:05.951813602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bdd5cc8b-s5k2t,Uid:e88b327d-5464-447f-98d8-ca6429e58f91,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:30:06.062952 containerd[1580]: time="2025-07-11T00:30:06.062739657Z" level=error msg="Failed to destroy network for sandbox \"b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.063286 containerd[1580]: 
time="2025-07-11T00:30:06.063200799Z" level=error msg="Failed to destroy network for sandbox \"ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.063592 containerd[1580]: time="2025-07-11T00:30:06.063431585Z" level=error msg="encountered an error cleaning up failed sandbox \"b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.063592 containerd[1580]: time="2025-07-11T00:30:06.063503481Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-58z5c,Uid:d8669555-3bbe-4e3a-b6d1-dde636ebecce,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.063901 containerd[1580]: time="2025-07-11T00:30:06.063733826Z" level=error msg="encountered an error cleaning up failed sandbox \"ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.063901 containerd[1580]: time="2025-07-11T00:30:06.063840338Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6zs7m,Uid:02706169-3e5d-4d0e-89ad-f0621f887573,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.079233 kubelet[2768]: E0711 00:30:06.078940 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.079233 kubelet[2768]: E0711 00:30:06.079026 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6zs7m" Jul 11 00:30:06.079233 kubelet[2768]: E0711 00:30:06.079025 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.079233 kubelet[2768]: E0711 00:30:06.079052 2768 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6zs7m" Jul 11 00:30:06.080217 kubelet[2768]: E0711 00:30:06.079069 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-58z5c" Jul 11 00:30:06.080217 kubelet[2768]: E0711 00:30:06.079091 2768 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-58z5c" Jul 11 00:30:06.080217 kubelet[2768]: E0711 00:30:06.079105 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-6zs7m_kube-system(02706169-3e5d-4d0e-89ad-f0621f887573)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-6zs7m_kube-system(02706169-3e5d-4d0e-89ad-f0621f887573)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6zs7m" podUID="02706169-3e5d-4d0e-89ad-f0621f887573" Jul 11 00:30:06.080386 kubelet[2768]: E0711 00:30:06.079177 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-58z5c_kube-system(d8669555-3bbe-4e3a-b6d1-dde636ebecce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-58z5c_kube-system(d8669555-3bbe-4e3a-b6d1-dde636ebecce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-58z5c" podUID="d8669555-3bbe-4e3a-b6d1-dde636ebecce" Jul 11 00:30:06.198136 containerd[1580]: time="2025-07-11T00:30:06.198061019Z" level=error msg="Failed to destroy network for sandbox \"85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.198633 containerd[1580]: time="2025-07-11T00:30:06.198558810Z" level=error msg="encountered an error cleaning up failed sandbox \"85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.198741 containerd[1580]: time="2025-07-11T00:30:06.198620947Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bdd5cc8b-h8l9c,Uid:a68c6842-d9b5-465c-a1bb-818f4874e778,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.199057 kubelet[2768]: E0711 00:30:06.198992 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.199259 kubelet[2768]: E0711 00:30:06.199067 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9bdd5cc8b-h8l9c" Jul 11 00:30:06.199259 kubelet[2768]: E0711 00:30:06.199093 2768 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9bdd5cc8b-h8l9c" Jul 11 00:30:06.199259 kubelet[2768]: E0711 00:30:06.199157 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9bdd5cc8b-h8l9c_calico-apiserver(a68c6842-d9b5-465c-a1bb-818f4874e778)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9bdd5cc8b-h8l9c_calico-apiserver(a68c6842-d9b5-465c-a1bb-818f4874e778)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9bdd5cc8b-h8l9c" podUID="a68c6842-d9b5-465c-a1bb-818f4874e778" Jul 11 00:30:06.212756 containerd[1580]: time="2025-07-11T00:30:06.212704146Z" level=error msg="Failed to destroy network for sandbox \"a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.213391 containerd[1580]: time="2025-07-11T00:30:06.213200735Z" level=error msg="encountered an error cleaning up failed sandbox \"a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.213391 containerd[1580]: time="2025-07-11T00:30:06.213264846Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-858c6d685b-4rt4c,Uid:22cfc3aa-428e-4766-b882-d4df37109c6b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.213621 kubelet[2768]: E0711 00:30:06.213550 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.213713 kubelet[2768]: E0711 00:30:06.213640 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-858c6d685b-4rt4c" Jul 11 00:30:06.213889 kubelet[2768]: E0711 00:30:06.213850 2768 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-858c6d685b-4rt4c" Jul 11 00:30:06.213968 kubelet[2768]: E0711 00:30:06.213924 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-858c6d685b-4rt4c_calico-system(22cfc3aa-428e-4766-b882-d4df37109c6b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-858c6d685b-4rt4c_calico-system(22cfc3aa-428e-4766-b882-d4df37109c6b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-858c6d685b-4rt4c" podUID="22cfc3aa-428e-4766-b882-d4df37109c6b" Jul 11 00:30:06.217217 containerd[1580]: time="2025-07-11T00:30:06.217175317Z" level=error msg="Failed to destroy network for sandbox 
\"8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.217715 containerd[1580]: time="2025-07-11T00:30:06.217654893Z" level=error msg="encountered an error cleaning up failed sandbox \"8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.217857 containerd[1580]: time="2025-07-11T00:30:06.217813093Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-lld94,Uid:f15d712e-6e3d-479f-9309-711c53706f83,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.218091 kubelet[2768]: E0711 00:30:06.218053 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.218170 kubelet[2768]: E0711 00:30:06.218112 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-lld94" Jul 11 00:30:06.218170 kubelet[2768]: E0711 00:30:06.218140 2768 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-lld94" Jul 11 00:30:06.218250 kubelet[2768]: E0711 00:30:06.218190 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-lld94_calico-system(f15d712e-6e3d-479f-9309-711c53706f83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-lld94_calico-system(f15d712e-6e3d-479f-9309-711c53706f83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-lld94" podUID="f15d712e-6e3d-479f-9309-711c53706f83" Jul 11 00:30:06.223224 containerd[1580]: 
time="2025-07-11T00:30:06.223168294Z" level=error msg="Failed to destroy network for sandbox \"236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.223591 containerd[1580]: time="2025-07-11T00:30:06.223562620Z" level=error msg="encountered an error cleaning up failed sandbox \"236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.223651 containerd[1580]: time="2025-07-11T00:30:06.223624717Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bdd5cc8b-s5k2t,Uid:e88b327d-5464-447f-98d8-ca6429e58f91,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.223959 kubelet[2768]: E0711 00:30:06.223909 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.224049 kubelet[2768]: E0711 00:30:06.223987 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9bdd5cc8b-s5k2t" Jul 11 00:30:06.224049 kubelet[2768]: E0711 00:30:06.224014 2768 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9bdd5cc8b-s5k2t" Jul 11 00:30:06.224144 kubelet[2768]: E0711 00:30:06.224069 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9bdd5cc8b-s5k2t_calico-apiserver(e88b327d-5464-447f-98d8-ca6429e58f91)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9bdd5cc8b-s5k2t_calico-apiserver(e88b327d-5464-447f-98d8-ca6429e58f91)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-9bdd5cc8b-s5k2t" podUID="e88b327d-5464-447f-98d8-ca6429e58f91" Jul 11 00:30:06.226130 containerd[1580]: time="2025-07-11T00:30:06.226069749Z" level=error msg="Failed to destroy network for sandbox \"1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.226549 containerd[1580]: time="2025-07-11T00:30:06.226510071Z" level=error msg="encountered an error cleaning up failed sandbox \"1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.226595 containerd[1580]: time="2025-07-11T00:30:06.226564704Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-755966d9f5-pjj4v,Uid:63fd5fca-6d0d-445a-b193-a8d57e21492f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.226883 kubelet[2768]: E0711 00:30:06.226837 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.226966 kubelet[2768]: E0711 00:30:06.226916 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-755966d9f5-pjj4v" Jul 11 00:30:06.226966 kubelet[2768]: E0711 00:30:06.226943 2768 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-755966d9f5-pjj4v" Jul 11 00:30:06.227033 kubelet[2768]: E0711 00:30:06.226996 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-755966d9f5-pjj4v_calico-system(63fd5fca-6d0d-445a-b193-a8d57e21492f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-755966d9f5-pjj4v_calico-system(63fd5fca-6d0d-445a-b193-a8d57e21492f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-755966d9f5-pjj4v" podUID="63fd5fca-6d0d-445a-b193-a8d57e21492f" Jul 11 00:30:06.413047 kubelet[2768]: I0711 00:30:06.412998 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" Jul 11 00:30:06.414900 kubelet[2768]: I0711 00:30:06.414874 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" Jul 11 00:30:06.419415 kubelet[2768]: I0711 00:30:06.419357 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" Jul 11 00:30:06.424839 kubelet[2768]: I0711 00:30:06.424813 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" Jul 11 00:30:06.425449 containerd[1580]: time="2025-07-11T00:30:06.425412988Z" level=info msg="StopPodSandbox for \"8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037\"" Jul 11 00:30:06.426824 containerd[1580]: time="2025-07-11T00:30:06.426786184Z" level=info msg="StopPodSandbox for \"b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f\"" Jul 11 00:30:06.428917 containerd[1580]: time="2025-07-11T00:30:06.428258196Z" level=info msg="StopPodSandbox for \"236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503\"" Jul 11 00:30:06.441318 containerd[1580]: time="2025-07-11T00:30:06.441247398Z" level=info msg="Ensure that sandbox 236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503 in task-service has been cleanup successfully" Jul 11 00:30:06.443527 containerd[1580]: time="2025-07-11T00:30:06.443470710Z" level=info msg="StopPodSandbox for \"ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f\"" Jul 11 00:30:06.444419 containerd[1580]: time="2025-07-11T00:30:06.443745158Z" level=info msg="Ensure that sandbox ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f in task-service has been cleanup successfully" Jul 11 00:30:06.444504 kubelet[2768]: I0711 00:30:06.443887 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" Jul 11 00:30:06.444577 containerd[1580]: time="2025-07-11T00:30:06.444463727Z" level=info msg="Ensure that sandbox 8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037 in task-service has been cleanup successfully" Jul 11 00:30:06.444577 containerd[1580]: time="2025-07-11T00:30:06.444561591Z" level=info msg="StopPodSandbox for \"85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37\"" Jul 11 00:30:06.445061 containerd[1580]: time="2025-07-11T00:30:06.444790383Z" level=info msg="Ensure that sandbox 85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37 in task-service has been cleanup successfully" Jul 11 00:30:06.445572 containerd[1580]: time="2025-07-11T00:30:06.445534480Z" level=info msg="Ensure that sandbox b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f in task-service has been cleanup successfully" Jul 11 00:30:06.448363 kubelet[2768]: I0711 00:30:06.448334 2768 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" Jul 11 00:30:06.449515 containerd[1580]: time="2025-07-11T00:30:06.449454359Z" level=info msg="StopPodSandbox for \"1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0\"" Jul 11 00:30:06.449788 containerd[1580]: time="2025-07-11T00:30:06.449670387Z" level=info msg="Ensure that sandbox 1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0 in task-service has been cleanup successfully" Jul 11 00:30:06.453086 kubelet[2768]: I0711 00:30:06.453041 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" Jul 11 00:30:06.453963 containerd[1580]: time="2025-07-11T00:30:06.453850789Z" level=info msg="StopPodSandbox for \"a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2\"" Jul 11 00:30:06.454102 containerd[1580]: time="2025-07-11T00:30:06.454063431Z" level=info msg="Ensure that sandbox a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2 in task-service has been cleanup successfully" Jul 11 00:30:06.518615 containerd[1580]: time="2025-07-11T00:30:06.518446051Z" level=error msg="StopPodSandbox for \"8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037\" failed" error="failed to destroy network for sandbox \"8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.519548 kubelet[2768]: E0711 00:30:06.518850 2768 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" Jul 11 00:30:06.519548 kubelet[2768]: E0711 00:30:06.518940 2768 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037"} Jul 11 00:30:06.519548 kubelet[2768]: E0711 00:30:06.519014 2768 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f15d712e-6e3d-479f-9309-711c53706f83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:30:06.519548 kubelet[2768]: E0711 00:30:06.519052 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f15d712e-6e3d-479f-9309-711c53706f83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-lld94" 
podUID="f15d712e-6e3d-479f-9309-711c53706f83" Jul 11 00:30:06.524731 containerd[1580]: time="2025-07-11T00:30:06.524649305Z" level=error msg="StopPodSandbox for \"236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503\" failed" error="failed to destroy network for sandbox \"236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.525096 containerd[1580]: time="2025-07-11T00:30:06.525042769Z" level=error msg="StopPodSandbox for \"a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2\" failed" error="failed to destroy network for sandbox \"a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.525236 kubelet[2768]: E0711 00:30:06.525151 2768 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" Jul 11 00:30:06.525236 kubelet[2768]: E0711 00:30:06.525220 2768 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2"} Jul 11 00:30:06.525433 kubelet[2768]: E0711 00:30:06.525265 2768 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"22cfc3aa-428e-4766-b882-d4df37109c6b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:30:06.525433 kubelet[2768]: E0711 00:30:06.525300 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"22cfc3aa-428e-4766-b882-d4df37109c6b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-858c6d685b-4rt4c" podUID="22cfc3aa-428e-4766-b882-d4df37109c6b" Jul 11 00:30:06.525433 kubelet[2768]: E0711 00:30:06.525335 2768 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" Jul 11 00:30:06.525433 kubelet[2768]: E0711 
00:30:06.525360 2768 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503"} Jul 11 00:30:06.525863 kubelet[2768]: E0711 00:30:06.525377 2768 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e88b327d-5464-447f-98d8-ca6429e58f91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:30:06.525863 kubelet[2768]: E0711 00:30:06.525393 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e88b327d-5464-447f-98d8-ca6429e58f91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9bdd5cc8b-s5k2t" podUID="e88b327d-5464-447f-98d8-ca6429e58f91" Jul 11 00:30:06.528905 containerd[1580]: time="2025-07-11T00:30:06.528846679Z" level=error msg="StopPodSandbox for \"b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f\" failed" error="failed to destroy network for sandbox \"b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.529159 kubelet[2768]: E0711 00:30:06.529086 2768 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" Jul 11 00:30:06.529238 kubelet[2768]: E0711 00:30:06.529161 2768 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f"} Jul 11 00:30:06.529238 kubelet[2768]: E0711 00:30:06.529214 2768 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d8669555-3bbe-4e3a-b6d1-dde636ebecce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:30:06.529340 kubelet[2768]: E0711 00:30:06.529242 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d8669555-3bbe-4e3a-b6d1-dde636ebecce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-58z5c" podUID="d8669555-3bbe-4e3a-b6d1-dde636ebecce" Jul 11 00:30:06.530659 containerd[1580]: time="2025-07-11T00:30:06.530329331Z" level=error msg="StopPodSandbox for \"ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f\" failed" error="failed to destroy network for sandbox \"ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.530737 kubelet[2768]: E0711 00:30:06.530483 2768 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" Jul 11 00:30:06.530737 kubelet[2768]: E0711 00:30:06.530510 2768 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f"} Jul 11 00:30:06.530737 kubelet[2768]: E0711 00:30:06.530555 2768 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"02706169-3e5d-4d0e-89ad-f0621f887573\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:30:06.530737 kubelet[2768]: E0711 00:30:06.530574 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"02706169-3e5d-4d0e-89ad-f0621f887573\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6zs7m" podUID="02706169-3e5d-4d0e-89ad-f0621f887573" Jul 11 00:30:06.531815 containerd[1580]: time="2025-07-11T00:30:06.531758623Z" level=error msg="StopPodSandbox for \"1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0\" failed" error="failed to destroy network for sandbox \"1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.531957 kubelet[2768]: E0711 00:30:06.531909 2768 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" Jul 11 00:30:06.531957 kubelet[2768]: E0711 00:30:06.531937 2768 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0"} Jul 11 00:30:06.531957 kubelet[2768]: E0711 00:30:06.531957 2768 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"63fd5fca-6d0d-445a-b193-a8d57e21492f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:30:06.532252 kubelet[2768]: E0711 00:30:06.531975 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"63fd5fca-6d0d-445a-b193-a8d57e21492f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-755966d9f5-pjj4v" podUID="63fd5fca-6d0d-445a-b193-a8d57e21492f" Jul 11 00:30:06.538278 containerd[1580]: time="2025-07-11T00:30:06.538202693Z" level=error msg="StopPodSandbox for \"85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37\" failed" error="failed to destroy network for sandbox \"85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.538501 kubelet[2768]: E0711 00:30:06.538464 2768 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" Jul 11 00:30:06.538584 kubelet[2768]: E0711 00:30:06.538506 2768 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37"} Jul 11 00:30:06.538584 kubelet[2768]: E0711 00:30:06.538538 2768 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a68c6842-d9b5-465c-a1bb-818f4874e778\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:30:06.538584 kubelet[2768]: 
E0711 00:30:06.538558 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a68c6842-d9b5-465c-a1bb-818f4874e778\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9bdd5cc8b-h8l9c" podUID="a68c6842-d9b5-465c-a1bb-818f4874e778" Jul 11 00:30:06.840913 containerd[1580]: time="2025-07-11T00:30:06.840294639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pd2rq,Uid:096ddfe1-2570-48e6-b110-9f8e8c0f803b,Namespace:calico-system,Attempt:0,}" Jul 11 00:30:06.941814 containerd[1580]: time="2025-07-11T00:30:06.941739741Z" level=error msg="Failed to destroy network for sandbox \"50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.942393 containerd[1580]: time="2025-07-11T00:30:06.942355665Z" level=error msg="encountered an error cleaning up failed sandbox \"50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.942454 containerd[1580]: time="2025-07-11T00:30:06.942422632Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pd2rq,Uid:096ddfe1-2570-48e6-b110-9f8e8c0f803b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.943309 kubelet[2768]: E0711 00:30:06.943262 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:06.943974 kubelet[2768]: E0711 00:30:06.943512 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pd2rq" Jul 11 00:30:06.943974 kubelet[2768]: E0711 00:30:06.943551 2768 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pd2rq" Jul 11 00:30:06.943974 kubelet[2768]: E0711 00:30:06.943603 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pd2rq_calico-system(096ddfe1-2570-48e6-b110-9f8e8c0f803b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pd2rq_calico-system(096ddfe1-2570-48e6-b110-9f8e8c0f803b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pd2rq" podUID="096ddfe1-2570-48e6-b110-9f8e8c0f803b" Jul 11 00:30:06.947163 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e-shm.mount: Deactivated successfully. Jul 11 00:30:07.456343 kubelet[2768]: I0711 00:30:07.456298 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" Jul 11 00:30:07.457833 containerd[1580]: time="2025-07-11T00:30:07.457373357Z" level=info msg="StopPodSandbox for \"50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e\"" Jul 11 00:30:07.457833 containerd[1580]: time="2025-07-11T00:30:07.457578204Z" level=info msg="Ensure that sandbox 50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e in task-service has been cleanup successfully" Jul 11 00:30:07.491869 containerd[1580]: time="2025-07-11T00:30:07.491795099Z" level=error msg="StopPodSandbox for \"50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e\" failed" error="failed to destroy network for sandbox \"50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:30:07.492223 kubelet[2768]: E0711 00:30:07.492161 2768 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" Jul 11 00:30:07.492288 kubelet[2768]: E0711 00:30:07.492243 2768 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e"} Jul 11 00:30:07.492320 kubelet[2768]: E0711 00:30:07.492298 2768 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"096ddfe1-2570-48e6-b110-9f8e8c0f803b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:30:07.492413 
kubelet[2768]: E0711 00:30:07.492333 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"096ddfe1-2570-48e6-b110-9f8e8c0f803b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pd2rq" podUID="096ddfe1-2570-48e6-b110-9f8e8c0f803b" Jul 11 00:30:07.619308 kubelet[2768]: I0711 00:30:07.619247 2768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:30:07.619973 kubelet[2768]: E0711 00:30:07.619875 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:30:08.459221 kubelet[2768]: E0711 00:30:08.459166 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:30:12.798175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount292733082.mount: Deactivated successfully. Jul 11 00:30:14.803701 containerd[1580]: time="2025-07-11T00:30:14.803553623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:30:14.818241 containerd[1580]: time="2025-07-11T00:30:14.817893580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 11 00:30:14.923431 containerd[1580]: time="2025-07-11T00:30:14.923346835Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:30:14.927464 containerd[1580]: time="2025-07-11T00:30:14.927416910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:30:14.928313 containerd[1580]: time="2025-07-11T00:30:14.928070434Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 9.52866131s" Jul 11 00:30:14.928313 containerd[1580]: time="2025-07-11T00:30:14.928113585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 11 00:30:14.940633 containerd[1580]: time="2025-07-11T00:30:14.940582409Z" level=info msg="CreateContainer within sandbox \"522e28d4402ab63be89af8ac7b434c2fece37351c1a9b90dd5a3fde05923a6fa\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 11 00:30:14.979511 containerd[1580]: time="2025-07-11T00:30:14.979422801Z" level=info msg="CreateContainer within sandbox \"522e28d4402ab63be89af8ac7b434c2fece37351c1a9b90dd5a3fde05923a6fa\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id 
\"a581d64f8da84e30444877c2ec58c05fb824f7d1eaf60b99e7abe35f9cd2d78f\"" Jul 11 00:30:14.980223 containerd[1580]: time="2025-07-11T00:30:14.980184570Z" level=info msg="StartContainer for \"a581d64f8da84e30444877c2ec58c05fb824f7d1eaf60b99e7abe35f9cd2d78f\"" Jul 11 00:30:15.101374 containerd[1580]: time="2025-07-11T00:30:15.101230277Z" level=info msg="StartContainer for \"a581d64f8da84e30444877c2ec58c05fb824f7d1eaf60b99e7abe35f9cd2d78f\" returns successfully" Jul 11 00:30:15.341927 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 11 00:30:15.343057 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 11 00:30:15.830778 kubelet[2768]: I0711 00:30:15.830232 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rlr5j" podStartSLOduration=2.089635916 podStartE2EDuration="29.830214664s" podCreationTimestamp="2025-07-11 00:29:46 +0000 UTC" firstStartedPulling="2025-07-11 00:29:47.188512664 +0000 UTC m=+18.494002809" lastFinishedPulling="2025-07-11 00:30:14.929091412 +0000 UTC m=+46.234581557" observedRunningTime="2025-07-11 00:30:15.829907183 +0000 UTC m=+47.135397328" watchObservedRunningTime="2025-07-11 00:30:15.830214664 +0000 UTC m=+47.135704809" Jul 11 00:30:15.986716 containerd[1580]: time="2025-07-11T00:30:15.984263169Z" level=info msg="StopPodSandbox for \"a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2\"" Jul 11 00:30:16.738768 containerd[1580]: 2025-07-11 00:30:16.113 [INFO][4112] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" Jul 11 00:30:16.738768 containerd[1580]: 2025-07-11 00:30:16.116 [INFO][4112] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" iface="eth0" netns="/var/run/netns/cni-b466f2f8-c9eb-638f-2729-a87dc7eedb34" Jul 11 00:30:16.738768 containerd[1580]: 2025-07-11 00:30:16.117 [INFO][4112] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" iface="eth0" netns="/var/run/netns/cni-b466f2f8-c9eb-638f-2729-a87dc7eedb34" Jul 11 00:30:16.738768 containerd[1580]: 2025-07-11 00:30:16.118 [INFO][4112] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" iface="eth0" netns="/var/run/netns/cni-b466f2f8-c9eb-638f-2729-a87dc7eedb34" Jul 11 00:30:16.738768 containerd[1580]: 2025-07-11 00:30:16.118 [INFO][4112] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" Jul 11 00:30:16.738768 containerd[1580]: 2025-07-11 00:30:16.118 [INFO][4112] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" Jul 11 00:30:16.738768 containerd[1580]: 2025-07-11 00:30:16.646 [INFO][4122] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" HandleID="k8s-pod-network.a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" Workload="localhost-k8s-whisker--858c6d685b--4rt4c-eth0" Jul 11 00:30:16.738768 containerd[1580]: 2025-07-11 00:30:16.647 [INFO][4122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 11 00:30:16.738768 containerd[1580]: 2025-07-11 00:30:16.648 [INFO][4122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:30:16.738768 containerd[1580]: 2025-07-11 00:30:16.730 [WARNING][4122] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" HandleID="k8s-pod-network.a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" Workload="localhost-k8s-whisker--858c6d685b--4rt4c-eth0" Jul 11 00:30:16.738768 containerd[1580]: 2025-07-11 00:30:16.730 [INFO][4122] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" HandleID="k8s-pod-network.a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" Workload="localhost-k8s-whisker--858c6d685b--4rt4c-eth0" Jul 11 00:30:16.738768 containerd[1580]: 2025-07-11 00:30:16.732 [INFO][4122] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:16.738768 containerd[1580]: 2025-07-11 00:30:16.735 [INFO][4112] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" Jul 11 00:30:16.739186 containerd[1580]: time="2025-07-11T00:30:16.738976502Z" level=info msg="TearDown network for sandbox \"a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2\" successfully" Jul 11 00:30:16.739186 containerd[1580]: time="2025-07-11T00:30:16.739008122Z" level=info msg="StopPodSandbox for \"a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2\" returns successfully" Jul 11 00:30:16.742707 systemd[1]: run-netns-cni\x2db466f2f8\x2dc9eb\x2d638f\x2d2729\x2da87dc7eedb34.mount: Deactivated successfully. Jul 11 00:30:16.909628 kubelet[2768]: I0711 00:30:16.909544 2768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7jrl\" (UniqueName: \"kubernetes.io/projected/22cfc3aa-428e-4766-b882-d4df37109c6b-kube-api-access-w7jrl\") pod \"22cfc3aa-428e-4766-b882-d4df37109c6b\" (UID: \"22cfc3aa-428e-4766-b882-d4df37109c6b\") " Jul 11 00:30:16.909628 kubelet[2768]: I0711 00:30:16.909639 2768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/22cfc3aa-428e-4766-b882-d4df37109c6b-whisker-backend-key-pair\") pod \"22cfc3aa-428e-4766-b882-d4df37109c6b\" (UID: \"22cfc3aa-428e-4766-b882-d4df37109c6b\") " Jul 11 00:30:16.910179 kubelet[2768]: I0711 00:30:16.909659 2768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22cfc3aa-428e-4766-b882-d4df37109c6b-whisker-ca-bundle\") pod \"22cfc3aa-428e-4766-b882-d4df37109c6b\" (UID: \"22cfc3aa-428e-4766-b882-d4df37109c6b\") " Jul 11 00:30:16.910292 kubelet[2768]: I0711 00:30:16.910266 2768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22cfc3aa-428e-4766-b882-d4df37109c6b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "22cfc3aa-428e-4766-b882-d4df37109c6b" (UID: "22cfc3aa-428e-4766-b882-d4df37109c6b"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 11 00:30:16.914590 kubelet[2768]: I0711 00:30:16.914544 2768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22cfc3aa-428e-4766-b882-d4df37109c6b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "22cfc3aa-428e-4766-b882-d4df37109c6b" (UID: "22cfc3aa-428e-4766-b882-d4df37109c6b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 11 00:30:16.914590 kubelet[2768]: I0711 00:30:16.914557 2768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22cfc3aa-428e-4766-b882-d4df37109c6b-kube-api-access-w7jrl" (OuterVolumeSpecName: "kube-api-access-w7jrl") pod "22cfc3aa-428e-4766-b882-d4df37109c6b" (UID: "22cfc3aa-428e-4766-b882-d4df37109c6b"). InnerVolumeSpecName "kube-api-access-w7jrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 11 00:30:16.916936 systemd[1]: var-lib-kubelet-pods-22cfc3aa\x2d428e\x2d4766\x2db882\x2dd4df37109c6b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw7jrl.mount: Deactivated successfully. Jul 11 00:30:16.917121 systemd[1]: var-lib-kubelet-pods-22cfc3aa\x2d428e\x2d4766\x2db882\x2dd4df37109c6b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 11 00:30:17.010830 kubelet[2768]: I0711 00:30:17.010661 2768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7jrl\" (UniqueName: \"kubernetes.io/projected/22cfc3aa-428e-4766-b882-d4df37109c6b-kube-api-access-w7jrl\") on node \"localhost\" DevicePath \"\"" Jul 11 00:30:17.010830 kubelet[2768]: I0711 00:30:17.010722 2768 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/22cfc3aa-428e-4766-b882-d4df37109c6b-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 11 00:30:17.010830 kubelet[2768]: I0711 00:30:17.010736 2768 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22cfc3aa-428e-4766-b882-d4df37109c6b-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 11 00:30:18.522427 kubelet[2768]: I0711 00:30:18.522356 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4de31134-f603-404e-9eea-cd7bd1d6ce5d-whisker-backend-key-pair\") pod \"whisker-c5b9c6488-2x65p\" (UID: \"4de31134-f603-404e-9eea-cd7bd1d6ce5d\") " pod="calico-system/whisker-c5b9c6488-2x65p" Jul 11 00:30:18.522427 kubelet[2768]: I0711 00:30:18.522413 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4de31134-f603-404e-9eea-cd7bd1d6ce5d-whisker-ca-bundle\") pod \"whisker-c5b9c6488-2x65p\" (UID: \"4de31134-f603-404e-9eea-cd7bd1d6ce5d\") " pod="calico-system/whisker-c5b9c6488-2x65p" Jul 11 00:30:18.522427 kubelet[2768]: I0711 00:30:18.522439 2768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q558l\" (UniqueName: \"kubernetes.io/projected/4de31134-f603-404e-9eea-cd7bd1d6ce5d-kube-api-access-q558l\") pod \"whisker-c5b9c6488-2x65p\" (UID: \"4de31134-f603-404e-9eea-cd7bd1d6ce5d\") " pod="calico-system/whisker-c5b9c6488-2x65p" Jul 11 00:30:18.559715 kernel: bpftool[4290]: memfd_create() called without 
MFD_EXEC or MFD_NOEXEC_SEAL set Jul 11 00:30:18.670437 containerd[1580]: time="2025-07-11T00:30:18.670394772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c5b9c6488-2x65p,Uid:4de31134-f603-404e-9eea-cd7bd1d6ce5d,Namespace:calico-system,Attempt:0,}" Jul 11 00:30:18.828797 containerd[1580]: time="2025-07-11T00:30:18.827991630Z" level=info msg="StopPodSandbox for \"1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0\"" Jul 11 00:30:18.828797 containerd[1580]: time="2025-07-11T00:30:18.828041884Z" level=info msg="StopPodSandbox for \"b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f\"" Jul 11 00:30:18.830702 kubelet[2768]: I0711 00:30:18.830654 2768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22cfc3aa-428e-4766-b882-d4df37109c6b" path="/var/lib/kubelet/pods/22cfc3aa-428e-4766-b882-d4df37109c6b/volumes" Jul 11 00:30:19.828957 containerd[1580]: time="2025-07-11T00:30:19.828866602Z" level=info msg="StopPodSandbox for \"236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503\"" Jul 11 00:30:20.211009 systemd[1]: Started sshd@9-10.0.0.133:22-10.0.0.1:59404.service - OpenSSH per-connection server daemon (10.0.0.1:59404). Jul 11 00:30:20.471109 systemd-networkd[1247]: vxlan.calico: Link UP Jul 11 00:30:20.471121 systemd-networkd[1247]: vxlan.calico: Gained carrier Jul 11 00:30:20.548437 sshd[4363]: Accepted publickey for core from 10.0.0.1 port 59404 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:30:20.565194 sshd[4363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:30:20.616709 systemd-logind[1559]: New session 10 of user core. Jul 11 00:30:20.622239 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 11 00:30:20.794068 containerd[1580]: 2025-07-11 00:30:20.524 [INFO][4356] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" Jul 11 00:30:20.794068 containerd[1580]: 2025-07-11 00:30:20.533 [INFO][4356] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" iface="eth0" netns="/var/run/netns/cni-fceaeb01-b701-c2f8-7fb5-82b87f8f269e" Jul 11 00:30:20.794068 containerd[1580]: 2025-07-11 00:30:20.536 [INFO][4356] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" iface="eth0" netns="/var/run/netns/cni-fceaeb01-b701-c2f8-7fb5-82b87f8f269e" Jul 11 00:30:20.794068 containerd[1580]: 2025-07-11 00:30:20.555 [INFO][4356] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" iface="eth0" netns="/var/run/netns/cni-fceaeb01-b701-c2f8-7fb5-82b87f8f269e" Jul 11 00:30:20.794068 containerd[1580]: 2025-07-11 00:30:20.555 [INFO][4356] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" Jul 11 00:30:20.794068 containerd[1580]: 2025-07-11 00:30:20.555 [INFO][4356] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" Jul 11 00:30:20.794068 containerd[1580]: 2025-07-11 00:30:20.690 [INFO][4399] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" HandleID="k8s-pod-network.236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" Workload="localhost-k8s-calico--apiserver--9bdd5cc8b--s5k2t-eth0" Jul 11 00:30:20.794068 containerd[1580]: 2025-07-11 00:30:20.690 [INFO][4399] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:20.794068 containerd[1580]: 2025-07-11 00:30:20.691 [INFO][4399] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:30:20.794068 containerd[1580]: 2025-07-11 00:30:20.755 [WARNING][4399] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" HandleID="k8s-pod-network.236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" Workload="localhost-k8s-calico--apiserver--9bdd5cc8b--s5k2t-eth0" Jul 11 00:30:20.794068 containerd[1580]: 2025-07-11 00:30:20.756 [INFO][4399] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" HandleID="k8s-pod-network.236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" Workload="localhost-k8s-calico--apiserver--9bdd5cc8b--s5k2t-eth0" Jul 11 00:30:20.794068 containerd[1580]: 2025-07-11 00:30:20.774 [INFO][4399] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:20.794068 containerd[1580]: 2025-07-11 00:30:20.785 [INFO][4356] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" Jul 11 00:30:20.797714 containerd[1580]: time="2025-07-11T00:30:20.795380560Z" level=info msg="TearDown network for sandbox \"236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503\" successfully" Jul 11 00:30:20.797714 containerd[1580]: time="2025-07-11T00:30:20.795425495Z" level=info msg="StopPodSandbox for \"236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503\" returns successfully" Jul 11 00:30:20.799390 containerd[1580]: time="2025-07-11T00:30:20.799015269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bdd5cc8b-s5k2t,Uid:e88b327d-5464-447f-98d8-ca6429e58f91,Namespace:calico-apiserver,Attempt:1,}" Jul 11 00:30:20.800060 containerd[1580]: 2025-07-11 00:30:20.618 [INFO][4331] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" Jul 11 00:30:20.800060 containerd[1580]: 2025-07-11 00:30:20.619 [INFO][4331] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" iface="eth0" netns="/var/run/netns/cni-f04207f5-7f0a-baeb-d39a-2704d306a1ec" Jul 11 00:30:20.800060 containerd[1580]: 2025-07-11 00:30:20.619 [INFO][4331] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" iface="eth0" netns="/var/run/netns/cni-f04207f5-7f0a-baeb-d39a-2704d306a1ec" Jul 11 00:30:20.800060 containerd[1580]: 2025-07-11 00:30:20.621 [INFO][4331] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" iface="eth0" netns="/var/run/netns/cni-f04207f5-7f0a-baeb-d39a-2704d306a1ec" Jul 11 00:30:20.800060 containerd[1580]: 2025-07-11 00:30:20.621 [INFO][4331] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" Jul 11 00:30:20.800060 containerd[1580]: 2025-07-11 00:30:20.621 [INFO][4331] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" Jul 11 00:30:20.800060 containerd[1580]: 2025-07-11 00:30:20.688 [INFO][4412] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" HandleID="k8s-pod-network.1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" Workload="localhost-k8s-calico--kube--controllers--755966d9f5--pjj4v-eth0" Jul 11 00:30:20.800060 containerd[1580]: 2025-07-11 00:30:20.690 [INFO][4412] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:20.800060 containerd[1580]: 2025-07-11 00:30:20.774 [INFO][4412] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:30:20.800060 containerd[1580]: 2025-07-11 00:30:20.783 [WARNING][4412] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" HandleID="k8s-pod-network.1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" Workload="localhost-k8s-calico--kube--controllers--755966d9f5--pjj4v-eth0" Jul 11 00:30:20.800060 containerd[1580]: 2025-07-11 00:30:20.783 [INFO][4412] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" HandleID="k8s-pod-network.1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" Workload="localhost-k8s-calico--kube--controllers--755966d9f5--pjj4v-eth0" Jul 11 00:30:20.800060 containerd[1580]: 2025-07-11 00:30:20.784 [INFO][4412] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:20.800060 containerd[1580]: 2025-07-11 00:30:20.792 [INFO][4331] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" Jul 11 00:30:20.801266 systemd[1]: run-netns-cni\x2dfceaeb01\x2db701\x2dc2f8\x2d7fb5\x2d82b87f8f269e.mount: Deactivated successfully. 
Jul 11 00:30:20.802105 containerd[1580]: time="2025-07-11T00:30:20.802034618Z" level=info msg="TearDown network for sandbox \"1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0\" successfully" Jul 11 00:30:20.802194 containerd[1580]: time="2025-07-11T00:30:20.802176295Z" level=info msg="StopPodSandbox for \"1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0\" returns successfully" Jul 11 00:30:20.809729 containerd[1580]: time="2025-07-11T00:30:20.808576323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-755966d9f5-pjj4v,Uid:63fd5fca-6d0d-445a-b193-a8d57e21492f,Namespace:calico-system,Attempt:1,}" Jul 11 00:30:20.810249 systemd[1]: run-netns-cni\x2df04207f5\x2d7f0a\x2dbaeb\x2dd39a\x2d2704d306a1ec.mount: Deactivated successfully. Jul 11 00:30:20.831014 containerd[1580]: 2025-07-11 00:30:20.640 [INFO][4332] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" Jul 11 00:30:20.831014 containerd[1580]: 2025-07-11 00:30:20.641 [INFO][4332] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" iface="eth0" netns="/var/run/netns/cni-d0bcfffe-0025-2b5e-1cf5-8f39e3447fe5" Jul 11 00:30:20.831014 containerd[1580]: 2025-07-11 00:30:20.641 [INFO][4332] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" iface="eth0" netns="/var/run/netns/cni-d0bcfffe-0025-2b5e-1cf5-8f39e3447fe5" Jul 11 00:30:20.831014 containerd[1580]: 2025-07-11 00:30:20.641 [INFO][4332] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" iface="eth0" netns="/var/run/netns/cni-d0bcfffe-0025-2b5e-1cf5-8f39e3447fe5" Jul 11 00:30:20.831014 containerd[1580]: 2025-07-11 00:30:20.641 [INFO][4332] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" Jul 11 00:30:20.831014 containerd[1580]: 2025-07-11 00:30:20.641 [INFO][4332] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" Jul 11 00:30:20.831014 containerd[1580]: 2025-07-11 00:30:20.754 [INFO][4427] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" HandleID="k8s-pod-network.b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" Workload="localhost-k8s-coredns--7c65d6cfc9--58z5c-eth0" Jul 11 00:30:20.831014 containerd[1580]: 2025-07-11 00:30:20.755 [INFO][4427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:20.831014 containerd[1580]: 2025-07-11 00:30:20.785 [INFO][4427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:30:20.831014 containerd[1580]: 2025-07-11 00:30:20.798 [WARNING][4427] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" HandleID="k8s-pod-network.b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" Workload="localhost-k8s-coredns--7c65d6cfc9--58z5c-eth0" Jul 11 00:30:20.831014 containerd[1580]: 2025-07-11 00:30:20.798 [INFO][4427] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" HandleID="k8s-pod-network.b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" Workload="localhost-k8s-coredns--7c65d6cfc9--58z5c-eth0" Jul 11 00:30:20.831014 containerd[1580]: 2025-07-11 00:30:20.805 [INFO][4427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:20.831014 containerd[1580]: 2025-07-11 00:30:20.820 [INFO][4332] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" Jul 11 00:30:20.833712 containerd[1580]: time="2025-07-11T00:30:20.831184328Z" level=info msg="StopPodSandbox for \"50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e\"" Jul 11 00:30:20.833712 containerd[1580]: time="2025-07-11T00:30:20.832907189Z" level=info msg="TearDown network for sandbox \"b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f\" successfully" Jul 11 00:30:20.833712 containerd[1580]: time="2025-07-11T00:30:20.832949540Z" level=info msg="StopPodSandbox for \"b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f\" returns successfully" Jul 11 00:30:20.835509 systemd[1]: run-netns-cni\x2dd0bcfffe\x2d0025\x2d2b5e\x2d1cf5\x2d8f39e3447fe5.mount: Deactivated successfully. Jul 11 00:30:20.839722 kubelet[2768]: E0711 00:30:20.839630 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:30:20.847653 containerd[1580]: time="2025-07-11T00:30:20.845788579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-58z5c,Uid:d8669555-3bbe-4e3a-b6d1-dde636ebecce,Namespace:kube-system,Attempt:1,}" Jul 11 00:30:20.847653 containerd[1580]: time="2025-07-11T00:30:20.846660264Z" level=info msg="StopPodSandbox for \"85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37\"" Jul 11 00:30:21.141593 sshd[4363]: pam_unix(sshd:session): session closed for user core Jul 11 00:30:21.151270 systemd[1]: sshd@9-10.0.0.133:22-10.0.0.1:59404.service: Deactivated successfully. Jul 11 00:30:21.159171 systemd[1]: session-10.scope: Deactivated successfully. Jul 11 00:30:21.159232 systemd-logind[1559]: Session 10 logged out. Waiting for processes to exit. Jul 11 00:30:21.165002 systemd-logind[1559]: Removed session 10. 
Jul 11 00:30:21.224312 systemd-networkd[1247]: cali11d45b7d074: Link UP Jul 11 00:30:21.226128 systemd-networkd[1247]: cali11d45b7d074: Gained carrier Jul 11 00:30:21.286409 containerd[1580]: 2025-07-11 00:30:20.755 [INFO][4401] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--c5b9c6488--2x65p-eth0 whisker-c5b9c6488- calico-system 4de31134-f603-404e-9eea-cd7bd1d6ce5d 978 0 2025-07-11 00:30:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:c5b9c6488 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-c5b9c6488-2x65p eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali11d45b7d074 [] [] }} ContainerID="d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83" Namespace="calico-system" Pod="whisker-c5b9c6488-2x65p" WorkloadEndpoint="localhost-k8s-whisker--c5b9c6488--2x65p-" Jul 11 00:30:21.286409 containerd[1580]: 2025-07-11 00:30:20.756 [INFO][4401] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83" Namespace="calico-system" Pod="whisker-c5b9c6488-2x65p" WorkloadEndpoint="localhost-k8s-whisker--c5b9c6488--2x65p-eth0" Jul 11 00:30:21.286409 containerd[1580]: 2025-07-11 00:30:20.878 [INFO][4450] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83" HandleID="k8s-pod-network.d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83" Workload="localhost-k8s-whisker--c5b9c6488--2x65p-eth0" Jul 11 00:30:21.286409 containerd[1580]: 2025-07-11 00:30:20.881 [INFO][4450] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83" HandleID="k8s-pod-network.d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83" Workload="localhost-k8s-whisker--c5b9c6488--2x65p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-c5b9c6488-2x65p", "timestamp":"2025-07-11 00:30:20.878826928 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:30:21.286409 containerd[1580]: 2025-07-11 00:30:20.881 [INFO][4450] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:21.286409 containerd[1580]: 2025-07-11 00:30:20.882 [INFO][4450] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:30:21.286409 containerd[1580]: 2025-07-11 00:30:20.882 [INFO][4450] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:30:21.286409 containerd[1580]: 2025-07-11 00:30:20.900 [INFO][4450] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83" host="localhost" Jul 11 00:30:21.286409 containerd[1580]: 2025-07-11 00:30:20.963 [INFO][4450] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:30:21.286409 containerd[1580]: 2025-07-11 00:30:21.120 [INFO][4450] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:30:21.286409 containerd[1580]: 2025-07-11 00:30:21.125 [INFO][4450] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:30:21.286409 containerd[1580]: 2025-07-11 00:30:21.132 [INFO][4450] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:30:21.286409 containerd[1580]: 2025-07-11 00:30:21.134 [INFO][4450] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83" host="localhost" Jul 11 00:30:21.286409 containerd[1580]: 2025-07-11 00:30:21.146 [INFO][4450] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83 Jul 11 00:30:21.286409 containerd[1580]: 2025-07-11 00:30:21.162 [INFO][4450] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83" host="localhost" Jul 11 00:30:21.286409 containerd[1580]: 2025-07-11 00:30:21.194 [INFO][4450] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83" host="localhost" Jul 11 00:30:21.286409 containerd[1580]: 2025-07-11 00:30:21.195 [INFO][4450] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83" host="localhost" Jul 11 00:30:21.286409 containerd[1580]: 2025-07-11 00:30:21.195 [INFO][4450] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:30:21.286409 containerd[1580]: 2025-07-11 00:30:21.195 [INFO][4450] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83" HandleID="k8s-pod-network.d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83" Workload="localhost-k8s-whisker--c5b9c6488--2x65p-eth0" Jul 11 00:30:21.287715 containerd[1580]: 2025-07-11 00:30:21.213 [INFO][4401] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83" Namespace="calico-system" Pod="whisker-c5b9c6488-2x65p" WorkloadEndpoint="localhost-k8s-whisker--c5b9c6488--2x65p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--c5b9c6488--2x65p-eth0", GenerateName:"whisker-c5b9c6488-", Namespace:"calico-system", SelfLink:"", UID:"4de31134-f603-404e-9eea-cd7bd1d6ce5d", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 30, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c5b9c6488", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-c5b9c6488-2x65p", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali11d45b7d074", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:21.287715 containerd[1580]: 2025-07-11 00:30:21.213 [INFO][4401] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83" Namespace="calico-system" Pod="whisker-c5b9c6488-2x65p" WorkloadEndpoint="localhost-k8s-whisker--c5b9c6488--2x65p-eth0" Jul 11 00:30:21.287715 containerd[1580]: 2025-07-11 00:30:21.213 [INFO][4401] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali11d45b7d074 ContainerID="d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83" Namespace="calico-system" Pod="whisker-c5b9c6488-2x65p" WorkloadEndpoint="localhost-k8s-whisker--c5b9c6488--2x65p-eth0" Jul 11 00:30:21.287715 containerd[1580]: 2025-07-11 00:30:21.231 [INFO][4401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83" Namespace="calico-system" Pod="whisker-c5b9c6488-2x65p" WorkloadEndpoint="localhost-k8s-whisker--c5b9c6488--2x65p-eth0" Jul 11 00:30:21.287715 containerd[1580]: 2025-07-11 00:30:21.232 [INFO][4401] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83" Namespace="calico-system" Pod="whisker-c5b9c6488-2x65p" WorkloadEndpoint="localhost-k8s-whisker--c5b9c6488--2x65p-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--c5b9c6488--2x65p-eth0", GenerateName:"whisker-c5b9c6488-", Namespace:"calico-system", SelfLink:"", UID:"4de31134-f603-404e-9eea-cd7bd1d6ce5d", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 30, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c5b9c6488", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83", Pod:"whisker-c5b9c6488-2x65p", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali11d45b7d074", MAC:"06:24:3d:e6:3f:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:21.287715 containerd[1580]: 2025-07-11 00:30:21.278 [INFO][4401] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83" Namespace="calico-system" Pod="whisker-c5b9c6488-2x65p" WorkloadEndpoint="localhost-k8s-whisker--c5b9c6488--2x65p-eth0" Jul 11 00:30:21.300122 systemd-networkd[1247]: calid3769c44865: Link UP Jul 11 00:30:21.300463 systemd-networkd[1247]: calid3769c44865: Gained carrier Jul 11 00:30:21.312666 containerd[1580]: 2025-07-11 00:30:21.145 [INFO][4482] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" Jul 11 00:30:21.312666 containerd[1580]: 2025-07-11 00:30:21.146 [INFO][4482] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" iface="eth0" netns="/var/run/netns/cni-cfa7f00f-f960-1d86-f5ce-a9d91fc0c2fd" Jul 11 00:30:21.312666 containerd[1580]: 2025-07-11 00:30:21.146 [INFO][4482] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" iface="eth0" netns="/var/run/netns/cni-cfa7f00f-f960-1d86-f5ce-a9d91fc0c2fd" Jul 11 00:30:21.312666 containerd[1580]: 2025-07-11 00:30:21.149 [INFO][4482] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" iface="eth0" netns="/var/run/netns/cni-cfa7f00f-f960-1d86-f5ce-a9d91fc0c2fd" Jul 11 00:30:21.312666 containerd[1580]: 2025-07-11 00:30:21.150 [INFO][4482] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" Jul 11 00:30:21.312666 containerd[1580]: 2025-07-11 00:30:21.151 [INFO][4482] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" Jul 11 00:30:21.312666 containerd[1580]: 2025-07-11 00:30:21.206 [INFO][4586] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" HandleID="k8s-pod-network.85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" Workload="localhost-k8s-calico--apiserver--9bdd5cc8b--h8l9c-eth0" Jul 11 00:30:21.312666 containerd[1580]: 2025-07-11 00:30:21.207 [INFO][4586] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:21.312666 containerd[1580]: 2025-07-11 00:30:21.279 [INFO][4586] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:30:21.312666 containerd[1580]: 2025-07-11 00:30:21.291 [WARNING][4586] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" HandleID="k8s-pod-network.85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" Workload="localhost-k8s-calico--apiserver--9bdd5cc8b--h8l9c-eth0" Jul 11 00:30:21.312666 containerd[1580]: 2025-07-11 00:30:21.291 [INFO][4586] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" HandleID="k8s-pod-network.85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" Workload="localhost-k8s-calico--apiserver--9bdd5cc8b--h8l9c-eth0" Jul 11 00:30:21.312666 containerd[1580]: 2025-07-11 00:30:21.294 [INFO][4586] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:21.312666 containerd[1580]: 2025-07-11 00:30:21.298 [INFO][4482] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" Jul 11 00:30:21.312666 containerd[1580]: time="2025-07-11T00:30:21.310787311Z" level=info msg="TearDown network for sandbox \"85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37\" successfully" Jul 11 00:30:21.312666 containerd[1580]: time="2025-07-11T00:30:21.310822608Z" level=info msg="StopPodSandbox for \"85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37\" returns successfully" Jul 11 00:30:21.317328 containerd[1580]: time="2025-07-11T00:30:21.316909564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bdd5cc8b-h8l9c,Uid:a68c6842-d9b5-465c-a1bb-818f4874e778,Namespace:calico-apiserver,Attempt:1,}" Jul 11 00:30:21.321093 containerd[1580]: 2025-07-11 00:30:21.122 [INFO][4525] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--755966d9f5--pjj4v-eth0 calico-kube-controllers-755966d9f5- calico-system 63fd5fca-6d0d-445a-b193-a8d57e21492f 1014 0 2025-07-11 00:29:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:755966d9f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-755966d9f5-pjj4v eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid3769c44865 [] [] }} ContainerID="c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd" Namespace="calico-system" Pod="calico-kube-controllers-755966d9f5-pjj4v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--755966d9f5--pjj4v-" Jul 11 00:30:21.321093 containerd[1580]: 2025-07-11 00:30:21.122 [INFO][4525] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd" Namespace="calico-system" Pod="calico-kube-controllers-755966d9f5-pjj4v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--755966d9f5--pjj4v-eth0" Jul 11 00:30:21.321093 containerd[1580]: 2025-07-11 00:30:21.197 [INFO][4579] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd" HandleID="k8s-pod-network.c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd" Workload="localhost-k8s-calico--kube--controllers--755966d9f5--pjj4v-eth0" Jul 11 00:30:21.321093 containerd[1580]: 2025-07-11 00:30:21.202 [INFO][4579] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd" HandleID="k8s-pod-network.c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd" Workload="localhost-k8s-calico--kube--controllers--755966d9f5--pjj4v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004edf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-755966d9f5-pjj4v", "timestamp":"2025-07-11 00:30:21.197661051 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:30:21.321093 containerd[1580]: 2025-07-11 00:30:21.203 [INFO][4579] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 11 00:30:21.321093 containerd[1580]: 2025-07-11 00:30:21.203 [INFO][4579] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:30:21.321093 containerd[1580]: 2025-07-11 00:30:21.203 [INFO][4579] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:30:21.321093 containerd[1580]: 2025-07-11 00:30:21.216 [INFO][4579] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd" host="localhost" Jul 11 00:30:21.321093 containerd[1580]: 2025-07-11 00:30:21.227 [INFO][4579] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:30:21.321093 containerd[1580]: 2025-07-11 00:30:21.239 [INFO][4579] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:30:21.321093 containerd[1580]: 2025-07-11 00:30:21.243 [INFO][4579] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:30:21.321093 containerd[1580]: 2025-07-11 00:30:21.246 [INFO][4579] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:30:21.321093 containerd[1580]: 2025-07-11 00:30:21.246 [INFO][4579] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd" host="localhost" Jul 11 00:30:21.321093 containerd[1580]: 2025-07-11 00:30:21.249 [INFO][4579] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd Jul 11 00:30:21.321093 containerd[1580]: 2025-07-11 00:30:21.266 [INFO][4579] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd" host="localhost" Jul 11 00:30:21.321093 containerd[1580]: 2025-07-11 00:30:21.278 [INFO][4579] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd" host="localhost" Jul 11 00:30:21.321093 containerd[1580]: 2025-07-11 00:30:21.278 [INFO][4579] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd" host="localhost" Jul 11 00:30:21.321093 containerd[1580]: 2025-07-11 00:30:21.279 [INFO][4579] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:30:21.321093 containerd[1580]: 2025-07-11 00:30:21.279 [INFO][4579] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd" HandleID="k8s-pod-network.c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd" Workload="localhost-k8s-calico--kube--controllers--755966d9f5--pjj4v-eth0" Jul 11 00:30:21.321616 containerd[1580]: 2025-07-11 00:30:21.293 [INFO][4525] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd" Namespace="calico-system" Pod="calico-kube-controllers-755966d9f5-pjj4v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--755966d9f5--pjj4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--755966d9f5--pjj4v-eth0", GenerateName:"calico-kube-controllers-755966d9f5-", Namespace:"calico-system", SelfLink:"", UID:"63fd5fca-6d0d-445a-b193-a8d57e21492f", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"755966d9f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-755966d9f5-pjj4v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3769c44865", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:21.321616 containerd[1580]: 2025-07-11 00:30:21.293 [INFO][4525] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd" Namespace="calico-system" Pod="calico-kube-controllers-755966d9f5-pjj4v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--755966d9f5--pjj4v-eth0" Jul 11 00:30:21.321616 containerd[1580]: 2025-07-11 00:30:21.293 [INFO][4525] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3769c44865 ContainerID="c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd" Namespace="calico-system" Pod="calico-kube-controllers-755966d9f5-pjj4v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--755966d9f5--pjj4v-eth0" Jul 11 00:30:21.321616 containerd[1580]: 2025-07-11 00:30:21.297 [INFO][4525] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd" Namespace="calico-system" Pod="calico-kube-controllers-755966d9f5-pjj4v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--755966d9f5--pjj4v-eth0" Jul 11 00:30:21.321616 containerd[1580]: 2025-07-11 00:30:21.297 [INFO][4525] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd" Namespace="calico-system" Pod="calico-kube-controllers-755966d9f5-pjj4v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--755966d9f5--pjj4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--755966d9f5--pjj4v-eth0", GenerateName:"calico-kube-controllers-755966d9f5-", Namespace:"calico-system", SelfLink:"", UID:"63fd5fca-6d0d-445a-b193-a8d57e21492f", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"755966d9f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd", Pod:"calico-kube-controllers-755966d9f5-pjj4v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3769c44865", MAC:"1a:21:b4:7a:a6:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:21.321616 containerd[1580]: 2025-07-11 00:30:21.317 [INFO][4525] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd" Namespace="calico-system" Pod="calico-kube-controllers-755966d9f5-pjj4v" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--755966d9f5--pjj4v-eth0" Jul 11 00:30:21.333192 containerd[1580]: time="2025-07-11T00:30:21.333056933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:30:21.333192 containerd[1580]: time="2025-07-11T00:30:21.333140080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:30:21.333192 containerd[1580]: time="2025-07-11T00:30:21.333155470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:30:21.334253 containerd[1580]: time="2025-07-11T00:30:21.333255499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:30:21.369795 systemd-resolved[1473]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:30:21.401181 containerd[1580]: time="2025-07-11T00:30:21.401070416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c5b9c6488-2x65p,Uid:4de31134-f603-404e-9eea-cd7bd1d6ce5d,Namespace:calico-system,Attempt:0,} returns sandbox id \"d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83\"" Jul 11 00:30:21.402839 containerd[1580]: time="2025-07-11T00:30:21.402803537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 11 00:30:21.432424 containerd[1580]: time="2025-07-11T00:30:21.431624182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:30:21.432424 containerd[1580]: time="2025-07-11T00:30:21.432391199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:30:21.432650 containerd[1580]: time="2025-07-11T00:30:21.432406557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:30:21.432650 containerd[1580]: time="2025-07-11T00:30:21.432542715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:30:21.466272 systemd-resolved[1473]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:30:21.500699 containerd[1580]: time="2025-07-11T00:30:21.500631940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-755966d9f5-pjj4v,Uid:63fd5fca-6d0d-445a-b193-a8d57e21492f,Namespace:calico-system,Attempt:1,} returns sandbox id \"c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd\"" Jul 11 00:30:21.555224 systemd-networkd[1247]: cali89d0291a004: Link UP Jul 11 00:30:21.555998 systemd-networkd[1247]: cali89d0291a004: Gained carrier Jul 11 00:30:21.566215 systemd[1]: run-netns-cni\x2dcfa7f00f\x2df960\x2d1d86\x2df5ce\x2da9d91fc0c2fd.mount: Deactivated successfully. Jul 11 00:30:21.589589 containerd[1580]: 2025-07-11 00:30:21.115 [INFO][4475] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" Jul 11 00:30:21.589589 containerd[1580]: 2025-07-11 00:30:21.115 [INFO][4475] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" iface="eth0" netns="/var/run/netns/cni-d3bcab36-9d77-2850-d5d3-3765493120a9" Jul 11 00:30:21.589589 containerd[1580]: 2025-07-11 00:30:21.115 [INFO][4475] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" iface="eth0" netns="/var/run/netns/cni-d3bcab36-9d77-2850-d5d3-3765493120a9" Jul 11 00:30:21.589589 containerd[1580]: 2025-07-11 00:30:21.116 [INFO][4475] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" iface="eth0" netns="/var/run/netns/cni-d3bcab36-9d77-2850-d5d3-3765493120a9" Jul 11 00:30:21.589589 containerd[1580]: 2025-07-11 00:30:21.116 [INFO][4475] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" Jul 11 00:30:21.589589 containerd[1580]: 2025-07-11 00:30:21.116 [INFO][4475] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" Jul 11 00:30:21.589589 containerd[1580]: 2025-07-11 00:30:21.226 [INFO][4557] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" HandleID="k8s-pod-network.50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" Workload="localhost-k8s-csi--node--driver--pd2rq-eth0" Jul 11 00:30:21.589589 containerd[1580]: 2025-07-11 00:30:21.234 [INFO][4557] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:21.589589 containerd[1580]: 2025-07-11 00:30:21.533 [INFO][4557] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:30:21.589589 containerd[1580]: 2025-07-11 00:30:21.546 [WARNING][4557] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" HandleID="k8s-pod-network.50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" Workload="localhost-k8s-csi--node--driver--pd2rq-eth0" Jul 11 00:30:21.589589 containerd[1580]: 2025-07-11 00:30:21.546 [INFO][4557] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" HandleID="k8s-pod-network.50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" Workload="localhost-k8s-csi--node--driver--pd2rq-eth0" Jul 11 00:30:21.589589 containerd[1580]: 2025-07-11 00:30:21.548 [INFO][4557] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:21.589589 containerd[1580]: 2025-07-11 00:30:21.576 [INFO][4475] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" Jul 11 00:30:21.590928 containerd[1580]: 2025-07-11 00:30:21.120 [INFO][4492] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9bdd5cc8b--s5k2t-eth0 calico-apiserver-9bdd5cc8b- calico-apiserver e88b327d-5464-447f-98d8-ca6429e58f91 1008 0 2025-07-11 00:29:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9bdd5cc8b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9bdd5cc8b-s5k2t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali89d0291a004 [] [] }} ContainerID="50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2" Namespace="calico-apiserver" Pod="calico-apiserver-9bdd5cc8b-s5k2t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bdd5cc8b--s5k2t-" Jul 11 00:30:21.590928 containerd[1580]: 2025-07-11 00:30:21.120 [INFO][4492] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2" Namespace="calico-apiserver" Pod="calico-apiserver-9bdd5cc8b-s5k2t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bdd5cc8b--s5k2t-eth0" Jul 11 00:30:21.590928 containerd[1580]: 2025-07-11 00:30:21.204 [INFO][4577] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2" HandleID="k8s-pod-network.50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2" Workload="localhost-k8s-calico--apiserver--9bdd5cc8b--s5k2t-eth0" Jul 11 00:30:21.590928 containerd[1580]: 2025-07-11 00:30:21.208 [INFO][4577] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2" HandleID="k8s-pod-network.50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2" Workload="localhost-k8s-calico--apiserver--9bdd5cc8b--s5k2t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000289df0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9bdd5cc8b-s5k2t", "timestamp":"2025-07-11 00:30:21.204425315 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:30:21.590928 containerd[1580]: 2025-07-11 00:30:21.208 [INFO][4577] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:21.590928 containerd[1580]: 2025-07-11 00:30:21.294 [INFO][4577] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:30:21.590928 containerd[1580]: 2025-07-11 00:30:21.294 [INFO][4577] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:30:21.590928 containerd[1580]: 2025-07-11 00:30:21.315 [INFO][4577] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2" host="localhost" Jul 11 00:30:21.590928 containerd[1580]: 2025-07-11 00:30:21.479 [INFO][4577] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:30:21.590928 containerd[1580]: 2025-07-11 00:30:21.486 [INFO][4577] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:30:21.590928 containerd[1580]: 2025-07-11 00:30:21.489 [INFO][4577] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:30:21.590928 containerd[1580]: 2025-07-11 00:30:21.491 [INFO][4577] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:30:21.590928 containerd[1580]: 2025-07-11 00:30:21.491 [INFO][4577] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2" host="localhost" Jul 11 00:30:21.590928 containerd[1580]: 2025-07-11 00:30:21.493 [INFO][4577] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2 Jul 11 00:30:21.590928 containerd[1580]: 2025-07-11 00:30:21.505 [INFO][4577] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2" host="localhost" Jul 11 00:30:21.590928 containerd[1580]: 2025-07-11 00:30:21.532 [INFO][4577] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2" host="localhost" Jul 11 00:30:21.590928 containerd[1580]: 2025-07-11 00:30:21.532 [INFO][4577] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2" host="localhost" Jul 11 00:30:21.590928 containerd[1580]: 2025-07-11 00:30:21.533 [INFO][4577] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:30:21.590928 containerd[1580]: 2025-07-11 00:30:21.534 [INFO][4577] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2" HandleID="k8s-pod-network.50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2" Workload="localhost-k8s-calico--apiserver--9bdd5cc8b--s5k2t-eth0" Jul 11 00:30:21.593760 containerd[1580]: 2025-07-11 00:30:21.547 [INFO][4492] cni-plugin/k8s.go 418: Populated endpoint ContainerID="50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2" Namespace="calico-apiserver" Pod="calico-apiserver-9bdd5cc8b-s5k2t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bdd5cc8b--s5k2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9bdd5cc8b--s5k2t-eth0", GenerateName:"calico-apiserver-9bdd5cc8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e88b327d-5464-447f-98d8-ca6429e58f91", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9bdd5cc8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9bdd5cc8b-s5k2t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89d0291a004", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:21.593760 containerd[1580]: 2025-07-11 00:30:21.547 [INFO][4492] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2" Namespace="calico-apiserver" Pod="calico-apiserver-9bdd5cc8b-s5k2t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bdd5cc8b--s5k2t-eth0" Jul 11 00:30:21.593760 containerd[1580]: 2025-07-11 00:30:21.547 [INFO][4492] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89d0291a004 ContainerID="50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2" Namespace="calico-apiserver" Pod="calico-apiserver-9bdd5cc8b-s5k2t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bdd5cc8b--s5k2t-eth0" Jul 11 00:30:21.593760 containerd[1580]: 2025-07-11 00:30:21.555 [INFO][4492] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2" Namespace="calico-apiserver" Pod="calico-apiserver-9bdd5cc8b-s5k2t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bdd5cc8b--s5k2t-eth0" Jul 11 00:30:21.593760 containerd[1580]: 2025-07-11 00:30:21.558 [INFO][4492] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2" Namespace="calico-apiserver" Pod="calico-apiserver-9bdd5cc8b-s5k2t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bdd5cc8b--s5k2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9bdd5cc8b--s5k2t-eth0", GenerateName:"calico-apiserver-9bdd5cc8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e88b327d-5464-447f-98d8-ca6429e58f91", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9bdd5cc8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2", Pod:"calico-apiserver-9bdd5cc8b-s5k2t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89d0291a004", MAC:"12:17:8c:a5:81:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:21.593760 containerd[1580]: 2025-07-11 00:30:21.584 [INFO][4492] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2" Namespace="calico-apiserver" Pod="calico-apiserver-9bdd5cc8b-s5k2t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bdd5cc8b--s5k2t-eth0" Jul 11 00:30:21.595540 containerd[1580]: time="2025-07-11T00:30:21.594067984Z" level=info msg="TearDown network for sandbox \"50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e\" successfully" Jul 11 00:30:21.595540 containerd[1580]: time="2025-07-11T00:30:21.594103471Z" level=info msg="StopPodSandbox for \"50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e\" returns successfully" Jul 11 00:30:21.595540 containerd[1580]: time="2025-07-11T00:30:21.594958975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pd2rq,Uid:096ddfe1-2570-48e6-b110-9f8e8c0f803b,Namespace:calico-system,Attempt:1,}" Jul 11 00:30:21.597415 systemd[1]: run-netns-cni\x2dd3bcab36\x2d9d77\x2d2850\x2dd5d3\x2d3765493120a9.mount: Deactivated successfully. Jul 11 00:30:21.829010 containerd[1580]: time="2025-07-11T00:30:21.828941512Z" level=info msg="StopPodSandbox for \"8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037\"" Jul 11 00:30:21.829567 containerd[1580]: time="2025-07-11T00:30:21.829508393Z" level=info msg="StopPodSandbox for \"ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f\"" Jul 11 00:30:21.856999 systemd-networkd[1247]: vxlan.calico: Gained IPv6LL Jul 11 00:30:21.881513 containerd[1580]: time="2025-07-11T00:30:21.881393179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:30:21.881513 containerd[1580]: time="2025-07-11T00:30:21.881456228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:30:21.881513 containerd[1580]: time="2025-07-11T00:30:21.881469153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:30:21.882173 containerd[1580]: time="2025-07-11T00:30:21.881596583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:30:21.909086 systemd-resolved[1473]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:30:21.944580 containerd[1580]: time="2025-07-11T00:30:21.944500313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bdd5cc8b-s5k2t,Uid:e88b327d-5464-447f-98d8-ca6429e58f91,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2\"" Jul 11 00:30:21.989712 systemd-networkd[1247]: cali03076fb1967: Link UP Jul 11 00:30:21.991878 systemd-networkd[1247]: cali03076fb1967: Gained carrier Jul 11 00:30:22.195603 containerd[1580]: 2025-07-11 00:30:21.225 [INFO][4562] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--58z5c-eth0 coredns-7c65d6cfc9- kube-system d8669555-3bbe-4e3a-b6d1-dde636ebecce 1015 0 2025-07-11 00:29:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-58z5c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali03076fb1967 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-58z5c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--58z5c-" Jul 11 00:30:22.195603 containerd[1580]: 2025-07-11 00:30:21.225 [INFO][4562] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-58z5c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--58z5c-eth0" Jul 11 00:30:22.195603 containerd[1580]: 2025-07-11 00:30:21.287 [INFO][4612] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7" HandleID="k8s-pod-network.2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7" Workload="localhost-k8s-coredns--7c65d6cfc9--58z5c-eth0" Jul 11 00:30:22.195603 containerd[1580]: 2025-07-11 00:30:21.287 [INFO][4612] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7" HandleID="k8s-pod-network.2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7" Workload="localhost-k8s-coredns--7c65d6cfc9--58z5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000399c00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-58z5c", "timestamp":"2025-07-11 00:30:21.287360394 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:30:22.195603 containerd[1580]: 2025-07-11 00:30:21.287 [INFO][4612] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:22.195603 containerd[1580]: 2025-07-11 00:30:21.548 [INFO][4612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:30:22.195603 containerd[1580]: 2025-07-11 00:30:21.549 [INFO][4612] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:30:22.195603 containerd[1580]: 2025-07-11 00:30:21.561 [INFO][4612] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7" host="localhost" Jul 11 00:30:22.195603 containerd[1580]: 2025-07-11 00:30:21.583 [INFO][4612] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:30:22.195603 containerd[1580]: 2025-07-11 00:30:21.595 [INFO][4612] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:30:22.195603 containerd[1580]: 2025-07-11 00:30:21.598 [INFO][4612] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:30:22.195603 containerd[1580]: 2025-07-11 00:30:21.611 [INFO][4612] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:30:22.195603 containerd[1580]: 2025-07-11 00:30:21.613 [INFO][4612] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7" host="localhost" Jul 11 00:30:22.195603 containerd[1580]: 2025-07-11 00:30:21.618 [INFO][4612] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7 Jul 11 00:30:22.195603 containerd[1580]: 2025-07-11 00:30:21.723 [INFO][4612] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7" host="localhost" Jul 11 00:30:22.195603 containerd[1580]: 2025-07-11 00:30:21.980 [INFO][4612] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7" host="localhost" Jul 11 00:30:22.195603 containerd[1580]: 2025-07-11 00:30:21.980 [INFO][4612] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7" host="localhost" Jul 11 00:30:22.195603 containerd[1580]: 2025-07-11 00:30:21.980 [INFO][4612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:30:22.195603 containerd[1580]: 2025-07-11 00:30:21.980 [INFO][4612] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7" HandleID="k8s-pod-network.2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7" Workload="localhost-k8s-coredns--7c65d6cfc9--58z5c-eth0" Jul 11 00:30:22.196247 containerd[1580]: 2025-07-11 00:30:21.986 [INFO][4562] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-58z5c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--58z5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--58z5c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d8669555-3bbe-4e3a-b6d1-dde636ebecce", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-58z5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali03076fb1967", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:22.196247 containerd[1580]: 2025-07-11 00:30:21.986 [INFO][4562] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-58z5c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--58z5c-eth0" Jul 11 00:30:22.196247 containerd[1580]: 2025-07-11 00:30:21.986 [INFO][4562] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali03076fb1967 ContainerID="2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-58z5c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--58z5c-eth0" Jul 11 00:30:22.196247 containerd[1580]: 2025-07-11 00:30:21.990 [INFO][4562] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-58z5c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--58z5c-eth0" Jul 11 00:30:22.196247 
containerd[1580]: 2025-07-11 00:30:21.992 [INFO][4562] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-58z5c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--58z5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--58z5c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d8669555-3bbe-4e3a-b6d1-dde636ebecce", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7", Pod:"coredns-7c65d6cfc9-58z5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali03076fb1967", MAC:"ce:4d:56:39:69:dc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:22.196247 containerd[1580]: 2025-07-11 00:30:22.191 [INFO][4562] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-58z5c" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--58z5c-eth0" Jul 11 00:30:22.358375 containerd[1580]: time="2025-07-11T00:30:22.358268861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:30:22.358375 containerd[1580]: time="2025-07-11T00:30:22.358350576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:30:22.358375 containerd[1580]: time="2025-07-11T00:30:22.358363911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:30:22.358644 containerd[1580]: time="2025-07-11T00:30:22.358497473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:30:22.395937 systemd-resolved[1473]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:30:22.412468 systemd-networkd[1247]: cali45a8ff2888e: Link UP Jul 11 00:30:22.418848 containerd[1580]: 2025-07-11 00:30:22.187 [INFO][4772] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" Jul 11 00:30:22.418848 containerd[1580]: 2025-07-11 00:30:22.188 [INFO][4772] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" iface="eth0" netns="/var/run/netns/cni-4217cda7-4f1f-3a84-bb61-f7cdc867b5fe" Jul 11 00:30:22.418848 containerd[1580]: 2025-07-11 00:30:22.189 [INFO][4772] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" iface="eth0" netns="/var/run/netns/cni-4217cda7-4f1f-3a84-bb61-f7cdc867b5fe" Jul 11 00:30:22.418848 containerd[1580]: 2025-07-11 00:30:22.190 [INFO][4772] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" iface="eth0" netns="/var/run/netns/cni-4217cda7-4f1f-3a84-bb61-f7cdc867b5fe" Jul 11 00:30:22.418848 containerd[1580]: 2025-07-11 00:30:22.190 [INFO][4772] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" Jul 11 00:30:22.418848 containerd[1580]: 2025-07-11 00:30:22.190 [INFO][4772] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" Jul 11 00:30:22.418848 containerd[1580]: 2025-07-11 00:30:22.225 [INFO][4841] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" HandleID="k8s-pod-network.ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" Workload="localhost-k8s-coredns--7c65d6cfc9--6zs7m-eth0" Jul 11 00:30:22.418848 containerd[1580]: 2025-07-11 00:30:22.225 [INFO][4841] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:22.418848 containerd[1580]: 2025-07-11 00:30:22.392 [INFO][4841] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:30:22.418848 containerd[1580]: 2025-07-11 00:30:22.399 [WARNING][4841] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" HandleID="k8s-pod-network.ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" Workload="localhost-k8s-coredns--7c65d6cfc9--6zs7m-eth0" Jul 11 00:30:22.418848 containerd[1580]: 2025-07-11 00:30:22.399 [INFO][4841] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" HandleID="k8s-pod-network.ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" Workload="localhost-k8s-coredns--7c65d6cfc9--6zs7m-eth0" Jul 11 00:30:22.418848 containerd[1580]: 2025-07-11 00:30:22.401 [INFO][4841] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:22.418848 containerd[1580]: 2025-07-11 00:30:22.405 [INFO][4772] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" Jul 11 00:30:22.417273 systemd-networkd[1247]: cali45a8ff2888e: Gained carrier Jul 11 00:30:22.421609 containerd[1580]: time="2025-07-11T00:30:22.420352875Z" level=info msg="TearDown network for sandbox \"ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f\" successfully" Jul 11 00:30:22.421609 containerd[1580]: time="2025-07-11T00:30:22.420400484Z" level=info msg="StopPodSandbox for \"ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f\" returns successfully" Jul 11 00:30:22.421872 kubelet[2768]: E0711 00:30:22.420918 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:30:22.429569 containerd[1580]: time="2025-07-11T00:30:22.428881477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6zs7m,Uid:02706169-3e5d-4d0e-89ad-f0621f887573,Namespace:kube-system,Attempt:1,}" Jul 11 00:30:22.455305 containerd[1580]: 2025-07-11 00:30:22.188 [INFO][4773] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" Jul 11 00:30:22.455305 containerd[1580]: 2025-07-11 00:30:22.188 [INFO][4773] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" iface="eth0" netns="/var/run/netns/cni-7e866d10-4b9b-c83a-abf5-5a8103b6392a" Jul 11 00:30:22.455305 containerd[1580]: 2025-07-11 00:30:22.188 [INFO][4773] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" iface="eth0" netns="/var/run/netns/cni-7e866d10-4b9b-c83a-abf5-5a8103b6392a" Jul 11 00:30:22.455305 containerd[1580]: 2025-07-11 00:30:22.190 [INFO][4773] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" iface="eth0" netns="/var/run/netns/cni-7e866d10-4b9b-c83a-abf5-5a8103b6392a" Jul 11 00:30:22.455305 containerd[1580]: 2025-07-11 00:30:22.190 [INFO][4773] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" Jul 11 00:30:22.455305 containerd[1580]: 2025-07-11 00:30:22.190 [INFO][4773] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" Jul 11 00:30:22.455305 containerd[1580]: 2025-07-11 00:30:22.244 [INFO][4840] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" HandleID="k8s-pod-network.8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" Workload="localhost-k8s-goldmane--58fd7646b9--lld94-eth0" Jul 11 00:30:22.455305 containerd[1580]: 2025-07-11 00:30:22.244 [INFO][4840] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:22.455305 containerd[1580]: 2025-07-11 00:30:22.401 [INFO][4840] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:30:22.455305 containerd[1580]: 2025-07-11 00:30:22.417 [WARNING][4840] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" HandleID="k8s-pod-network.8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" Workload="localhost-k8s-goldmane--58fd7646b9--lld94-eth0" Jul 11 00:30:22.455305 containerd[1580]: 2025-07-11 00:30:22.417 [INFO][4840] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" HandleID="k8s-pod-network.8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" Workload="localhost-k8s-goldmane--58fd7646b9--lld94-eth0" Jul 11 00:30:22.455305 containerd[1580]: 2025-07-11 00:30:22.424 [INFO][4840] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:22.455305 containerd[1580]: 2025-07-11 00:30:22.446 [INFO][4773] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" Jul 11 00:30:22.455305 containerd[1580]: time="2025-07-11T00:30:22.452857326Z" level=info msg="TearDown network for sandbox \"8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037\" successfully" Jul 11 00:30:22.455305 containerd[1580]: time="2025-07-11T00:30:22.453015303Z" level=info msg="StopPodSandbox for \"8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037\" returns successfully" Jul 11 00:30:22.464008 containerd[1580]: time="2025-07-11T00:30:22.462313158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-lld94,Uid:f15d712e-6e3d-479f-9309-711c53706f83,Namespace:calico-system,Attempt:1,}" Jul 11 00:30:22.476600 containerd[1580]: time="2025-07-11T00:30:22.476497501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-58z5c,Uid:d8669555-3bbe-4e3a-b6d1-dde636ebecce,Namespace:kube-system,Attempt:1,} returns sandbox id \"2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7\"" Jul 11 00:30:22.478568 kubelet[2768]: E0711 00:30:22.478536 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:30:22.478815 containerd[1580]: 2025-07-11 00:30:21.727 [INFO][4728] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9bdd5cc8b--h8l9c-eth0 calico-apiserver-9bdd5cc8b- calico-apiserver a68c6842-d9b5-465c-a1bb-818f4874e778 1021 0 2025-07-11 00:29:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9bdd5cc8b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9bdd5cc8b-h8l9c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali45a8ff2888e [] [] }} ContainerID="663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b" Namespace="calico-apiserver" Pod="calico-apiserver-9bdd5cc8b-h8l9c" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bdd5cc8b--h8l9c-" Jul 11 00:30:22.478815 containerd[1580]: 2025-07-11 00:30:21.728 [INFO][4728] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b" Namespace="calico-apiserver" Pod="calico-apiserver-9bdd5cc8b-h8l9c" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bdd5cc8b--h8l9c-eth0" Jul 11 00:30:22.478815 
containerd[1580]: 2025-07-11 00:30:22.030 [INFO][4831] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b" HandleID="k8s-pod-network.663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b" Workload="localhost-k8s-calico--apiserver--9bdd5cc8b--h8l9c-eth0" Jul 11 00:30:22.478815 containerd[1580]: 2025-07-11 00:30:22.030 [INFO][4831] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b" HandleID="k8s-pod-network.663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b" Workload="localhost-k8s-calico--apiserver--9bdd5cc8b--h8l9c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139760), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9bdd5cc8b-h8l9c", "timestamp":"2025-07-11 00:30:22.03073805 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:30:22.478815 containerd[1580]: 2025-07-11 00:30:22.031 [INFO][4831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:22.478815 containerd[1580]: 2025-07-11 00:30:22.031 [INFO][4831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:30:22.478815 containerd[1580]: 2025-07-11 00:30:22.031 [INFO][4831] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:30:22.478815 containerd[1580]: 2025-07-11 00:30:22.189 [INFO][4831] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b" host="localhost" Jul 11 00:30:22.478815 containerd[1580]: 2025-07-11 00:30:22.216 [INFO][4831] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:30:22.478815 containerd[1580]: 2025-07-11 00:30:22.223 [INFO][4831] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:30:22.478815 containerd[1580]: 2025-07-11 00:30:22.226 [INFO][4831] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:30:22.478815 containerd[1580]: 2025-07-11 00:30:22.228 [INFO][4831] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:30:22.478815 containerd[1580]: 2025-07-11 00:30:22.229 [INFO][4831] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b" host="localhost" Jul 11 00:30:22.478815 containerd[1580]: 2025-07-11 00:30:22.232 [INFO][4831] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b Jul 11 00:30:22.478815 containerd[1580]: 2025-07-11 00:30:22.250 [INFO][4831] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b" host="localhost" Jul 11 00:30:22.478815 containerd[1580]: 2025-07-11 00:30:22.391 [INFO][4831] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b" host="localhost" Jul 11 00:30:22.478815 
containerd[1580]: 2025-07-11 00:30:22.391 [INFO][4831] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b" host="localhost" Jul 11 00:30:22.478815 containerd[1580]: 2025-07-11 00:30:22.392 [INFO][4831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:22.478815 containerd[1580]: 2025-07-11 00:30:22.392 [INFO][4831] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b" HandleID="k8s-pod-network.663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b" Workload="localhost-k8s-calico--apiserver--9bdd5cc8b--h8l9c-eth0" Jul 11 00:30:22.480413 containerd[1580]: 2025-07-11 00:30:22.407 [INFO][4728] cni-plugin/k8s.go 418: Populated endpoint ContainerID="663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b" Namespace="calico-apiserver" Pod="calico-apiserver-9bdd5cc8b-h8l9c" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bdd5cc8b--h8l9c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9bdd5cc8b--h8l9c-eth0", GenerateName:"calico-apiserver-9bdd5cc8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"a68c6842-d9b5-465c-a1bb-818f4874e778", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9bdd5cc8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9bdd5cc8b-h8l9c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali45a8ff2888e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:22.480413 containerd[1580]: 2025-07-11 00:30:22.407 [INFO][4728] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b" Namespace="calico-apiserver" Pod="calico-apiserver-9bdd5cc8b-h8l9c" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bdd5cc8b--h8l9c-eth0" Jul 11 00:30:22.480413 containerd[1580]: 2025-07-11 00:30:22.407 [INFO][4728] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali45a8ff2888e ContainerID="663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b" Namespace="calico-apiserver" Pod="calico-apiserver-9bdd5cc8b-h8l9c" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bdd5cc8b--h8l9c-eth0" Jul 11 00:30:22.480413 containerd[1580]: 2025-07-11 00:30:22.420 [INFO][4728] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b" Namespace="calico-apiserver" Pod="calico-apiserver-9bdd5cc8b-h8l9c" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bdd5cc8b--h8l9c-eth0" Jul 11 00:30:22.480413 containerd[1580]: 2025-07-11 00:30:22.433 [INFO][4728] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b" Namespace="calico-apiserver" Pod="calico-apiserver-9bdd5cc8b-h8l9c" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bdd5cc8b--h8l9c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9bdd5cc8b--h8l9c-eth0", GenerateName:"calico-apiserver-9bdd5cc8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"a68c6842-d9b5-465c-a1bb-818f4874e778", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9bdd5cc8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b", Pod:"calico-apiserver-9bdd5cc8b-h8l9c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali45a8ff2888e", MAC:"a6:0e:5e:ff:7c:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:22.480413 containerd[1580]: 2025-07-11 00:30:22.460 [INFO][4728] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b" Namespace="calico-apiserver" Pod="calico-apiserver-9bdd5cc8b-h8l9c" WorkloadEndpoint="localhost-k8s-calico--apiserver--9bdd5cc8b--h8l9c-eth0" Jul 11 00:30:22.483469 containerd[1580]: time="2025-07-11T00:30:22.482982357Z" level=info msg="CreateContainer within sandbox \"2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:30:22.532732 containerd[1580]: time="2025-07-11T00:30:22.532311905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:30:22.532732 containerd[1580]: time="2025-07-11T00:30:22.532416011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:30:22.532732 containerd[1580]: time="2025-07-11T00:30:22.532431340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:30:22.532732 containerd[1580]: time="2025-07-11T00:30:22.532606030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:30:22.577925 containerd[1580]: time="2025-07-11T00:30:22.577860678Z" level=info msg="CreateContainer within sandbox \"2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"292c52db9b6e176b75877411f5256e8a8fdf2970db58c02bf2020d06b8582567\"" Jul 11 00:30:22.580063 containerd[1580]: time="2025-07-11T00:30:22.579820567Z" level=info msg="StartContainer for \"292c52db9b6e176b75877411f5256e8a8fdf2970db58c02bf2020d06b8582567\"" Jul 11 00:30:22.589847 systemd[1]: run-netns-cni\x2d7e866d10\x2d4b9b\x2dc83a\x2dabf5\x2d5a8103b6392a.mount: Deactivated successfully. Jul 11 00:30:22.590091 systemd[1]: run-netns-cni\x2d4217cda7\x2d4f1f\x2d3a84\x2dbb61\x2df7cdc867b5fe.mount: Deactivated successfully. Jul 11 00:30:22.603323 systemd[1]: run-containerd-runc-k8s.io-663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b-runc.HrLYzm.mount: Deactivated successfully. Jul 11 00:30:22.605881 systemd-resolved[1473]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:30:22.607589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount622374187.mount: Deactivated successfully. Jul 11 00:30:22.690239 systemd-networkd[1247]: cali11d45b7d074: Gained IPv6LL Jul 11 00:30:22.691303 systemd-networkd[1247]: calid3769c44865: Gained IPv6LL Jul 11 00:30:22.715822 systemd-networkd[1247]: cali99b0732f781: Link UP Jul 11 00:30:22.717909 systemd-networkd[1247]: cali99b0732f781: Gained carrier Jul 11 00:30:22.721638 containerd[1580]: time="2025-07-11T00:30:22.721579014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bdd5cc8b-h8l9c,Uid:a68c6842-d9b5-465c-a1bb-818f4874e778,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b\"" Jul 11 00:30:22.798163 containerd[1580]: 2025-07-11 00:30:22.509 [INFO][4892] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--pd2rq-eth0 csi-node-driver- calico-system 096ddfe1-2570-48e6-b110-9f8e8c0f803b 1020 0 2025-07-11 00:29:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-pd2rq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali99b0732f781 [] [] }} ContainerID="7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538" Namespace="calico-system" Pod="csi-node-driver-pd2rq" WorkloadEndpoint="localhost-k8s-csi--node--driver--pd2rq-" Jul 11 00:30:22.798163 containerd[1580]: 2025-07-11 00:30:22.510 [INFO][4892] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538" Namespace="calico-system" Pod="csi-node-driver-pd2rq" WorkloadEndpoint="localhost-k8s-csi--node--driver--pd2rq-eth0" Jul 11 00:30:22.798163 containerd[1580]: 2025-07-11 00:30:22.585 [INFO][4946] ipam/ipam_plugin.go 225: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538" HandleID="k8s-pod-network.7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538" Workload="localhost-k8s-csi--node--driver--pd2rq-eth0" Jul 11 00:30:22.798163 containerd[1580]: 2025-07-11 00:30:22.585 [INFO][4946] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538" HandleID="k8s-pod-network.7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538" Workload="localhost-k8s-csi--node--driver--pd2rq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e470), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-pd2rq", "timestamp":"2025-07-11 00:30:22.585226967 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:30:22.798163 containerd[1580]: 2025-07-11 00:30:22.586 [INFO][4946] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:22.798163 containerd[1580]: 2025-07-11 00:30:22.586 [INFO][4946] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:30:22.798163 containerd[1580]: 2025-07-11 00:30:22.586 [INFO][4946] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:30:22.798163 containerd[1580]: 2025-07-11 00:30:22.601 [INFO][4946] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538" host="localhost" Jul 11 00:30:22.798163 containerd[1580]: 2025-07-11 00:30:22.618 [INFO][4946] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:30:22.798163 containerd[1580]: 2025-07-11 00:30:22.636 [INFO][4946] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:30:22.798163 containerd[1580]: 2025-07-11 00:30:22.641 [INFO][4946] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:30:22.798163 containerd[1580]: 2025-07-11 00:30:22.645 [INFO][4946] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:30:22.798163 containerd[1580]: 2025-07-11 00:30:22.645 [INFO][4946] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538" host="localhost" Jul 11 00:30:22.798163 containerd[1580]: 2025-07-11 00:30:22.648 [INFO][4946] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538 Jul 11 00:30:22.798163 containerd[1580]: 2025-07-11 00:30:22.658 [INFO][4946] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538" host="localhost" Jul 11 00:30:22.798163 containerd[1580]: 2025-07-11 00:30:22.679 [INFO][4946] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538" host="localhost" Jul 11 00:30:22.798163 containerd[1580]: 2025-07-11 00:30:22.680 [INFO][4946] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] 
handle="k8s-pod-network.7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538" host="localhost" Jul 11 00:30:22.798163 containerd[1580]: 2025-07-11 00:30:22.680 [INFO][4946] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:22.798163 containerd[1580]: 2025-07-11 00:30:22.680 [INFO][4946] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538" HandleID="k8s-pod-network.7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538" Workload="localhost-k8s-csi--node--driver--pd2rq-eth0" Jul 11 00:30:22.799642 containerd[1580]: 2025-07-11 00:30:22.697 [INFO][4892] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538" Namespace="calico-system" Pod="csi-node-driver-pd2rq" WorkloadEndpoint="localhost-k8s-csi--node--driver--pd2rq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pd2rq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"096ddfe1-2570-48e6-b110-9f8e8c0f803b", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-pd2rq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali99b0732f781", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:22.799642 containerd[1580]: 2025-07-11 00:30:22.697 [INFO][4892] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538" Namespace="calico-system" Pod="csi-node-driver-pd2rq" WorkloadEndpoint="localhost-k8s-csi--node--driver--pd2rq-eth0" Jul 11 00:30:22.799642 containerd[1580]: 2025-07-11 00:30:22.697 [INFO][4892] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali99b0732f781 ContainerID="7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538" Namespace="calico-system" Pod="csi-node-driver-pd2rq" WorkloadEndpoint="localhost-k8s-csi--node--driver--pd2rq-eth0" Jul 11 00:30:22.799642 containerd[1580]: 2025-07-11 00:30:22.718 [INFO][4892] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538" Namespace="calico-system" Pod="csi-node-driver-pd2rq" WorkloadEndpoint="localhost-k8s-csi--node--driver--pd2rq-eth0" Jul 11 00:30:22.799642 containerd[1580]: 2025-07-11 00:30:22.719 [INFO][4892] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538" Namespace="calico-system" Pod="csi-node-driver-pd2rq" WorkloadEndpoint="localhost-k8s-csi--node--driver--pd2rq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pd2rq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"096ddfe1-2570-48e6-b110-9f8e8c0f803b", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538", Pod:"csi-node-driver-pd2rq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali99b0732f781", MAC:"1a:79:0f:96:f3:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:22.799642 containerd[1580]: 2025-07-11 00:30:22.791 [INFO][4892] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538" Namespace="calico-system" Pod="csi-node-driver-pd2rq" WorkloadEndpoint="localhost-k8s-csi--node--driver--pd2rq-eth0" Jul 11 00:30:22.871390 containerd[1580]: time="2025-07-11T00:30:22.869869686Z" level=info msg="StartContainer for \"292c52db9b6e176b75877411f5256e8a8fdf2970db58c02bf2020d06b8582567\" returns successfully" Jul 11 00:30:22.890021 containerd[1580]: time="2025-07-11T00:30:22.889840889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:30:22.893478 containerd[1580]: time="2025-07-11T00:30:22.890728142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:30:22.893478 containerd[1580]: time="2025-07-11T00:30:22.891872742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:30:22.893478 containerd[1580]: time="2025-07-11T00:30:22.892470620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:30:22.893649 systemd-networkd[1247]: cali125eb587a2c: Link UP Jul 11 00:30:22.895163 systemd-networkd[1247]: cali125eb587a2c: Gained carrier Jul 11 00:30:22.926063 containerd[1580]: 2025-07-11 00:30:22.709 [INFO][4968] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--6zs7m-eth0 coredns-7c65d6cfc9- kube-system 02706169-3e5d-4d0e-89ad-f0621f887573 1040 0 2025-07-11 00:29:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-6zs7m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali125eb587a2c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6zs7m" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6zs7m-" Jul 11 00:30:22.926063 containerd[1580]: 2025-07-11 00:30:22.710 [INFO][4968] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6zs7m" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6zs7m-eth0" Jul 11 00:30:22.926063 containerd[1580]: 2025-07-11 00:30:22.754 [INFO][5048] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db" HandleID="k8s-pod-network.38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db" Workload="localhost-k8s-coredns--7c65d6cfc9--6zs7m-eth0" Jul 11 00:30:22.926063 containerd[1580]: 2025-07-11 00:30:22.755 [INFO][5048] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db" HandleID="k8s-pod-network.38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db" Workload="localhost-k8s-coredns--7c65d6cfc9--6zs7m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000519270), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-6zs7m", "timestamp":"2025-07-11 00:30:22.754864288 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:30:22.926063 containerd[1580]: 2025-07-11 00:30:22.755 [INFO][5048] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:22.926063 containerd[1580]: 2025-07-11 00:30:22.755 [INFO][5048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
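An aside on the two port notations in this sequence: the Go-struct dumps of v3.WorkloadEndpoint print ports in hex (Port:0x35, Port:0x23c1), while the plugin.go 340 line just above prints the same coredns ports in decimal ([{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }]). They are the same values; a two-line check:

    package main

    import "fmt"

    func main() {
        // Ports as printed in the v3.WorkloadEndpoint struct dumps (hex)
        // versus the plugin.go 340 summary (decimal).
        fmt.Println(0x35)   // 53   -> the "dns" and "dns-tcp" ports
        fmt.Println(0x23c1) // 9153 -> the coredns "metrics" port
    }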
Jul 11 00:30:22.926063 containerd[1580]: 2025-07-11 00:30:22.755 [INFO][5048] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:30:22.926063 containerd[1580]: 2025-07-11 00:30:22.794 [INFO][5048] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db" host="localhost" Jul 11 00:30:22.926063 containerd[1580]: 2025-07-11 00:30:22.851 [INFO][5048] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:30:22.926063 containerd[1580]: 2025-07-11 00:30:22.860 [INFO][5048] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:30:22.926063 containerd[1580]: 2025-07-11 00:30:22.862 [INFO][5048] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:30:22.926063 containerd[1580]: 2025-07-11 00:30:22.864 [INFO][5048] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:30:22.926063 containerd[1580]: 2025-07-11 00:30:22.864 [INFO][5048] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db" host="localhost" Jul 11 00:30:22.926063 containerd[1580]: 2025-07-11 00:30:22.866 [INFO][5048] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db Jul 11 00:30:22.926063 containerd[1580]: 2025-07-11 00:30:22.872 [INFO][5048] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db" host="localhost" Jul 11 00:30:22.926063 containerd[1580]: 2025-07-11 00:30:22.883 [INFO][5048] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db" host="localhost" Jul 11 00:30:22.926063 containerd[1580]: 2025-07-11 00:30:22.883 [INFO][5048] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db" host="localhost" Jul 11 00:30:22.926063 containerd[1580]: 2025-07-11 00:30:22.883 [INFO][5048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
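The [5048] exchange that ends here is Calico's block-affinity IPAM in miniature: acquire the host-wide IPAM lock, confirm this host's affinity for block 192.168.88.128/26, load the block, claim the next free /32 (here 192.168.88.135), write the block back, release the lock. The real implementation is datastore-backed; the sketch below is only a local analogy of the claim step, with a hypothetical block type and an in-process mutex standing in for the host-wide lock:

    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    // block is a hypothetical stand-in for a Calico IPAM block: a CIDR plus
    // a used-address map. The real block lives in the datastore and is
    // written back under the host-wide IPAM lock, as the log lines show.
    type block struct {
        cidr netip.Prefix
        used map[netip.Addr]string // addr -> handle ID
    }

    var ipamLock sync.Mutex // local analogue of the "host-wide IPAM lock"

    // assign claims the next free address in the block for the given handle,
    // mirroring ipam.go 1220 "Attempting to assign 1 addresses from block".
    func assign(b *block, handle string) (netip.Addr, error) {
        ipamLock.Lock()
        defer ipamLock.Unlock() // "Released host-wide IPAM lock."

        for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
            if _, taken := b.used[a]; taken {
                continue
            }
            b.used[a] = handle // "Writing block in order to claim IPs"
            return a, nil
        }
        return netip.Addr{}, fmt.Errorf("block %s is full", b.cidr)
    }

    func main() {
        b := &block{
            cidr: netip.MustParsePrefix("192.168.88.128/26"),
            used: map[netip.Addr]string{},
        }
        // Pre-claim everything below .135, standing in for the addresses
        // already in use earlier in this log.
        for a := b.cidr.Addr(); a.Compare(netip.MustParseAddr("192.168.88.135")) < 0; a = a.Next() {
            b.used[a] = "earlier"
        }
        got, _ := assign(b, "k8s-pod-network.38a2a3bb...")
        fmt.Println(got) // 192.168.88.135, matching the claim above
    }

Serializing on that lock is what makes the "About to acquire" / "Acquired" / "Released" triplets from the concurrent [4831], [4946], [5046] and [5048] handlers interleave cleanly rather than racing for the same address.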
Jul 11 00:30:22.926063 containerd[1580]: 2025-07-11 00:30:22.883 [INFO][5048] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db" HandleID="k8s-pod-network.38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db" Workload="localhost-k8s-coredns--7c65d6cfc9--6zs7m-eth0" Jul 11 00:30:22.927218 containerd[1580]: 2025-07-11 00:30:22.889 [INFO][4968] cni-plugin/k8s.go 418: Populated endpoint ContainerID="38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6zs7m" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6zs7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--6zs7m-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"02706169-3e5d-4d0e-89ad-f0621f887573", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-6zs7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali125eb587a2c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:22.927218 containerd[1580]: 2025-07-11 00:30:22.889 [INFO][4968] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6zs7m" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6zs7m-eth0" Jul 11 00:30:22.927218 containerd[1580]: 2025-07-11 00:30:22.889 [INFO][4968] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali125eb587a2c ContainerID="38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6zs7m" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6zs7m-eth0" Jul 11 00:30:22.927218 containerd[1580]: 2025-07-11 00:30:22.894 [INFO][4968] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6zs7m" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6zs7m-eth0" Jul 11 00:30:22.927218 
containerd[1580]: 2025-07-11 00:30:22.894 [INFO][4968] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6zs7m" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6zs7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--6zs7m-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"02706169-3e5d-4d0e-89ad-f0621f887573", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db", Pod:"coredns-7c65d6cfc9-6zs7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali125eb587a2c", MAC:"ea:bb:fc:a5:03:fe", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:22.927218 containerd[1580]: 2025-07-11 00:30:22.920 [INFO][4968] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6zs7m" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6zs7m-eth0" Jul 11 00:30:22.941572 systemd-resolved[1473]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:30:22.967105 containerd[1580]: time="2025-07-11T00:30:22.966604240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pd2rq,Uid:096ddfe1-2570-48e6-b110-9f8e8c0f803b,Namespace:calico-system,Attempt:1,} returns sandbox id \"7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538\"" Jul 11 00:30:22.972137 containerd[1580]: time="2025-07-11T00:30:22.972036128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:30:22.972137 containerd[1580]: time="2025-07-11T00:30:22.972102804Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:30:22.972137 containerd[1580]: time="2025-07-11T00:30:22.972121810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:30:22.973778 containerd[1580]: time="2025-07-11T00:30:22.973656646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:30:23.008840 systemd-networkd[1247]: cali27c56c3f492: Link UP Jul 11 00:30:23.009999 systemd-networkd[1247]: cali27c56c3f492: Gained carrier Jul 11 00:30:23.012260 systemd-resolved[1473]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:30:23.030785 containerd[1580]: 2025-07-11 00:30:22.710 [INFO][4977] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--lld94-eth0 goldmane-58fd7646b9- calico-system f15d712e-6e3d-479f-9309-711c53706f83 1042 0 2025-07-11 00:29:45 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-lld94 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali27c56c3f492 [] [] }} ContainerID="825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a" Namespace="calico-system" Pod="goldmane-58fd7646b9-lld94" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--lld94-" Jul 11 00:30:23.030785 containerd[1580]: 2025-07-11 00:30:22.710 [INFO][4977] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a" Namespace="calico-system" Pod="goldmane-58fd7646b9-lld94" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--lld94-eth0" Jul 11 00:30:23.030785 containerd[1580]: 2025-07-11 00:30:22.759 [INFO][5046] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a" HandleID="k8s-pod-network.825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a" Workload="localhost-k8s-goldmane--58fd7646b9--lld94-eth0" Jul 11 00:30:23.030785 containerd[1580]: 2025-07-11 00:30:22.760 [INFO][5046] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a" HandleID="k8s-pod-network.825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a" Workload="localhost-k8s-goldmane--58fd7646b9--lld94-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c72d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-lld94", "timestamp":"2025-07-11 00:30:22.759827353 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:30:23.030785 containerd[1580]: 2025-07-11 00:30:22.760 [INFO][5046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:23.030785 containerd[1580]: 2025-07-11 00:30:22.883 [INFO][5046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
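Every host-side interface that systemd-networkd brings up in this log — cali03076fb1967, cali45a8ff2888e, cali99b0732f781, cali125eb587a2c and now cali27c56c3f492 — is "cali" followed by exactly 11 hex characters. Calico derives these names deterministically from the workload identity so the same pod always maps to the same device name; my understanding is that the cni-plugin hashes "namespace.podname" with SHA-1 (a VethNameForWorkload helper), but treat both the helper name and the exact seed string as assumptions in this sketch:

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // vethNameForWorkload sketches Calico's deterministic host-side veth
    // naming: "cali" + first 11 hex chars of SHA-1 over the workload
    // identity. The "namespace.podname" seed is an assumption here.
    func vethNameForWorkload(namespace, podname string) string {
        h := sha1.Sum([]byte(fmt.Sprintf("%s.%s", namespace, podname)))
        return "cali" + hex.EncodeToString(h[:])[:11]
    }

    func main() {
        // If the assumed seed is right, this should reproduce one of the
        // cali* names that systemd-networkd reports gaining carrier above.
        fmt.Println(vethNameForWorkload("kube-system", "coredns-7c65d6cfc9-6zs7m"))
    }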
Jul 11 00:30:23.030785 containerd[1580]: 2025-07-11 00:30:22.884 [INFO][5046] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:30:23.030785 containerd[1580]: 2025-07-11 00:30:22.904 [INFO][5046] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a" host="localhost" Jul 11 00:30:23.030785 containerd[1580]: 2025-07-11 00:30:22.954 [INFO][5046] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:30:23.030785 containerd[1580]: 2025-07-11 00:30:22.963 [INFO][5046] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:30:23.030785 containerd[1580]: 2025-07-11 00:30:22.967 [INFO][5046] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:30:23.030785 containerd[1580]: 2025-07-11 00:30:22.972 [INFO][5046] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:30:23.030785 containerd[1580]: 2025-07-11 00:30:22.972 [INFO][5046] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a" host="localhost" Jul 11 00:30:23.030785 containerd[1580]: 2025-07-11 00:30:22.976 [INFO][5046] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a Jul 11 00:30:23.030785 containerd[1580]: 2025-07-11 00:30:22.985 [INFO][5046] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a" host="localhost" Jul 11 00:30:23.030785 containerd[1580]: 2025-07-11 00:30:22.995 [INFO][5046] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a" host="localhost" Jul 11 00:30:23.030785 containerd[1580]: 2025-07-11 00:30:22.996 [INFO][5046] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a" host="localhost" Jul 11 00:30:23.030785 containerd[1580]: 2025-07-11 00:30:22.996 [INFO][5046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
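With goldmane's address claimed, every pod in this section sits in the single affine block 192.168.88.128/26, handed out sequentially: .132 (coredns-58z5c), .133 (calico-apiserver-9bdd5cc8b-h8l9c), .134 (csi-node-driver-pd2rq), .135 (coredns-7c65d6cfc9-6zs7m), .136 (goldmane-58fd7646b9-lld94). A /26 holds 2^(32-26) = 64 addresses, .128 through .191, so the block is nowhere near exhaustion; the bounds can be checked mechanically:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26")

        // 2^(32-26) = 64 addresses in the block.
        fmt.Println(1 << (32 - block.Bits())) // 64

        // First and last addresses: walk to the final contained address.
        first := block.Addr()
        last := first
        for a := first; block.Contains(a); a = a.Next() {
            last = a
        }
        fmt.Println(first, last) // 192.168.88.128 192.168.88.191

        // The assignments seen in this log all fall inside the block.
        for _, s := range []string{"192.168.88.132", "192.168.88.133",
            "192.168.88.134", "192.168.88.135", "192.168.88.136"} {
            fmt.Println(s, block.Contains(netip.MustParseAddr(s))) // all true
        }
    }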
Jul 11 00:30:23.030785 containerd[1580]: 2025-07-11 00:30:22.996 [INFO][5046] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a" HandleID="k8s-pod-network.825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a" Workload="localhost-k8s-goldmane--58fd7646b9--lld94-eth0" Jul 11 00:30:23.031367 containerd[1580]: 2025-07-11 00:30:23.001 [INFO][4977] cni-plugin/k8s.go 418: Populated endpoint ContainerID="825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a" Namespace="calico-system" Pod="goldmane-58fd7646b9-lld94" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--lld94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--lld94-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"f15d712e-6e3d-479f-9309-711c53706f83", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-lld94", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali27c56c3f492", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:23.031367 containerd[1580]: 2025-07-11 00:30:23.001 [INFO][4977] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a" Namespace="calico-system" Pod="goldmane-58fd7646b9-lld94" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--lld94-eth0" Jul 11 00:30:23.031367 containerd[1580]: 2025-07-11 00:30:23.002 [INFO][4977] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27c56c3f492 ContainerID="825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a" Namespace="calico-system" Pod="goldmane-58fd7646b9-lld94" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--lld94-eth0" Jul 11 00:30:23.031367 containerd[1580]: 2025-07-11 00:30:23.010 [INFO][4977] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a" Namespace="calico-system" Pod="goldmane-58fd7646b9-lld94" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--lld94-eth0" Jul 11 00:30:23.031367 containerd[1580]: 2025-07-11 00:30:23.011 [INFO][4977] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a" Namespace="calico-system" Pod="goldmane-58fd7646b9-lld94" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--lld94-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--lld94-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"f15d712e-6e3d-479f-9309-711c53706f83", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a", Pod:"goldmane-58fd7646b9-lld94", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali27c56c3f492", MAC:"12:df:20:d0:65:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:23.031367 containerd[1580]: 2025-07-11 00:30:23.025 [INFO][4977] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a" Namespace="calico-system" Pod="goldmane-58fd7646b9-lld94" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--lld94-eth0" Jul 11 00:30:23.053103 containerd[1580]: time="2025-07-11T00:30:23.053055602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6zs7m,Uid:02706169-3e5d-4d0e-89ad-f0621f887573,Namespace:kube-system,Attempt:1,} returns sandbox id \"38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db\"" Jul 11 00:30:23.054923 kubelet[2768]: E0711 00:30:23.054572 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:30:23.058564 containerd[1580]: time="2025-07-11T00:30:23.058402129Z" level=info msg="CreateContainer within sandbox \"38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:30:23.069566 containerd[1580]: time="2025-07-11T00:30:23.069427962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:30:23.069566 containerd[1580]: time="2025-07-11T00:30:23.069496551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:30:23.069566 containerd[1580]: time="2025-07-11T00:30:23.069528051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:30:23.069817 containerd[1580]: time="2025-07-11T00:30:23.069637838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:30:23.087691 containerd[1580]: time="2025-07-11T00:30:23.087621738Z" level=info msg="CreateContainer within sandbox \"38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1ae4e389851d770f36f550545705a7e2b3ec76464e53991d13fbc0c0fb162e14\"" Jul 11 00:30:23.088976 containerd[1580]: time="2025-07-11T00:30:23.088940977Z" level=info msg="StartContainer for \"1ae4e389851d770f36f550545705a7e2b3ec76464e53991d13fbc0c0fb162e14\"" Jul 11 00:30:23.106273 systemd-resolved[1473]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:30:23.140696 systemd-networkd[1247]: cali03076fb1967: Gained IPv6LL Jul 11 00:30:23.165077 containerd[1580]: time="2025-07-11T00:30:23.165026236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-lld94,Uid:f15d712e-6e3d-479f-9309-711c53706f83,Namespace:calico-system,Attempt:1,} returns sandbox id \"825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a\"" Jul 11 00:30:23.186925 containerd[1580]: time="2025-07-11T00:30:23.186646948Z" level=info msg="StartContainer for \"1ae4e389851d770f36f550545705a7e2b3ec76464e53991d13fbc0c0fb162e14\" returns successfully" Jul 11 00:30:23.520537 kubelet[2768]: E0711 00:30:23.520502 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:30:23.524624 systemd-networkd[1247]: cali89d0291a004: Gained IPv6LL Jul 11 00:30:23.526850 kubelet[2768]: E0711 00:30:23.526735 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:30:23.584860 systemd-networkd[1247]: cali45a8ff2888e: Gained IPv6LL Jul 11 00:30:23.608182 kubelet[2768]: I0711 00:30:23.607963 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-58z5c" podStartSLOduration=50.607941473 podStartE2EDuration="50.607941473s" podCreationTimestamp="2025-07-11 00:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:30:23.606120478 +0000 UTC m=+54.911610643" watchObservedRunningTime="2025-07-11 00:30:23.607941473 +0000 UTC m=+54.913431618" Jul 11 00:30:23.968877 systemd-networkd[1247]: cali99b0732f781: Gained IPv6LL Jul 11 00:30:24.035188 kubelet[2768]: I0711 00:30:24.035126 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-6zs7m" podStartSLOduration=51.035106329 podStartE2EDuration="51.035106329s" podCreationTimestamp="2025-07-11 00:29:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:30:24.034617036 +0000 UTC m=+55.340107182" watchObservedRunningTime="2025-07-11 00:30:24.035106329 +0000 UTC m=+55.340596474" Jul 11 00:30:24.147789 containerd[1580]: time="2025-07-11T00:30:24.147706014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:30:24.154150 containerd[1580]: time="2025-07-11T00:30:24.154050984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active 
requests=0, bytes read=4661207" Jul 11 00:30:24.160953 containerd[1580]: time="2025-07-11T00:30:24.160884756Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:30:24.164416 containerd[1580]: time="2025-07-11T00:30:24.164337269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:30:24.165744 containerd[1580]: time="2025-07-11T00:30:24.165707796Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 2.762863703s" Jul 11 00:30:24.165829 containerd[1580]: time="2025-07-11T00:30:24.165749033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 11 00:30:24.167376 containerd[1580]: time="2025-07-11T00:30:24.167320017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 11 00:30:24.168442 containerd[1580]: time="2025-07-11T00:30:24.168407899Z" level=info msg="CreateContainer within sandbox \"d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 11 00:30:24.193508 containerd[1580]: time="2025-07-11T00:30:24.193431858Z" level=info msg="CreateContainer within sandbox \"d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"20c3f1c0c55624670084e586ec78e383be9b9d3d71fe5bc6c5aef4f9e3596075\"" Jul 11 00:30:24.194197 containerd[1580]: time="2025-07-11T00:30:24.194168057Z" level=info msg="StartContainer for \"20c3f1c0c55624670084e586ec78e383be9b9d3d71fe5bc6c5aef4f9e3596075\"" Jul 11 00:30:24.327266 containerd[1580]: time="2025-07-11T00:30:24.327120769Z" level=info msg="StartContainer for \"20c3f1c0c55624670084e586ec78e383be9b9d3d71fe5bc6c5aef4f9e3596075\" returns successfully" Jul 11 00:30:24.534413 kubelet[2768]: E0711 00:30:24.534255 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:30:24.534980 kubelet[2768]: E0711 00:30:24.534538 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:30:24.672940 systemd-networkd[1247]: cali125eb587a2c: Gained IPv6LL Jul 11 00:30:24.802191 systemd-networkd[1247]: cali27c56c3f492: Gained IPv6LL Jul 11 00:30:25.536847 kubelet[2768]: E0711 00:30:25.536773 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:30:25.536847 kubelet[2768]: E0711 00:30:25.536856 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:30:26.149457 systemd[1]: Started 
sshd@10-10.0.0.133:22-10.0.0.1:52546.service - OpenSSH per-connection server daemon (10.0.0.1:52546). Jul 11 00:30:26.209710 sshd[5321]: Accepted publickey for core from 10.0.0.1 port 52546 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:30:26.211759 sshd[5321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:30:26.216367 systemd-logind[1559]: New session 11 of user core. Jul 11 00:30:26.221334 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 11 00:30:26.394607 sshd[5321]: pam_unix(sshd:session): session closed for user core Jul 11 00:30:26.398763 systemd[1]: sshd@10-10.0.0.133:22-10.0.0.1:52546.service: Deactivated successfully. Jul 11 00:30:26.401817 systemd-logind[1559]: Session 11 logged out. Waiting for processes to exit. Jul 11 00:30:26.402252 systemd[1]: session-11.scope: Deactivated successfully. Jul 11 00:30:26.403287 systemd-logind[1559]: Removed session 11. Jul 11 00:30:27.919886 containerd[1580]: time="2025-07-11T00:30:27.919796439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:30:27.922499 containerd[1580]: time="2025-07-11T00:30:27.922452508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 11 00:30:27.925206 containerd[1580]: time="2025-07-11T00:30:27.925121412Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:30:27.934365 containerd[1580]: time="2025-07-11T00:30:27.934306186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:30:27.935464 containerd[1580]: time="2025-07-11T00:30:27.935380403Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.768015492s" Jul 11 00:30:27.935532 containerd[1580]: time="2025-07-11T00:30:27.935457048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 11 00:30:27.937698 containerd[1580]: time="2025-07-11T00:30:27.937649392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 00:30:27.948155 containerd[1580]: time="2025-07-11T00:30:27.948027627Z" level=info msg="CreateContainer within sandbox \"c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 11 00:30:28.403453 containerd[1580]: time="2025-07-11T00:30:28.403368906Z" level=info msg="CreateContainer within sandbox \"c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ecbb8b572055fb08f21e284a35f39970641110e9fbc5237330b991a03a864c57\"" Jul 11 00:30:28.404116 containerd[1580]: time="2025-07-11T00:30:28.404083273Z" level=info 
msg="StartContainer for \"ecbb8b572055fb08f21e284a35f39970641110e9fbc5237330b991a03a864c57\"" Jul 11 00:30:28.496837 containerd[1580]: time="2025-07-11T00:30:28.496782294Z" level=info msg="StartContainer for \"ecbb8b572055fb08f21e284a35f39970641110e9fbc5237330b991a03a864c57\" returns successfully" Jul 11 00:30:28.572170 kubelet[2768]: I0711 00:30:28.572065 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-755966d9f5-pjj4v" podStartSLOduration=35.137192562 podStartE2EDuration="41.572041372s" podCreationTimestamp="2025-07-11 00:29:47 +0000 UTC" firstStartedPulling="2025-07-11 00:30:21.502558215 +0000 UTC m=+52.808048360" lastFinishedPulling="2025-07-11 00:30:27.937407025 +0000 UTC m=+59.242897170" observedRunningTime="2025-07-11 00:30:28.568882836 +0000 UTC m=+59.874372981" watchObservedRunningTime="2025-07-11 00:30:28.572041372 +0000 UTC m=+59.877531517" Jul 11 00:30:28.778717 containerd[1580]: time="2025-07-11T00:30:28.778661788Z" level=info msg="StopPodSandbox for \"236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503\"" Jul 11 00:30:29.529121 containerd[1580]: 2025-07-11 00:30:29.107 [WARNING][5424] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9bdd5cc8b--s5k2t-eth0", GenerateName:"calico-apiserver-9bdd5cc8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e88b327d-5464-447f-98d8-ca6429e58f91", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9bdd5cc8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2", Pod:"calico-apiserver-9bdd5cc8b-s5k2t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89d0291a004", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:29.529121 containerd[1580]: 2025-07-11 00:30:29.107 [INFO][5424] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" Jul 11 00:30:29.529121 containerd[1580]: 2025-07-11 00:30:29.107 [INFO][5424] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" iface="eth0" netns="" Jul 11 00:30:29.529121 containerd[1580]: 2025-07-11 00:30:29.107 [INFO][5424] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" Jul 11 00:30:29.529121 containerd[1580]: 2025-07-11 00:30:29.107 [INFO][5424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" Jul 11 00:30:29.529121 containerd[1580]: 2025-07-11 00:30:29.136 [INFO][5435] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" HandleID="k8s-pod-network.236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" Workload="localhost-k8s-calico--apiserver--9bdd5cc8b--s5k2t-eth0" Jul 11 00:30:29.529121 containerd[1580]: 2025-07-11 00:30:29.136 [INFO][5435] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:29.529121 containerd[1580]: 2025-07-11 00:30:29.136 [INFO][5435] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:30:29.529121 containerd[1580]: 2025-07-11 00:30:29.457 [WARNING][5435] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" HandleID="k8s-pod-network.236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" Workload="localhost-k8s-calico--apiserver--9bdd5cc8b--s5k2t-eth0" Jul 11 00:30:29.529121 containerd[1580]: 2025-07-11 00:30:29.459 [INFO][5435] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" HandleID="k8s-pod-network.236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" Workload="localhost-k8s-calico--apiserver--9bdd5cc8b--s5k2t-eth0" Jul 11 00:30:29.529121 containerd[1580]: 2025-07-11 00:30:29.521 [INFO][5435] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:29.529121 containerd[1580]: 2025-07-11 00:30:29.524 [INFO][5424] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" Jul 11 00:30:29.529981 containerd[1580]: time="2025-07-11T00:30:29.529172884Z" level=info msg="TearDown network for sandbox \"236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503\" successfully" Jul 11 00:30:29.529981 containerd[1580]: time="2025-07-11T00:30:29.529208310Z" level=info msg="StopPodSandbox for \"236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503\" returns successfully" Jul 11 00:30:29.530045 containerd[1580]: time="2025-07-11T00:30:29.529979274Z" level=info msg="RemovePodSandbox for \"236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503\"" Jul 11 00:30:29.532162 containerd[1580]: time="2025-07-11T00:30:29.532137645Z" level=info msg="Forcibly stopping sandbox \"236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503\"" Jul 11 00:30:29.993537 containerd[1580]: 2025-07-11 00:30:29.957 [WARNING][5452] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9bdd5cc8b--s5k2t-eth0", GenerateName:"calico-apiserver-9bdd5cc8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e88b327d-5464-447f-98d8-ca6429e58f91", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9bdd5cc8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2", Pod:"calico-apiserver-9bdd5cc8b-s5k2t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89d0291a004", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:29.993537 containerd[1580]: 2025-07-11 00:30:29.958 [INFO][5452] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" Jul 11 00:30:29.993537 containerd[1580]: 2025-07-11 00:30:29.958 [INFO][5452] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" iface="eth0" netns="" Jul 11 00:30:29.993537 containerd[1580]: 2025-07-11 00:30:29.958 [INFO][5452] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" Jul 11 00:30:29.993537 containerd[1580]: 2025-07-11 00:30:29.958 [INFO][5452] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" Jul 11 00:30:29.993537 containerd[1580]: 2025-07-11 00:30:29.980 [INFO][5460] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" HandleID="k8s-pod-network.236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" Workload="localhost-k8s-calico--apiserver--9bdd5cc8b--s5k2t-eth0" Jul 11 00:30:29.993537 containerd[1580]: 2025-07-11 00:30:29.980 [INFO][5460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:29.993537 containerd[1580]: 2025-07-11 00:30:29.980 [INFO][5460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:30:29.993537 containerd[1580]: 2025-07-11 00:30:29.986 [WARNING][5460] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" HandleID="k8s-pod-network.236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" Workload="localhost-k8s-calico--apiserver--9bdd5cc8b--s5k2t-eth0" Jul 11 00:30:29.993537 containerd[1580]: 2025-07-11 00:30:29.986 [INFO][5460] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" HandleID="k8s-pod-network.236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" Workload="localhost-k8s-calico--apiserver--9bdd5cc8b--s5k2t-eth0" Jul 11 00:30:29.993537 containerd[1580]: 2025-07-11 00:30:29.987 [INFO][5460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:29.993537 containerd[1580]: 2025-07-11 00:30:29.990 [INFO][5452] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503" Jul 11 00:30:30.035119 containerd[1580]: time="2025-07-11T00:30:29.993592799Z" level=info msg="TearDown network for sandbox \"236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503\" successfully" Jul 11 00:30:30.258732 containerd[1580]: time="2025-07-11T00:30:30.258529314Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:30:30.258732 containerd[1580]: time="2025-07-11T00:30:30.258638520Z" level=info msg="RemovePodSandbox \"236ba96548d6a7302d82ca88038e4e433b60709601f4f759c7aafcfe32949503\" returns successfully" Jul 11 00:30:30.259662 containerd[1580]: time="2025-07-11T00:30:30.259613539Z" level=info msg="StopPodSandbox for \"85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37\"" Jul 11 00:30:30.345322 containerd[1580]: 2025-07-11 00:30:30.300 [WARNING][5477] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9bdd5cc8b--h8l9c-eth0", GenerateName:"calico-apiserver-9bdd5cc8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"a68c6842-d9b5-465c-a1bb-818f4874e778", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9bdd5cc8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b", Pod:"calico-apiserver-9bdd5cc8b-h8l9c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali45a8ff2888e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:30.345322 containerd[1580]: 2025-07-11 00:30:30.300 [INFO][5477] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" Jul 11 00:30:30.345322 containerd[1580]: 2025-07-11 00:30:30.300 [INFO][5477] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" iface="eth0" netns="" Jul 11 00:30:30.345322 containerd[1580]: 2025-07-11 00:30:30.300 [INFO][5477] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" Jul 11 00:30:30.345322 containerd[1580]: 2025-07-11 00:30:30.300 [INFO][5477] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" Jul 11 00:30:30.345322 containerd[1580]: 2025-07-11 00:30:30.326 [INFO][5486] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" HandleID="k8s-pod-network.85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" Workload="localhost-k8s-calico--apiserver--9bdd5cc8b--h8l9c-eth0" Jul 11 00:30:30.345322 containerd[1580]: 2025-07-11 00:30:30.326 [INFO][5486] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:30.345322 containerd[1580]: 2025-07-11 00:30:30.326 [INFO][5486] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:30:30.345322 containerd[1580]: 2025-07-11 00:30:30.335 [WARNING][5486] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" HandleID="k8s-pod-network.85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" Workload="localhost-k8s-calico--apiserver--9bdd5cc8b--h8l9c-eth0" Jul 11 00:30:30.345322 containerd[1580]: 2025-07-11 00:30:30.335 [INFO][5486] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" HandleID="k8s-pod-network.85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" Workload="localhost-k8s-calico--apiserver--9bdd5cc8b--h8l9c-eth0" Jul 11 00:30:30.345322 containerd[1580]: 2025-07-11 00:30:30.338 [INFO][5486] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:30.345322 containerd[1580]: 2025-07-11 00:30:30.341 [INFO][5477] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" Jul 11 00:30:30.346477 containerd[1580]: time="2025-07-11T00:30:30.345383739Z" level=info msg="TearDown network for sandbox \"85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37\" successfully" Jul 11 00:30:30.346477 containerd[1580]: time="2025-07-11T00:30:30.345436398Z" level=info msg="StopPodSandbox for \"85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37\" returns successfully" Jul 11 00:30:30.346477 containerd[1580]: time="2025-07-11T00:30:30.346055436Z" level=info msg="RemovePodSandbox for \"85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37\"" Jul 11 00:30:30.346477 containerd[1580]: time="2025-07-11T00:30:30.346112944Z" level=info msg="Forcibly stopping sandbox \"85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37\"" Jul 11 00:30:30.434970 containerd[1580]: 2025-07-11 00:30:30.388 [WARNING][5505] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9bdd5cc8b--h8l9c-eth0", GenerateName:"calico-apiserver-9bdd5cc8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"a68c6842-d9b5-465c-a1bb-818f4874e778", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9bdd5cc8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b", Pod:"calico-apiserver-9bdd5cc8b-h8l9c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali45a8ff2888e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:30.434970 containerd[1580]: 2025-07-11 00:30:30.388 [INFO][5505] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" Jul 11 00:30:30.434970 containerd[1580]: 2025-07-11 00:30:30.388 [INFO][5505] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" iface="eth0" netns="" Jul 11 00:30:30.434970 containerd[1580]: 2025-07-11 00:30:30.388 [INFO][5505] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" Jul 11 00:30:30.434970 containerd[1580]: 2025-07-11 00:30:30.388 [INFO][5505] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" Jul 11 00:30:30.434970 containerd[1580]: 2025-07-11 00:30:30.416 [INFO][5513] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" HandleID="k8s-pod-network.85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" Workload="localhost-k8s-calico--apiserver--9bdd5cc8b--h8l9c-eth0" Jul 11 00:30:30.434970 containerd[1580]: 2025-07-11 00:30:30.416 [INFO][5513] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:30.434970 containerd[1580]: 2025-07-11 00:30:30.416 [INFO][5513] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:30:30.434970 containerd[1580]: 2025-07-11 00:30:30.427 [WARNING][5513] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" HandleID="k8s-pod-network.85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" Workload="localhost-k8s-calico--apiserver--9bdd5cc8b--h8l9c-eth0" Jul 11 00:30:30.434970 containerd[1580]: 2025-07-11 00:30:30.427 [INFO][5513] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" HandleID="k8s-pod-network.85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" Workload="localhost-k8s-calico--apiserver--9bdd5cc8b--h8l9c-eth0" Jul 11 00:30:30.434970 containerd[1580]: 2025-07-11 00:30:30.429 [INFO][5513] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:30.434970 containerd[1580]: 2025-07-11 00:30:30.432 [INFO][5505] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37" Jul 11 00:30:30.435400 containerd[1580]: time="2025-07-11T00:30:30.435032114Z" level=info msg="TearDown network for sandbox \"85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37\" successfully" Jul 11 00:30:30.441338 containerd[1580]: time="2025-07-11T00:30:30.441262382Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:30:30.441338 containerd[1580]: time="2025-07-11T00:30:30.441344888Z" level=info msg="RemovePodSandbox \"85becf68340652f6fb5daa88b4b63d01d6f61403fd405ee8084a15db1c6fba37\" returns successfully" Jul 11 00:30:30.442289 containerd[1580]: time="2025-07-11T00:30:30.441939529Z" level=info msg="StopPodSandbox for \"50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e\"" Jul 11 00:30:30.534600 containerd[1580]: 2025-07-11 00:30:30.486 [WARNING][5531] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pd2rq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"096ddfe1-2570-48e6-b110-9f8e8c0f803b", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538", Pod:"csi-node-driver-pd2rq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali99b0732f781", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:30.534600 containerd[1580]: 2025-07-11 00:30:30.487 [INFO][5531] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" Jul 11 00:30:30.534600 containerd[1580]: 2025-07-11 00:30:30.487 [INFO][5531] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" iface="eth0" netns="" Jul 11 00:30:30.534600 containerd[1580]: 2025-07-11 00:30:30.487 [INFO][5531] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" Jul 11 00:30:30.534600 containerd[1580]: 2025-07-11 00:30:30.487 [INFO][5531] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" Jul 11 00:30:30.534600 containerd[1580]: 2025-07-11 00:30:30.515 [INFO][5539] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" HandleID="k8s-pod-network.50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" Workload="localhost-k8s-csi--node--driver--pd2rq-eth0" Jul 11 00:30:30.534600 containerd[1580]: 2025-07-11 00:30:30.515 [INFO][5539] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:30.534600 containerd[1580]: 2025-07-11 00:30:30.515 [INFO][5539] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:30:30.534600 containerd[1580]: 2025-07-11 00:30:30.524 [WARNING][5539] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" HandleID="k8s-pod-network.50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" Workload="localhost-k8s-csi--node--driver--pd2rq-eth0" Jul 11 00:30:30.534600 containerd[1580]: 2025-07-11 00:30:30.524 [INFO][5539] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" HandleID="k8s-pod-network.50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" Workload="localhost-k8s-csi--node--driver--pd2rq-eth0" Jul 11 00:30:30.534600 containerd[1580]: 2025-07-11 00:30:30.527 [INFO][5539] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:30.534600 containerd[1580]: 2025-07-11 00:30:30.531 [INFO][5531] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" Jul 11 00:30:30.534600 containerd[1580]: time="2025-07-11T00:30:30.534507078Z" level=info msg="TearDown network for sandbox \"50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e\" successfully" Jul 11 00:30:30.534600 containerd[1580]: time="2025-07-11T00:30:30.534540901Z" level=info msg="StopPodSandbox for \"50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e\" returns successfully" Jul 11 00:30:30.535796 containerd[1580]: time="2025-07-11T00:30:30.535082994Z" level=info msg="RemovePodSandbox for \"50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e\"" Jul 11 00:30:30.535796 containerd[1580]: time="2025-07-11T00:30:30.535117209Z" level=info msg="Forcibly stopping sandbox \"50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e\"" Jul 11 00:30:30.627262 containerd[1580]: 2025-07-11 00:30:30.582 [WARNING][5557] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pd2rq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"096ddfe1-2570-48e6-b110-9f8e8c0f803b", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538", Pod:"csi-node-driver-pd2rq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali99b0732f781", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:30.627262 containerd[1580]: 2025-07-11 00:30:30.582 [INFO][5557] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" Jul 11 00:30:30.627262 containerd[1580]: 2025-07-11 00:30:30.582 [INFO][5557] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" iface="eth0" netns="" Jul 11 00:30:30.627262 containerd[1580]: 2025-07-11 00:30:30.582 [INFO][5557] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" Jul 11 00:30:30.627262 containerd[1580]: 2025-07-11 00:30:30.582 [INFO][5557] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" Jul 11 00:30:30.627262 containerd[1580]: 2025-07-11 00:30:30.610 [INFO][5566] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" HandleID="k8s-pod-network.50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" Workload="localhost-k8s-csi--node--driver--pd2rq-eth0" Jul 11 00:30:30.627262 containerd[1580]: 2025-07-11 00:30:30.610 [INFO][5566] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:30.627262 containerd[1580]: 2025-07-11 00:30:30.610 [INFO][5566] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:30:30.627262 containerd[1580]: 2025-07-11 00:30:30.617 [WARNING][5566] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" HandleID="k8s-pod-network.50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" Workload="localhost-k8s-csi--node--driver--pd2rq-eth0" Jul 11 00:30:30.627262 containerd[1580]: 2025-07-11 00:30:30.617 [INFO][5566] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" HandleID="k8s-pod-network.50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" Workload="localhost-k8s-csi--node--driver--pd2rq-eth0" Jul 11 00:30:30.627262 containerd[1580]: 2025-07-11 00:30:30.619 [INFO][5566] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:30.627262 containerd[1580]: 2025-07-11 00:30:30.623 [INFO][5557] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e" Jul 11 00:30:30.627774 containerd[1580]: time="2025-07-11T00:30:30.627314339Z" level=info msg="TearDown network for sandbox \"50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e\" successfully" Jul 11 00:30:30.640085 containerd[1580]: time="2025-07-11T00:30:30.640011062Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:30:30.640258 containerd[1580]: time="2025-07-11T00:30:30.640108135Z" level=info msg="RemovePodSandbox \"50414be980d475f1c49c5eb8f7468024bf3c2b2b65f6813e64aba9bb4913373e\" returns successfully" Jul 11 00:30:30.641118 containerd[1580]: time="2025-07-11T00:30:30.640753111Z" level=info msg="StopPodSandbox for \"8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037\"" Jul 11 00:30:30.715925 containerd[1580]: 2025-07-11 00:30:30.677 [WARNING][5584] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--lld94-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"f15d712e-6e3d-479f-9309-711c53706f83", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a", Pod:"goldmane-58fd7646b9-lld94", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali27c56c3f492", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:30.715925 containerd[1580]: 2025-07-11 00:30:30.677 [INFO][5584] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" Jul 11 00:30:30.715925 containerd[1580]: 2025-07-11 00:30:30.677 [INFO][5584] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" iface="eth0" netns="" Jul 11 00:30:30.715925 containerd[1580]: 2025-07-11 00:30:30.677 [INFO][5584] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" Jul 11 00:30:30.715925 containerd[1580]: 2025-07-11 00:30:30.677 [INFO][5584] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" Jul 11 00:30:30.715925 containerd[1580]: 2025-07-11 00:30:30.701 [INFO][5592] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" HandleID="k8s-pod-network.8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" Workload="localhost-k8s-goldmane--58fd7646b9--lld94-eth0" Jul 11 00:30:30.715925 containerd[1580]: 2025-07-11 00:30:30.701 [INFO][5592] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:30.715925 containerd[1580]: 2025-07-11 00:30:30.701 [INFO][5592] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:30:30.715925 containerd[1580]: 2025-07-11 00:30:30.707 [WARNING][5592] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" HandleID="k8s-pod-network.8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" Workload="localhost-k8s-goldmane--58fd7646b9--lld94-eth0" Jul 11 00:30:30.715925 containerd[1580]: 2025-07-11 00:30:30.707 [INFO][5592] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" HandleID="k8s-pod-network.8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" Workload="localhost-k8s-goldmane--58fd7646b9--lld94-eth0" Jul 11 00:30:30.715925 containerd[1580]: 2025-07-11 00:30:30.710 [INFO][5592] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:30.715925 containerd[1580]: 2025-07-11 00:30:30.713 [INFO][5584] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" Jul 11 00:30:30.716514 containerd[1580]: time="2025-07-11T00:30:30.715974149Z" level=info msg="TearDown network for sandbox \"8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037\" successfully" Jul 11 00:30:30.716514 containerd[1580]: time="2025-07-11T00:30:30.716003574Z" level=info msg="StopPodSandbox for \"8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037\" returns successfully" Jul 11 00:30:30.716514 containerd[1580]: time="2025-07-11T00:30:30.716489661Z" level=info msg="RemovePodSandbox for \"8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037\"" Jul 11 00:30:30.716621 containerd[1580]: time="2025-07-11T00:30:30.716523835Z" level=info msg="Forcibly stopping sandbox \"8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037\"" Jul 11 00:30:30.800542 containerd[1580]: 2025-07-11 00:30:30.754 [WARNING][5609] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--lld94-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"f15d712e-6e3d-479f-9309-711c53706f83", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a", Pod:"goldmane-58fd7646b9-lld94", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali27c56c3f492", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:30.800542 containerd[1580]: 2025-07-11 00:30:30.754 [INFO][5609] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" Jul 11 00:30:30.800542 containerd[1580]: 2025-07-11 00:30:30.754 [INFO][5609] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" iface="eth0" netns="" Jul 11 00:30:30.800542 containerd[1580]: 2025-07-11 00:30:30.754 [INFO][5609] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" Jul 11 00:30:30.800542 containerd[1580]: 2025-07-11 00:30:30.755 [INFO][5609] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" Jul 11 00:30:30.800542 containerd[1580]: 2025-07-11 00:30:30.781 [INFO][5618] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" HandleID="k8s-pod-network.8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" Workload="localhost-k8s-goldmane--58fd7646b9--lld94-eth0" Jul 11 00:30:30.800542 containerd[1580]: 2025-07-11 00:30:30.781 [INFO][5618] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:30.800542 containerd[1580]: 2025-07-11 00:30:30.781 [INFO][5618] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:30:30.800542 containerd[1580]: 2025-07-11 00:30:30.788 [WARNING][5618] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" HandleID="k8s-pod-network.8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" Workload="localhost-k8s-goldmane--58fd7646b9--lld94-eth0" Jul 11 00:30:30.800542 containerd[1580]: 2025-07-11 00:30:30.788 [INFO][5618] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" HandleID="k8s-pod-network.8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" Workload="localhost-k8s-goldmane--58fd7646b9--lld94-eth0" Jul 11 00:30:30.800542 containerd[1580]: 2025-07-11 00:30:30.791 [INFO][5618] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:30.800542 containerd[1580]: 2025-07-11 00:30:30.796 [INFO][5609] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037" Jul 11 00:30:30.800542 containerd[1580]: time="2025-07-11T00:30:30.800519540Z" level=info msg="TearDown network for sandbox \"8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037\" successfully" Jul 11 00:30:30.807501 containerd[1580]: time="2025-07-11T00:30:30.807458195Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:30:30.807662 containerd[1580]: time="2025-07-11T00:30:30.807529118Z" level=info msg="RemovePodSandbox \"8ce4840fe097f2a32b5193b85c5e7870d23cc95254ed624560b76e4ce4d2f037\" returns successfully" Jul 11 00:30:30.808093 containerd[1580]: time="2025-07-11T00:30:30.808069317Z" level=info msg="StopPodSandbox for \"a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2\"" Jul 11 00:30:30.883246 containerd[1580]: 2025-07-11 00:30:30.843 [WARNING][5635] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" WorkloadEndpoint="localhost-k8s-whisker--858c6d685b--4rt4c-eth0" Jul 11 00:30:30.883246 containerd[1580]: 2025-07-11 00:30:30.844 [INFO][5635] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" Jul 11 00:30:30.883246 containerd[1580]: 2025-07-11 00:30:30.844 [INFO][5635] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" iface="eth0" netns="" Jul 11 00:30:30.883246 containerd[1580]: 2025-07-11 00:30:30.844 [INFO][5635] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" Jul 11 00:30:30.883246 containerd[1580]: 2025-07-11 00:30:30.844 [INFO][5635] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" Jul 11 00:30:30.883246 containerd[1580]: 2025-07-11 00:30:30.870 [INFO][5645] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" HandleID="k8s-pod-network.a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" Workload="localhost-k8s-whisker--858c6d685b--4rt4c-eth0" Jul 11 00:30:30.883246 containerd[1580]: 2025-07-11 00:30:30.870 [INFO][5645] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:30.883246 containerd[1580]: 2025-07-11 00:30:30.870 [INFO][5645] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:30:30.883246 containerd[1580]: 2025-07-11 00:30:30.876 [WARNING][5645] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" HandleID="k8s-pod-network.a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" Workload="localhost-k8s-whisker--858c6d685b--4rt4c-eth0" Jul 11 00:30:30.883246 containerd[1580]: 2025-07-11 00:30:30.876 [INFO][5645] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" HandleID="k8s-pod-network.a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" Workload="localhost-k8s-whisker--858c6d685b--4rt4c-eth0" Jul 11 00:30:30.883246 containerd[1580]: 2025-07-11 00:30:30.877 [INFO][5645] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:30.883246 containerd[1580]: 2025-07-11 00:30:30.880 [INFO][5635] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" Jul 11 00:30:30.883795 containerd[1580]: time="2025-07-11T00:30:30.883280156Z" level=info msg="TearDown network for sandbox \"a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2\" successfully" Jul 11 00:30:30.883795 containerd[1580]: time="2025-07-11T00:30:30.883305623Z" level=info msg="StopPodSandbox for \"a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2\" returns successfully" Jul 11 00:30:30.898886 containerd[1580]: time="2025-07-11T00:30:30.883910945Z" level=info msg="RemovePodSandbox for \"a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2\"" Jul 11 00:30:30.898886 containerd[1580]: time="2025-07-11T00:30:30.883944107Z" level=info msg="Forcibly stopping sandbox \"a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2\"" Jul 11 00:30:30.960032 containerd[1580]: 2025-07-11 00:30:30.923 [WARNING][5663] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" WorkloadEndpoint="localhost-k8s-whisker--858c6d685b--4rt4c-eth0" Jul 11 00:30:30.960032 containerd[1580]: 2025-07-11 00:30:30.923 [INFO][5663] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" Jul 11 00:30:30.960032 containerd[1580]: 2025-07-11 00:30:30.923 [INFO][5663] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" iface="eth0" netns="" Jul 11 00:30:30.960032 containerd[1580]: 2025-07-11 00:30:30.923 [INFO][5663] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" Jul 11 00:30:30.960032 containerd[1580]: 2025-07-11 00:30:30.923 [INFO][5663] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" Jul 11 00:30:30.960032 containerd[1580]: 2025-07-11 00:30:30.947 [INFO][5672] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" HandleID="k8s-pod-network.a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" Workload="localhost-k8s-whisker--858c6d685b--4rt4c-eth0" Jul 11 00:30:30.960032 containerd[1580]: 2025-07-11 00:30:30.947 [INFO][5672] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:30.960032 containerd[1580]: 2025-07-11 00:30:30.947 [INFO][5672] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:30:30.960032 containerd[1580]: 2025-07-11 00:30:30.953 [WARNING][5672] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" HandleID="k8s-pod-network.a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" Workload="localhost-k8s-whisker--858c6d685b--4rt4c-eth0" Jul 11 00:30:30.960032 containerd[1580]: 2025-07-11 00:30:30.953 [INFO][5672] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" HandleID="k8s-pod-network.a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" Workload="localhost-k8s-whisker--858c6d685b--4rt4c-eth0" Jul 11 00:30:30.960032 containerd[1580]: 2025-07-11 00:30:30.954 [INFO][5672] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:30.960032 containerd[1580]: 2025-07-11 00:30:30.957 [INFO][5663] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2" Jul 11 00:30:30.960415 containerd[1580]: time="2025-07-11T00:30:30.960045696Z" level=info msg="TearDown network for sandbox \"a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2\" successfully" Jul 11 00:30:31.012931 containerd[1580]: time="2025-07-11T00:30:31.012842866Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:30:31.012931 containerd[1580]: time="2025-07-11T00:30:31.012940410Z" level=info msg="RemovePodSandbox \"a9114d124fdc5b09dbe9e80848aefe797a65977c4825e03078d9c96b3aaa24a2\" returns successfully" Jul 11 00:30:31.013805 containerd[1580]: time="2025-07-11T00:30:31.013642244Z" level=info msg="StopPodSandbox for \"b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f\"" Jul 11 00:30:31.118913 containerd[1580]: 2025-07-11 00:30:31.057 [WARNING][5689] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--58z5c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d8669555-3bbe-4e3a-b6d1-dde636ebecce", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7", Pod:"coredns-7c65d6cfc9-58z5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali03076fb1967", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:31.118913 containerd[1580]: 2025-07-11 00:30:31.057 [INFO][5689] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" Jul 11 00:30:31.118913 containerd[1580]: 2025-07-11 00:30:31.057 [INFO][5689] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" iface="eth0" netns="" Jul 11 00:30:31.118913 containerd[1580]: 2025-07-11 00:30:31.057 [INFO][5689] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" Jul 11 00:30:31.118913 containerd[1580]: 2025-07-11 00:30:31.057 [INFO][5689] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" Jul 11 00:30:31.118913 containerd[1580]: 2025-07-11 00:30:31.100 [INFO][5698] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" HandleID="k8s-pod-network.b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" Workload="localhost-k8s-coredns--7c65d6cfc9--58z5c-eth0" Jul 11 00:30:31.118913 containerd[1580]: 2025-07-11 00:30:31.100 [INFO][5698] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:31.118913 containerd[1580]: 2025-07-11 00:30:31.100 [INFO][5698] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:30:31.118913 containerd[1580]: 2025-07-11 00:30:31.109 [WARNING][5698] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" HandleID="k8s-pod-network.b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" Workload="localhost-k8s-coredns--7c65d6cfc9--58z5c-eth0" Jul 11 00:30:31.118913 containerd[1580]: 2025-07-11 00:30:31.109 [INFO][5698] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" HandleID="k8s-pod-network.b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" Workload="localhost-k8s-coredns--7c65d6cfc9--58z5c-eth0" Jul 11 00:30:31.118913 containerd[1580]: 2025-07-11 00:30:31.111 [INFO][5698] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:31.118913 containerd[1580]: 2025-07-11 00:30:31.115 [INFO][5689] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" Jul 11 00:30:31.118913 containerd[1580]: time="2025-07-11T00:30:31.118660338Z" level=info msg="TearDown network for sandbox \"b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f\" successfully" Jul 11 00:30:31.118913 containerd[1580]: time="2025-07-11T00:30:31.118710494Z" level=info msg="StopPodSandbox for \"b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f\" returns successfully" Jul 11 00:30:31.119559 containerd[1580]: time="2025-07-11T00:30:31.119221868Z" level=info msg="RemovePodSandbox for \"b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f\"" Jul 11 00:30:31.119559 containerd[1580]: time="2025-07-11T00:30:31.119260631Z" level=info msg="Forcibly stopping sandbox \"b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f\"" Jul 11 00:30:31.198748 containerd[1580]: 2025-07-11 00:30:31.159 [WARNING][5717] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--58z5c-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d8669555-3bbe-4e3a-b6d1-dde636ebecce", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2aa7a8fd128e9d7996c0f4b245548e549b164d8d9709060328ced54a27b9dbf7", Pod:"coredns-7c65d6cfc9-58z5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali03076fb1967", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:31.198748 containerd[1580]: 2025-07-11 00:30:31.159 [INFO][5717] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" Jul 11 00:30:31.198748 containerd[1580]: 2025-07-11 00:30:31.159 [INFO][5717] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" iface="eth0" netns="" Jul 11 00:30:31.198748 containerd[1580]: 2025-07-11 00:30:31.159 [INFO][5717] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" Jul 11 00:30:31.198748 containerd[1580]: 2025-07-11 00:30:31.159 [INFO][5717] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" Jul 11 00:30:31.198748 containerd[1580]: 2025-07-11 00:30:31.184 [INFO][5729] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" HandleID="k8s-pod-network.b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" Workload="localhost-k8s-coredns--7c65d6cfc9--58z5c-eth0" Jul 11 00:30:31.198748 containerd[1580]: 2025-07-11 00:30:31.184 [INFO][5729] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:31.198748 containerd[1580]: 2025-07-11 00:30:31.184 [INFO][5729] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:30:31.198748 containerd[1580]: 2025-07-11 00:30:31.191 [WARNING][5729] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" HandleID="k8s-pod-network.b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" Workload="localhost-k8s-coredns--7c65d6cfc9--58z5c-eth0" Jul 11 00:30:31.198748 containerd[1580]: 2025-07-11 00:30:31.191 [INFO][5729] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" HandleID="k8s-pod-network.b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" Workload="localhost-k8s-coredns--7c65d6cfc9--58z5c-eth0" Jul 11 00:30:31.198748 containerd[1580]: 2025-07-11 00:30:31.193 [INFO][5729] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:31.198748 containerd[1580]: 2025-07-11 00:30:31.195 [INFO][5717] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f" Jul 11 00:30:31.199196 containerd[1580]: time="2025-07-11T00:30:31.198799251Z" level=info msg="TearDown network for sandbox \"b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f\" successfully" Jul 11 00:30:31.336155 containerd[1580]: time="2025-07-11T00:30:31.336095390Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:30:31.336320 containerd[1580]: time="2025-07-11T00:30:31.336205548Z" level=info msg="RemovePodSandbox \"b518a0772da420583e69f2ec0d68df8cbe6d3bf74f55921fe42faf4388f8d80f\" returns successfully" Jul 11 00:30:31.337220 containerd[1580]: time="2025-07-11T00:30:31.336983374Z" level=info msg="StopPodSandbox for \"1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0\"" Jul 11 00:30:31.404078 systemd[1]: Started sshd@11-10.0.0.133:22-10.0.0.1:52554.service - OpenSSH per-connection server daemon (10.0.0.1:52554). Jul 11 00:30:31.430982 containerd[1580]: 2025-07-11 00:30:31.383 [WARNING][5749] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--755966d9f5--pjj4v-eth0", GenerateName:"calico-kube-controllers-755966d9f5-", Namespace:"calico-system", SelfLink:"", UID:"63fd5fca-6d0d-445a-b193-a8d57e21492f", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"755966d9f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd", Pod:"calico-kube-controllers-755966d9f5-pjj4v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3769c44865", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:31.430982 containerd[1580]: 2025-07-11 00:30:31.383 [INFO][5749] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" Jul 11 00:30:31.430982 containerd[1580]: 2025-07-11 00:30:31.383 [INFO][5749] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" iface="eth0" netns="" Jul 11 00:30:31.430982 containerd[1580]: 2025-07-11 00:30:31.383 [INFO][5749] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" Jul 11 00:30:31.430982 containerd[1580]: 2025-07-11 00:30:31.383 [INFO][5749] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" Jul 11 00:30:31.430982 containerd[1580]: 2025-07-11 00:30:31.413 [INFO][5758] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" HandleID="k8s-pod-network.1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" Workload="localhost-k8s-calico--kube--controllers--755966d9f5--pjj4v-eth0" Jul 11 00:30:31.430982 containerd[1580]: 2025-07-11 00:30:31.414 [INFO][5758] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:31.430982 containerd[1580]: 2025-07-11 00:30:31.414 [INFO][5758] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:30:31.430982 containerd[1580]: 2025-07-11 00:30:31.421 [WARNING][5758] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" HandleID="k8s-pod-network.1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" Workload="localhost-k8s-calico--kube--controllers--755966d9f5--pjj4v-eth0" Jul 11 00:30:31.430982 containerd[1580]: 2025-07-11 00:30:31.421 [INFO][5758] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" HandleID="k8s-pod-network.1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" Workload="localhost-k8s-calico--kube--controllers--755966d9f5--pjj4v-eth0" Jul 11 00:30:31.430982 containerd[1580]: 2025-07-11 00:30:31.423 [INFO][5758] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:31.430982 containerd[1580]: 2025-07-11 00:30:31.427 [INFO][5749] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" Jul 11 00:30:31.431533 containerd[1580]: time="2025-07-11T00:30:31.430993142Z" level=info msg="TearDown network for sandbox \"1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0\" successfully" Jul 11 00:30:31.431533 containerd[1580]: time="2025-07-11T00:30:31.431029521Z" level=info msg="StopPodSandbox for \"1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0\" returns successfully" Jul 11 00:30:31.431815 containerd[1580]: time="2025-07-11T00:30:31.431788742Z" level=info msg="RemovePodSandbox for \"1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0\"" Jul 11 00:30:31.431860 containerd[1580]: time="2025-07-11T00:30:31.431825170Z" level=info msg="Forcibly stopping sandbox \"1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0\"" Jul 11 00:30:31.486707 sshd[5764]: Accepted publickey for core from 10.0.0.1 port 52554 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:30:31.488958 sshd[5764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:30:31.497555 systemd-logind[1559]: New session 12 of user core. Jul 11 00:30:31.513622 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 11 00:30:31.528137 containerd[1580]: 2025-07-11 00:30:31.481 [WARNING][5777] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--755966d9f5--pjj4v-eth0", GenerateName:"calico-kube-controllers-755966d9f5-", Namespace:"calico-system", SelfLink:"", UID:"63fd5fca-6d0d-445a-b193-a8d57e21492f", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"755966d9f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0ecf3c3ffbb08ed14f0cef3281226f2558d84bb92685cbe57017a394c2f76cd", Pod:"calico-kube-controllers-755966d9f5-pjj4v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid3769c44865", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:31.528137 containerd[1580]: 2025-07-11 00:30:31.481 [INFO][5777] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" Jul 11 00:30:31.528137 containerd[1580]: 2025-07-11 00:30:31.481 [INFO][5777] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" iface="eth0" netns="" Jul 11 00:30:31.528137 containerd[1580]: 2025-07-11 00:30:31.481 [INFO][5777] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" Jul 11 00:30:31.528137 containerd[1580]: 2025-07-11 00:30:31.481 [INFO][5777] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" Jul 11 00:30:31.528137 containerd[1580]: 2025-07-11 00:30:31.510 [INFO][5786] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" HandleID="k8s-pod-network.1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" Workload="localhost-k8s-calico--kube--controllers--755966d9f5--pjj4v-eth0" Jul 11 00:30:31.528137 containerd[1580]: 2025-07-11 00:30:31.510 [INFO][5786] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:31.528137 containerd[1580]: 2025-07-11 00:30:31.510 [INFO][5786] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:30:31.528137 containerd[1580]: 2025-07-11 00:30:31.517 [WARNING][5786] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" HandleID="k8s-pod-network.1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" Workload="localhost-k8s-calico--kube--controllers--755966d9f5--pjj4v-eth0" Jul 11 00:30:31.528137 containerd[1580]: 2025-07-11 00:30:31.517 [INFO][5786] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" HandleID="k8s-pod-network.1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" Workload="localhost-k8s-calico--kube--controllers--755966d9f5--pjj4v-eth0" Jul 11 00:30:31.528137 containerd[1580]: 2025-07-11 00:30:31.520 [INFO][5786] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:31.528137 containerd[1580]: 2025-07-11 00:30:31.524 [INFO][5777] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0" Jul 11 00:30:31.528733 containerd[1580]: time="2025-07-11T00:30:31.528197072Z" level=info msg="TearDown network for sandbox \"1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0\" successfully" Jul 11 00:30:32.182080 containerd[1580]: time="2025-07-11T00:30:32.181994310Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:30:32.182644 containerd[1580]: time="2025-07-11T00:30:32.182105339Z" level=info msg="RemovePodSandbox \"1f3d2f96b40c4e6f975b11d80c7eb97250e41522b74bfacc75ed75cf27d803f0\" returns successfully" Jul 11 00:30:32.183720 containerd[1580]: time="2025-07-11T00:30:32.183317375Z" level=info msg="StopPodSandbox for \"ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f\"" Jul 11 00:30:32.209972 sshd[5764]: pam_unix(sshd:session): session closed for user core Jul 11 00:30:32.215477 systemd[1]: sshd@11-10.0.0.133:22-10.0.0.1:52554.service: Deactivated successfully. Jul 11 00:30:32.215512 systemd-logind[1559]: Session 12 logged out. Waiting for processes to exit. Jul 11 00:30:32.223993 systemd[1]: session-12.scope: Deactivated successfully. Jul 11 00:30:32.225635 systemd-logind[1559]: Removed session 12. Jul 11 00:30:32.286604 containerd[1580]: 2025-07-11 00:30:32.238 [WARNING][5814] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--6zs7m-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"02706169-3e5d-4d0e-89ad-f0621f887573", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db", Pod:"coredns-7c65d6cfc9-6zs7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali125eb587a2c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:32.286604 containerd[1580]: 2025-07-11 00:30:32.238 [INFO][5814] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" Jul 11 00:30:32.286604 containerd[1580]: 2025-07-11 00:30:32.238 [INFO][5814] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" iface="eth0" netns="" Jul 11 00:30:32.286604 containerd[1580]: 2025-07-11 00:30:32.238 [INFO][5814] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" Jul 11 00:30:32.286604 containerd[1580]: 2025-07-11 00:30:32.238 [INFO][5814] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" Jul 11 00:30:32.286604 containerd[1580]: 2025-07-11 00:30:32.272 [INFO][5830] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" HandleID="k8s-pod-network.ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" Workload="localhost-k8s-coredns--7c65d6cfc9--6zs7m-eth0" Jul 11 00:30:32.286604 containerd[1580]: 2025-07-11 00:30:32.272 [INFO][5830] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:32.286604 containerd[1580]: 2025-07-11 00:30:32.272 [INFO][5830] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:30:32.286604 containerd[1580]: 2025-07-11 00:30:32.279 [WARNING][5830] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" HandleID="k8s-pod-network.ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" Workload="localhost-k8s-coredns--7c65d6cfc9--6zs7m-eth0" Jul 11 00:30:32.286604 containerd[1580]: 2025-07-11 00:30:32.279 [INFO][5830] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" HandleID="k8s-pod-network.ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" Workload="localhost-k8s-coredns--7c65d6cfc9--6zs7m-eth0" Jul 11 00:30:32.286604 containerd[1580]: 2025-07-11 00:30:32.280 [INFO][5830] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:32.286604 containerd[1580]: 2025-07-11 00:30:32.283 [INFO][5814] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" Jul 11 00:30:32.287169 containerd[1580]: time="2025-07-11T00:30:32.286645882Z" level=info msg="TearDown network for sandbox \"ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f\" successfully" Jul 11 00:30:32.287169 containerd[1580]: time="2025-07-11T00:30:32.286699053Z" level=info msg="StopPodSandbox for \"ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f\" returns successfully" Jul 11 00:30:32.287309 containerd[1580]: time="2025-07-11T00:30:32.287284536Z" level=info msg="RemovePodSandbox for \"ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f\"" Jul 11 00:30:32.287382 containerd[1580]: time="2025-07-11T00:30:32.287317228Z" level=info msg="Forcibly stopping sandbox \"ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f\"" Jul 11 00:30:32.385584 containerd[1580]: 2025-07-11 00:30:32.341 [WARNING][5848] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--6zs7m-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"02706169-3e5d-4d0e-89ad-f0621f887573", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 29, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"38a2a3bb505ed3c2d3bd616037ab4a1fb982d1b8d393869c9f9a5749e30f38db", Pod:"coredns-7c65d6cfc9-6zs7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali125eb587a2c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:30:32.385584 containerd[1580]: 2025-07-11 00:30:32.341 [INFO][5848] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" Jul 11 00:30:32.385584 containerd[1580]: 2025-07-11 00:30:32.341 [INFO][5848] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" iface="eth0" netns="" Jul 11 00:30:32.385584 containerd[1580]: 2025-07-11 00:30:32.341 [INFO][5848] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" Jul 11 00:30:32.385584 containerd[1580]: 2025-07-11 00:30:32.341 [INFO][5848] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" Jul 11 00:30:32.385584 containerd[1580]: 2025-07-11 00:30:32.368 [INFO][5856] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" HandleID="k8s-pod-network.ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" Workload="localhost-k8s-coredns--7c65d6cfc9--6zs7m-eth0" Jul 11 00:30:32.385584 containerd[1580]: 2025-07-11 00:30:32.368 [INFO][5856] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:30:32.385584 containerd[1580]: 2025-07-11 00:30:32.368 [INFO][5856] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:30:32.385584 containerd[1580]: 2025-07-11 00:30:32.377 [WARNING][5856] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" HandleID="k8s-pod-network.ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" Workload="localhost-k8s-coredns--7c65d6cfc9--6zs7m-eth0" Jul 11 00:30:32.385584 containerd[1580]: 2025-07-11 00:30:32.377 [INFO][5856] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" HandleID="k8s-pod-network.ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" Workload="localhost-k8s-coredns--7c65d6cfc9--6zs7m-eth0" Jul 11 00:30:32.385584 containerd[1580]: 2025-07-11 00:30:32.379 [INFO][5856] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:30:32.385584 containerd[1580]: 2025-07-11 00:30:32.382 [INFO][5848] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f" Jul 11 00:30:32.386139 containerd[1580]: time="2025-07-11T00:30:32.385655877Z" level=info msg="TearDown network for sandbox \"ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f\" successfully" Jul 11 00:30:32.391033 containerd[1580]: time="2025-07-11T00:30:32.390821357Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:30:32.391033 containerd[1580]: time="2025-07-11T00:30:32.390895146Z" level=info msg="RemovePodSandbox \"ea2c4d007d20f0815b76fdd1b2d47710d98cbfbd4bc0587fbd216172c1be807f\" returns successfully" Jul 11 00:30:33.828366 kubelet[2768]: E0711 00:30:33.828323 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:30:34.815840 containerd[1580]: time="2025-07-11T00:30:34.815764392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:30:34.849123 containerd[1580]: time="2025-07-11T00:30:34.849044846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 11 00:30:34.913131 containerd[1580]: time="2025-07-11T00:30:34.913043111Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:30:34.937735 containerd[1580]: time="2025-07-11T00:30:34.937633272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:30:34.938669 containerd[1580]: time="2025-07-11T00:30:34.938595285Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 7.0008755s" Jul 11 00:30:34.938669 containerd[1580]: 
time="2025-07-11T00:30:34.938647894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 11 00:30:34.940050 containerd[1580]: time="2025-07-11T00:30:34.939995264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 00:30:34.941212 containerd[1580]: time="2025-07-11T00:30:34.941182422Z" level=info msg="CreateContainer within sandbox \"50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:30:35.628243 containerd[1580]: time="2025-07-11T00:30:35.628175288Z" level=info msg="CreateContainer within sandbox \"50f2678cb037c92799863ecb4b3c07812f86871095c53dc50a6c24ab769bbca2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f2744d24c77d1ebd1319ac2ae6d4306d0e29c274041183b0b1ca3862aaa30e1b\"" Jul 11 00:30:35.629016 containerd[1580]: time="2025-07-11T00:30:35.628977901Z" level=info msg="StartContainer for \"f2744d24c77d1ebd1319ac2ae6d4306d0e29c274041183b0b1ca3862aaa30e1b\"" Jul 11 00:30:35.825666 containerd[1580]: time="2025-07-11T00:30:35.825584401Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:30:35.826482 containerd[1580]: time="2025-07-11T00:30:35.825618085Z" level=info msg="StartContainer for \"f2744d24c77d1ebd1319ac2ae6d4306d0e29c274041183b0b1ca3862aaa30e1b\" returns successfully" Jul 11 00:30:35.830371 containerd[1580]: time="2025-07-11T00:30:35.829634116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 11 00:30:35.834307 containerd[1580]: time="2025-07-11T00:30:35.834155058Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 894.103057ms" Jul 11 00:30:35.834307 containerd[1580]: time="2025-07-11T00:30:35.834194512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 11 00:30:35.835474 containerd[1580]: time="2025-07-11T00:30:35.835429802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 11 00:30:35.837149 containerd[1580]: time="2025-07-11T00:30:35.837094700Z" level=info msg="CreateContainer within sandbox \"663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:30:35.869753 containerd[1580]: time="2025-07-11T00:30:35.869611410Z" level=info msg="CreateContainer within sandbox \"663b6ed0560fec3c1591fde420d34bac1f64067a66dc4a66b684c1c51451d52b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7801db760078d0e1d13dd8e9de1d4d94094eaf767c6a33b3de3fbad8ae928cf6\"" Jul 11 00:30:35.871629 containerd[1580]: time="2025-07-11T00:30:35.871572898Z" level=info msg="StartContainer for \"7801db760078d0e1d13dd8e9de1d4d94094eaf767c6a33b3de3fbad8ae928cf6\"" Jul 11 00:30:35.971116 containerd[1580]: time="2025-07-11T00:30:35.971047416Z" level=info msg="StartContainer for 
\"7801db760078d0e1d13dd8e9de1d4d94094eaf767c6a33b3de3fbad8ae928cf6\" returns successfully" Jul 11 00:30:36.836420 kubelet[2768]: I0711 00:30:36.836345 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9bdd5cc8b-s5k2t" podStartSLOduration=41.842631578 podStartE2EDuration="54.836316254s" podCreationTimestamp="2025-07-11 00:29:42 +0000 UTC" firstStartedPulling="2025-07-11 00:30:21.945969285 +0000 UTC m=+53.251459430" lastFinishedPulling="2025-07-11 00:30:34.939653961 +0000 UTC m=+66.245144106" observedRunningTime="2025-07-11 00:30:36.792386703 +0000 UTC m=+68.097876848" watchObservedRunningTime="2025-07-11 00:30:36.836316254 +0000 UTC m=+68.141806399" Jul 11 00:30:37.002357 kubelet[2768]: I0711 00:30:37.002275 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9bdd5cc8b-h8l9c" podStartSLOduration=41.897708539999996 podStartE2EDuration="55.002238732s" podCreationTimestamp="2025-07-11 00:29:42 +0000 UTC" firstStartedPulling="2025-07-11 00:30:22.73057605 +0000 UTC m=+54.036066195" lastFinishedPulling="2025-07-11 00:30:35.835106242 +0000 UTC m=+67.140596387" observedRunningTime="2025-07-11 00:30:36.999601791 +0000 UTC m=+68.305091936" watchObservedRunningTime="2025-07-11 00:30:37.002238732 +0000 UTC m=+68.307728887" Jul 11 00:30:37.230141 systemd[1]: Started sshd@12-10.0.0.133:22-10.0.0.1:39336.service - OpenSSH per-connection server daemon (10.0.0.1:39336). Jul 11 00:30:37.350797 sshd[5974]: Accepted publickey for core from 10.0.0.1 port 39336 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:30:37.354586 sshd[5974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:30:37.362711 systemd-logind[1559]: New session 13 of user core. Jul 11 00:30:37.372739 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 11 00:30:37.585797 kubelet[2768]: I0711 00:30:37.585374 2768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:30:37.633101 sshd[5974]: pam_unix(sshd:session): session closed for user core Jul 11 00:30:37.638630 systemd[1]: sshd@12-10.0.0.133:22-10.0.0.1:39336.service: Deactivated successfully. Jul 11 00:30:37.643494 systemd[1]: session-13.scope: Deactivated successfully. Jul 11 00:30:37.644446 systemd-logind[1559]: Session 13 logged out. Waiting for processes to exit. Jul 11 00:30:37.648489 systemd-logind[1559]: Removed session 13. 
Jul 11 00:30:38.587616 kubelet[2768]: I0711 00:30:38.587505 2768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:30:39.118160 containerd[1580]: time="2025-07-11T00:30:39.118104953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:30:39.138216 containerd[1580]: time="2025-07-11T00:30:39.138121590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 11 00:30:39.222119 containerd[1580]: time="2025-07-11T00:30:39.221049873Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:30:39.278994 containerd[1580]: time="2025-07-11T00:30:39.278929304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:30:39.281126 containerd[1580]: time="2025-07-11T00:30:39.281081210Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 3.445600523s" Jul 11 00:30:39.281242 containerd[1580]: time="2025-07-11T00:30:39.281134520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 11 00:30:39.285479 containerd[1580]: time="2025-07-11T00:30:39.285449173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 11 00:30:39.287096 containerd[1580]: time="2025-07-11T00:30:39.287064477Z" level=info msg="CreateContainer within sandbox \"7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 11 00:30:40.155125 kernel: hrtimer: interrupt took 10490021 ns Jul 11 00:30:40.685064 containerd[1580]: time="2025-07-11T00:30:40.684669659Z" level=info msg="CreateContainer within sandbox \"7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f36f6ba5a4db986fdd1bd27984b97b7e0c52ffa08155dfa2cb27337b04b4142e\"" Jul 11 00:30:40.691457 containerd[1580]: time="2025-07-11T00:30:40.688300573Z" level=info msg="StartContainer for \"f36f6ba5a4db986fdd1bd27984b97b7e0c52ffa08155dfa2cb27337b04b4142e\"" Jul 11 00:30:41.092340 containerd[1580]: time="2025-07-11T00:30:41.092180882Z" level=info msg="StartContainer for \"f36f6ba5a4db986fdd1bd27984b97b7e0c52ffa08155dfa2cb27337b04b4142e\" returns successfully" Jul 11 00:30:42.650114 systemd[1]: Started sshd@13-10.0.0.133:22-10.0.0.1:39352.service - OpenSSH per-connection server daemon (10.0.0.1:39352). Jul 11 00:30:42.694873 sshd[6073]: Accepted publickey for core from 10.0.0.1 port 39352 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:30:42.696911 sshd[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:30:42.702265 systemd-logind[1559]: New session 14 of user core. Jul 11 00:30:42.715171 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jul 11 00:30:42.829490 kubelet[2768]: E0711 00:30:42.829407 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:30:44.428711 sshd[6073]: pam_unix(sshd:session): session closed for user core Jul 11 00:30:44.441514 systemd[1]: Started sshd@14-10.0.0.133:22-10.0.0.1:39368.service - OpenSSH per-connection server daemon (10.0.0.1:39368). Jul 11 00:30:44.442231 systemd[1]: sshd@13-10.0.0.133:22-10.0.0.1:39352.service: Deactivated successfully. Jul 11 00:30:44.449172 systemd-logind[1559]: Session 14 logged out. Waiting for processes to exit. Jul 11 00:30:44.450197 systemd[1]: session-14.scope: Deactivated successfully. Jul 11 00:30:44.453066 systemd-logind[1559]: Removed session 14. Jul 11 00:30:44.478173 sshd[6088]: Accepted publickey for core from 10.0.0.1 port 39368 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:30:44.480279 sshd[6088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:30:44.485794 systemd-logind[1559]: New session 15 of user core. Jul 11 00:30:44.492064 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 11 00:30:44.705028 systemd-resolved[1473]: Under memory pressure, flushing caches. Jul 11 00:30:44.720046 systemd-journald[1160]: Under memory pressure, flushing caches. Jul 11 00:30:44.705073 systemd-resolved[1473]: Flushed all caches. Jul 11 00:30:44.893831 sshd[6088]: pam_unix(sshd:session): session closed for user core Jul 11 00:30:44.908118 systemd[1]: Started sshd@15-10.0.0.133:22-10.0.0.1:39370.service - OpenSSH per-connection server daemon (10.0.0.1:39370). Jul 11 00:30:44.908737 systemd[1]: sshd@14-10.0.0.133:22-10.0.0.1:39368.service: Deactivated successfully. Jul 11 00:30:44.913418 systemd[1]: session-15.scope: Deactivated successfully. Jul 11 00:30:44.914630 systemd-logind[1559]: Session 15 logged out. Waiting for processes to exit. Jul 11 00:30:44.915659 systemd-logind[1559]: Removed session 15. Jul 11 00:30:44.944958 sshd[6102]: Accepted publickey for core from 10.0.0.1 port 39370 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:30:44.946930 sshd[6102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:30:44.954155 systemd-logind[1559]: New session 16 of user core. Jul 11 00:30:44.964229 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 11 00:30:45.887437 sshd[6102]: pam_unix(sshd:session): session closed for user core Jul 11 00:30:45.892381 systemd[1]: sshd@15-10.0.0.133:22-10.0.0.1:39370.service: Deactivated successfully. Jul 11 00:30:45.895857 systemd-logind[1559]: Session 16 logged out. Waiting for processes to exit. Jul 11 00:30:45.895877 systemd[1]: session-16.scope: Deactivated successfully. Jul 11 00:30:45.897264 systemd-logind[1559]: Removed session 16. Jul 11 00:30:46.287072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2891715650.mount: Deactivated successfully. Jul 11 00:30:48.738993 systemd-resolved[1473]: Under memory pressure, flushing caches. Jul 11 00:30:48.739003 systemd-resolved[1473]: Flushed all caches. Jul 11 00:30:48.750722 systemd-journald[1160]: Under memory pressure, flushing caches. 
Jul 11 00:30:49.234021 containerd[1580]: time="2025-07-11T00:30:49.232298294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:30:49.234651 containerd[1580]: time="2025-07-11T00:30:49.234589520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 11 00:30:49.238820 containerd[1580]: time="2025-07-11T00:30:49.238514095Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:30:49.243595 containerd[1580]: time="2025-07-11T00:30:49.243426951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:30:49.250578 containerd[1580]: time="2025-07-11T00:30:49.244447373Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 9.958816828s" Jul 11 00:30:49.250578 containerd[1580]: time="2025-07-11T00:30:49.244513347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 11 00:30:49.250578 containerd[1580]: time="2025-07-11T00:30:49.248075018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 11 00:30:49.250578 containerd[1580]: time="2025-07-11T00:30:49.249902741Z" level=info msg="CreateContainer within sandbox \"825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 11 00:30:49.277312 containerd[1580]: time="2025-07-11T00:30:49.277083671Z" level=info msg="CreateContainer within sandbox \"825ac842b82bd4e22e42b7c3ba125cd9fc4d0f35f1e4c9c2528eb315eaeb535a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"c6d25411a730fcc82dbc514d21dbce797d6d34d14f22474f0daa2d421c730e0e\"" Jul 11 00:30:49.278108 containerd[1580]: time="2025-07-11T00:30:49.278073054Z" level=info msg="StartContainer for \"c6d25411a730fcc82dbc514d21dbce797d6d34d14f22474f0daa2d421c730e0e\"" Jul 11 00:30:49.442847 containerd[1580]: time="2025-07-11T00:30:49.442793992Z" level=info msg="StartContainer for \"c6d25411a730fcc82dbc514d21dbce797d6d34d14f22474f0daa2d421c730e0e\" returns successfully" Jul 11 00:30:50.785180 systemd-resolved[1473]: Under memory pressure, flushing caches. Jul 11 00:30:50.787262 systemd-journald[1160]: Under memory pressure, flushing caches. Jul 11 00:30:50.785207 systemd-resolved[1473]: Flushed all caches. Jul 11 00:30:50.895051 systemd[1]: Started sshd@16-10.0.0.133:22-10.0.0.1:39858.service - OpenSSH per-connection server daemon (10.0.0.1:39858). Jul 11 00:30:50.963974 sshd[6216]: Accepted publickey for core from 10.0.0.1 port 39858 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:30:50.966922 sshd[6216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:30:50.972160 systemd-logind[1559]: New session 17 of user core. 
Jul 11 00:30:50.983010 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 11 00:30:51.574524 sshd[6216]: pam_unix(sshd:session): session closed for user core Jul 11 00:30:51.579846 systemd[1]: sshd@16-10.0.0.133:22-10.0.0.1:39858.service: Deactivated successfully. Jul 11 00:30:51.583274 systemd-logind[1559]: Session 17 logged out. Waiting for processes to exit. Jul 11 00:30:51.583353 systemd[1]: session-17.scope: Deactivated successfully. Jul 11 00:30:51.584548 systemd-logind[1559]: Removed session 17. Jul 11 00:30:52.831076 kubelet[2768]: E0711 00:30:52.830975 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:30:52.833522 systemd-resolved[1473]: Under memory pressure, flushing caches. Jul 11 00:30:52.833560 systemd-resolved[1473]: Flushed all caches. Jul 11 00:30:52.835443 systemd-journald[1160]: Under memory pressure, flushing caches. Jul 11 00:30:54.071094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount159335829.mount: Deactivated successfully. Jul 11 00:30:55.108446 containerd[1580]: time="2025-07-11T00:30:55.108363790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:30:55.110146 containerd[1580]: time="2025-07-11T00:30:55.110026362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 11 00:30:55.113113 containerd[1580]: time="2025-07-11T00:30:55.113022475Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:30:55.145544 containerd[1580]: time="2025-07-11T00:30:55.145463604Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:30:55.156779 containerd[1580]: time="2025-07-11T00:30:55.156531792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 5.908405979s" Jul 11 00:30:55.156779 containerd[1580]: time="2025-07-11T00:30:55.156594912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 11 00:30:55.161338 containerd[1580]: time="2025-07-11T00:30:55.158459023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 11 00:30:55.168061 containerd[1580]: time="2025-07-11T00:30:55.167999333Z" level=info msg="CreateContainer within sandbox \"d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 11 00:30:55.204500 containerd[1580]: time="2025-07-11T00:30:55.204426570Z" level=info msg="CreateContainer within sandbox \"d95078762e181d0e801244db10a263a0a7de8aba1f9d27a36e59edce855dbc83\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id 
\"8ad465caa188092ee8fb0e74f1d97fb3c5d689dbfd51c68ca8abb456c915ebb6\"" Jul 11 00:30:55.205332 containerd[1580]: time="2025-07-11T00:30:55.205282341Z" level=info msg="StartContainer for \"8ad465caa188092ee8fb0e74f1d97fb3c5d689dbfd51c68ca8abb456c915ebb6\"" Jul 11 00:30:55.329793 containerd[1580]: time="2025-07-11T00:30:55.329732964Z" level=info msg="StartContainer for \"8ad465caa188092ee8fb0e74f1d97fb3c5d689dbfd51c68ca8abb456c915ebb6\" returns successfully" Jul 11 00:30:55.849123 kubelet[2768]: I0711 00:30:55.849042 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-c5b9c6488-2x65p" podStartSLOduration=5.089872516 podStartE2EDuration="38.845044463s" podCreationTimestamp="2025-07-11 00:30:17 +0000 UTC" firstStartedPulling="2025-07-11 00:30:21.402358176 +0000 UTC m=+52.707848321" lastFinishedPulling="2025-07-11 00:30:55.157530123 +0000 UTC m=+86.463020268" observedRunningTime="2025-07-11 00:30:55.84478772 +0000 UTC m=+87.150277865" watchObservedRunningTime="2025-07-11 00:30:55.845044463 +0000 UTC m=+87.150534618" Jul 11 00:30:55.849879 kubelet[2768]: I0711 00:30:55.849381 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-lld94" podStartSLOduration=44.769382652 podStartE2EDuration="1m10.849367877s" podCreationTimestamp="2025-07-11 00:29:45 +0000 UTC" firstStartedPulling="2025-07-11 00:30:23.166722136 +0000 UTC m=+54.472212281" lastFinishedPulling="2025-07-11 00:30:49.246707361 +0000 UTC m=+80.552197506" observedRunningTime="2025-07-11 00:30:49.755577902 +0000 UTC m=+81.061068047" watchObservedRunningTime="2025-07-11 00:30:55.849367877 +0000 UTC m=+87.154858022" Jul 11 00:30:56.587410 systemd[1]: Started sshd@17-10.0.0.133:22-10.0.0.1:57778.service - OpenSSH per-connection server daemon (10.0.0.1:57778). Jul 11 00:30:56.700470 sshd[6287]: Accepted publickey for core from 10.0.0.1 port 57778 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:30:56.702287 sshd[6287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:30:56.706738 systemd-logind[1559]: New session 18 of user core. Jul 11 00:30:56.716003 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 11 00:30:57.876587 kubelet[2768]: E0711 00:30:57.876460 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:30:57.899469 sshd[6287]: pam_unix(sshd:session): session closed for user core Jul 11 00:30:57.904877 systemd[1]: sshd@17-10.0.0.133:22-10.0.0.1:57778.service: Deactivated successfully. Jul 11 00:30:57.908473 systemd[1]: session-18.scope: Deactivated successfully. Jul 11 00:30:57.908591 systemd-logind[1559]: Session 18 logged out. Waiting for processes to exit. Jul 11 00:30:57.909597 systemd-logind[1559]: Removed session 18. Jul 11 00:30:58.785127 systemd-resolved[1473]: Under memory pressure, flushing caches. Jul 11 00:30:58.786945 systemd-journald[1160]: Under memory pressure, flushing caches. Jul 11 00:30:58.785165 systemd-resolved[1473]: Flushed all caches. 
Jul 11 00:31:02.152558 containerd[1580]: time="2025-07-11T00:31:02.152438810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:31:02.154742 containerd[1580]: time="2025-07-11T00:31:02.154647639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 11 00:31:02.175871 containerd[1580]: time="2025-07-11T00:31:02.175783506Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:31:02.179935 containerd[1580]: time="2025-07-11T00:31:02.179857027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:31:02.180670 containerd[1580]: time="2025-07-11T00:31:02.180615606Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 7.022117679s" Jul 11 00:31:02.180670 containerd[1580]: time="2025-07-11T00:31:02.180663796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 11 00:31:02.196657 containerd[1580]: time="2025-07-11T00:31:02.196568419Z" level=info msg="CreateContainer within sandbox \"7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 11 00:31:02.235862 containerd[1580]: time="2025-07-11T00:31:02.235201899Z" level=info msg="CreateContainer within sandbox \"7d1b8646f67133386f2d8af5f9167acffcd528c80aa926f7300d2ae9b6f7b538\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b0e950f637a7e39032b29461e3b02debac6fb5328d2610be5e141e94b377713a\"" Jul 11 00:31:02.236487 containerd[1580]: time="2025-07-11T00:31:02.236433819Z" level=info msg="StartContainer for \"b0e950f637a7e39032b29461e3b02debac6fb5328d2610be5e141e94b377713a\"" Jul 11 00:31:02.328248 containerd[1580]: time="2025-07-11T00:31:02.328196366Z" level=info msg="StartContainer for \"b0e950f637a7e39032b29461e3b02debac6fb5328d2610be5e141e94b377713a\" returns successfully" Jul 11 00:31:02.683884 kubelet[2768]: I0711 00:31:02.683793 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-pd2rq" podStartSLOduration=37.471568144 podStartE2EDuration="1m16.683771285s" podCreationTimestamp="2025-07-11 00:29:46 +0000 UTC" firstStartedPulling="2025-07-11 00:30:22.969386219 +0000 UTC m=+54.274876364" lastFinishedPulling="2025-07-11 00:31:02.18158937 +0000 UTC m=+93.487079505" observedRunningTime="2025-07-11 00:31:02.682582325 +0000 UTC m=+93.988072470" watchObservedRunningTime="2025-07-11 00:31:02.683771285 +0000 UTC m=+93.989261440" Jul 11 00:31:02.911060 systemd[1]: Started sshd@18-10.0.0.133:22-10.0.0.1:57782.service - OpenSSH per-connection server daemon (10.0.0.1:57782). 
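The "Pulled image" messages above pack the repo tag, resolved digest, reported image size, and wall-clock pull time into one line (for the node-driver-registrar pull: 14703784 bytes read over the wire, a reported size of 16196439 bytes, 7.022117679 s). As a rough log-analysis sketch, not any containerd API, the fields can be lifted out of a journal dump with a regex keyed to the exact escaped quoting seen in these entries; the layout is assumed from this specific dump and may differ in other containerd versions:

    import re

    # Matches containerd "Pulled image" lines as they appear in this dump,
    # where inner quotes are escaped as \".
    PULLED = re.compile(
        r'msg="Pulled image \\"(?P<tag>[^\\"]+)\\"'   # repo tag
        r'.*?size \\"(?P<size>\d+)\\"'                # image size in bytes
        r' in (?P<secs>[0-9.]+)s"'                    # wall-clock pull time
    )

    def pull_stats(journal_lines):
        """Yield (tag, size_bytes, seconds) for each completed pull."""
        for line in journal_lines:
            if m := PULLED.search(line):
                yield m["tag"], int(m["size"]), float(m["secs"])

    # The goldmane pull earlier in this log works out to ~6.7 MB/s.
    line = ('msg="Pulled image \\"ghcr.io/flatcar/calico/goldmane:v3.30.2\\", '
            'size \\"66352154\\" in 9.958816828s"')
    for tag, size, secs in pull_stats([line]):
        print(f"{tag}: {size / secs / 1e6:.1f} MB/s")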
Jul 11 00:31:02.978744 sshd[6373]: Accepted publickey for core from 10.0.0.1 port 57782 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:31:02.981881 sshd[6373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:31:02.993482 systemd-logind[1559]: New session 19 of user core. Jul 11 00:31:02.999087 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 11 00:31:03.218861 kubelet[2768]: I0711 00:31:03.218801 2768 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 11 00:31:03.226279 kubelet[2768]: I0711 00:31:03.226233 2768 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 11 00:31:03.231330 sshd[6373]: pam_unix(sshd:session): session closed for user core Jul 11 00:31:03.236771 systemd[1]: sshd@18-10.0.0.133:22-10.0.0.1:57782.service: Deactivated successfully. Jul 11 00:31:03.240784 systemd[1]: session-19.scope: Deactivated successfully. Jul 11 00:31:03.241977 systemd-logind[1559]: Session 19 logged out. Waiting for processes to exit. Jul 11 00:31:03.243223 systemd-logind[1559]: Removed session 19. Jul 11 00:31:04.212174 kubelet[2768]: I0711 00:31:04.212115 2768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:31:05.683486 systemd[1]: run-containerd-runc-k8s.io-c6d25411a730fcc82dbc514d21dbce797d6d34d14f22474f0daa2d421c730e0e-runc.wmMX5P.mount: Deactivated successfully. Jul 11 00:31:08.246133 systemd[1]: Started sshd@19-10.0.0.133:22-10.0.0.1:52548.service - OpenSSH per-connection server daemon (10.0.0.1:52548). Jul 11 00:31:08.301452 sshd[6461]: Accepted publickey for core from 10.0.0.1 port 52548 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:31:08.304059 sshd[6461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:31:08.309532 systemd-logind[1559]: New session 20 of user core. Jul 11 00:31:08.320756 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 11 00:31:08.642143 sshd[6461]: pam_unix(sshd:session): session closed for user core Jul 11 00:31:08.647184 systemd[1]: sshd@19-10.0.0.133:22-10.0.0.1:52548.service: Deactivated successfully. Jul 11 00:31:08.649983 systemd-logind[1559]: Session 20 logged out. Waiting for processes to exit. Jul 11 00:31:08.650033 systemd[1]: session-20.scope: Deactivated successfully. Jul 11 00:31:08.651776 systemd-logind[1559]: Removed session 20. Jul 11 00:31:08.828776 kubelet[2768]: E0711 00:31:08.828732 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:31:13.650084 systemd[1]: Started sshd@20-10.0.0.133:22-10.0.0.1:52556.service - OpenSSH per-connection server daemon (10.0.0.1:52556). Jul 11 00:31:14.327734 sshd[6499]: Accepted publickey for core from 10.0.0.1 port 52556 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:31:14.329552 sshd[6499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:31:14.333583 systemd-logind[1559]: New session 21 of user core. Jul 11 00:31:14.341933 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 11 00:31:14.720798 systemd-resolved[1473]: Under memory pressure, flushing caches. 
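The recurring dns.go:153 "Nameserver limits exceeded" error is the kubelet noting that this node's resolv.conf lists more nameservers than the resolver will honor: the glibc resolver reads at most three nameserver entries, and the kubelet applies the same cap, so only the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) take effect and the rest are reported as omitted. A minimal sketch of that truncation rule (the fourth server below is invented for the example):

    MAX_NAMESERVERS = 3  # glibc's MAXNS; kubelet enforces the same limit

    def applied_nameservers(resolv_conf: str) -> list[str]:
        """Return the nameserver entries that will actually take effect."""
        servers = [
            parts[1]
            for line in resolv_conf.splitlines()
            if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1
        ]
        return servers[:MAX_NAMESERVERS]

    conf = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
    print(applied_nameservers(conf))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8'] - the fourth is dropped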
Jul 11 00:31:14.751799 systemd-journald[1160]: Under memory pressure, flushing caches. Jul 11 00:31:14.720806 systemd-resolved[1473]: Flushed all caches. Jul 11 00:31:14.995125 sshd[6499]: pam_unix(sshd:session): session closed for user core Jul 11 00:31:15.000084 systemd[1]: sshd@20-10.0.0.133:22-10.0.0.1:52556.service: Deactivated successfully. Jul 11 00:31:15.004561 systemd-logind[1559]: Session 21 logged out. Waiting for processes to exit. Jul 11 00:31:15.005081 systemd[1]: session-21.scope: Deactivated successfully. Jul 11 00:31:15.007479 systemd-logind[1559]: Removed session 21. Jul 11 00:31:20.006951 systemd[1]: Started sshd@21-10.0.0.133:22-10.0.0.1:44496.service - OpenSSH per-connection server daemon (10.0.0.1:44496). Jul 11 00:31:20.039422 sshd[6516]: Accepted publickey for core from 10.0.0.1 port 44496 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:31:20.041212 sshd[6516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:31:20.045716 systemd-logind[1559]: New session 22 of user core. Jul 11 00:31:20.056993 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 11 00:31:20.314578 sshd[6516]: pam_unix(sshd:session): session closed for user core Jul 11 00:31:20.323016 systemd[1]: Started sshd@22-10.0.0.133:22-10.0.0.1:44500.service - OpenSSH per-connection server daemon (10.0.0.1:44500). Jul 11 00:31:20.323928 systemd[1]: sshd@21-10.0.0.133:22-10.0.0.1:44496.service: Deactivated successfully. Jul 11 00:31:20.328150 systemd-logind[1559]: Session 22 logged out. Waiting for processes to exit. Jul 11 00:31:20.329791 systemd[1]: session-22.scope: Deactivated successfully. Jul 11 00:31:20.333157 systemd-logind[1559]: Removed session 22. Jul 11 00:31:20.364933 sshd[6529]: Accepted publickey for core from 10.0.0.1 port 44500 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:31:20.366701 sshd[6529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:31:20.371539 systemd-logind[1559]: New session 23 of user core. Jul 11 00:31:20.377148 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 11 00:31:20.760793 sshd[6529]: pam_unix(sshd:session): session closed for user core Jul 11 00:31:20.770090 systemd[1]: Started sshd@23-10.0.0.133:22-10.0.0.1:44516.service - OpenSSH per-connection server daemon (10.0.0.1:44516). Jul 11 00:31:20.770914 systemd[1]: sshd@22-10.0.0.133:22-10.0.0.1:44500.service: Deactivated successfully. Jul 11 00:31:20.779580 systemd[1]: session-23.scope: Deactivated successfully. Jul 11 00:31:20.781715 systemd-logind[1559]: Session 23 logged out. Waiting for processes to exit. Jul 11 00:31:20.783236 systemd-logind[1559]: Removed session 23. Jul 11 00:31:20.824026 sshd[6542]: Accepted publickey for core from 10.0.0.1 port 44516 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:31:20.826789 sshd[6542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:31:20.833748 systemd-logind[1559]: New session 24 of user core. Jul 11 00:31:20.840207 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 11 00:31:23.343819 sshd[6542]: pam_unix(sshd:session): session closed for user core Jul 11 00:31:23.356930 systemd[1]: Started sshd@24-10.0.0.133:22-10.0.0.1:44532.service - OpenSSH per-connection server daemon (10.0.0.1:44532). Jul 11 00:31:23.357523 systemd[1]: sshd@23-10.0.0.133:22-10.0.0.1:44516.service: Deactivated successfully. 
Jul 11 00:31:23.365741 systemd-logind[1559]: Session 24 logged out. Waiting for processes to exit. Jul 11 00:31:23.369905 systemd[1]: session-24.scope: Deactivated successfully. Jul 11 00:31:23.373246 systemd-logind[1559]: Removed session 24. Jul 11 00:31:23.418714 sshd[6559]: Accepted publickey for core from 10.0.0.1 port 44532 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:31:23.420663 sshd[6559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:31:23.425601 systemd-logind[1559]: New session 25 of user core. Jul 11 00:31:23.435971 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 11 00:31:24.224221 sshd[6559]: pam_unix(sshd:session): session closed for user core Jul 11 00:31:24.239106 systemd[1]: Started sshd@25-10.0.0.133:22-10.0.0.1:44548.service - OpenSSH per-connection server daemon (10.0.0.1:44548). Jul 11 00:31:24.241872 systemd[1]: sshd@24-10.0.0.133:22-10.0.0.1:44532.service: Deactivated successfully. Jul 11 00:31:24.245518 systemd[1]: session-25.scope: Deactivated successfully. Jul 11 00:31:24.250254 systemd-logind[1559]: Session 25 logged out. Waiting for processes to exit. Jul 11 00:31:24.253583 systemd-logind[1559]: Removed session 25. Jul 11 00:31:24.282696 sshd[6575]: Accepted publickey for core from 10.0.0.1 port 44548 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:31:24.284859 sshd[6575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:31:24.290812 systemd-logind[1559]: New session 26 of user core. Jul 11 00:31:24.299382 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 11 00:31:24.472669 sshd[6575]: pam_unix(sshd:session): session closed for user core Jul 11 00:31:24.480704 systemd[1]: sshd@25-10.0.0.133:22-10.0.0.1:44548.service: Deactivated successfully. Jul 11 00:31:24.485121 systemd-logind[1559]: Session 26 logged out. Waiting for processes to exit. Jul 11 00:31:24.485258 systemd[1]: session-26.scope: Deactivated successfully. Jul 11 00:31:24.487436 systemd-logind[1559]: Removed session 26. Jul 11 00:31:29.490128 systemd[1]: Started sshd@26-10.0.0.133:22-10.0.0.1:41740.service - OpenSSH per-connection server daemon (10.0.0.1:41740). Jul 11 00:31:29.606078 sshd[6595]: Accepted publickey for core from 10.0.0.1 port 41740 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:31:29.607728 sshd[6595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:31:29.611873 systemd-logind[1559]: New session 27 of user core. Jul 11 00:31:29.620052 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 11 00:31:29.800055 sshd[6595]: pam_unix(sshd:session): session closed for user core Jul 11 00:31:29.806932 systemd[1]: sshd@26-10.0.0.133:22-10.0.0.1:41740.service: Deactivated successfully. Jul 11 00:31:29.813308 systemd-logind[1559]: Session 27 logged out. Waiting for processes to exit. Jul 11 00:31:29.813407 systemd[1]: session-27.scope: Deactivated successfully. Jul 11 00:31:29.814866 systemd-logind[1559]: Removed session 27. Jul 11 00:31:34.816529 systemd[1]: Started sshd@27-10.0.0.133:22-10.0.0.1:41754.service - OpenSSH per-connection server daemon (10.0.0.1:41754). 
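Every SSH connection in this log follows the same systemd pattern: a per-connection unit named sshd@N-LOCAL:PORT-PEER:PORT.service starts, pam/logind opens session-N.scope, and teardown deactivates both and logs "Removed session N." Pairing the logind "New session" and "Removed session" messages therefore yields per-session durations; the helper below is a hypothetical sketch that assumes journal lines shaped exactly like the ones here (syslog-style timestamps with no year):

    import re
    from datetime import datetime

    NEW     = re.compile(r"^(\w+ +\d+ [\d:.]+) .*systemd-logind\[\d+\]: New session (\d+) ")
    REMOVED = re.compile(r"^(\w+ +\d+ [\d:.]+) .*systemd-logind\[\d+\]: Removed session (\d+)\.")

    def stamp(s):
        # Journal short timestamps omit the year; assume one for the arithmetic.
        return datetime.strptime("2025 " + s, "%Y %b %d %H:%M:%S.%f")

    def session_durations(lines):
        opened, closed = {}, {}
        for line in lines:
            if m := NEW.match(line):
                opened[m[2]] = stamp(m[1])
            elif (m := REMOVED.match(line)) and m[2] in opened:
                closed[m[2]] = (stamp(m[1]) - opened.pop(m[2])).total_seconds()
        return closed

    lines = [
        "Jul 11 00:30:50.972160 systemd-logind[1559]: New session 17 of user core.",
        "Jul 11 00:30:51.584548 systemd-logind[1559]: Removed session 17.",
    ]
    print(session_durations(lines))  # {'17': 0.612388}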
Jul 11 00:31:34.858091 sshd[6613]: Accepted publickey for core from 10.0.0.1 port 41754 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:31:34.860380 sshd[6613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:31:34.867890 systemd-logind[1559]: New session 28 of user core. Jul 11 00:31:34.876486 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 11 00:31:35.042369 sshd[6613]: pam_unix(sshd:session): session closed for user core Jul 11 00:31:35.048829 systemd-logind[1559]: Session 28 logged out. Waiting for processes to exit. Jul 11 00:31:35.049792 systemd[1]: sshd@27-10.0.0.133:22-10.0.0.1:41754.service: Deactivated successfully. Jul 11 00:31:35.054567 systemd[1]: session-28.scope: Deactivated successfully. Jul 11 00:31:35.056492 systemd-logind[1559]: Removed session 28. Jul 11 00:31:37.715772 systemd[1]: run-containerd-runc-k8s.io-a581d64f8da84e30444877c2ec58c05fb824f7d1eaf60b99e7abe35f9cd2d78f-runc.HMG8pc.mount: Deactivated successfully. Jul 11 00:31:40.050978 systemd[1]: Started sshd@28-10.0.0.133:22-10.0.0.1:45078.service - OpenSSH per-connection server daemon (10.0.0.1:45078). Jul 11 00:31:40.094436 sshd[6698]: Accepted publickey for core from 10.0.0.1 port 45078 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:31:40.094334 sshd[6698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:31:40.101425 systemd-logind[1559]: New session 29 of user core. Jul 11 00:31:40.109966 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 11 00:31:40.327152 sshd[6698]: pam_unix(sshd:session): session closed for user core Jul 11 00:31:40.334063 systemd[1]: sshd@28-10.0.0.133:22-10.0.0.1:45078.service: Deactivated successfully. Jul 11 00:31:40.336376 systemd[1]: session-29.scope: Deactivated successfully. Jul 11 00:31:40.337286 systemd-logind[1559]: Session 29 logged out. Waiting for processes to exit. Jul 11 00:31:40.338409 systemd-logind[1559]: Removed session 29. Jul 11 00:31:40.766948 update_engine[1568]: I20250711 00:31:40.766846 1568 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 11 00:31:40.766948 update_engine[1568]: I20250711 00:31:40.766933 1568 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 11 00:31:40.768040 update_engine[1568]: I20250711 00:31:40.768005 1568 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 11 00:31:40.769074 update_engine[1568]: I20250711 00:31:40.769017 1568 omaha_request_params.cc:62] Current group set to lts Jul 11 00:31:40.769248 update_engine[1568]: I20250711 00:31:40.769219 1568 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 11 00:31:40.769248 update_engine[1568]: I20250711 00:31:40.769237 1568 update_attempter.cc:643] Scheduling an action processor start. 
Jul 11 00:31:40.769354 update_engine[1568]: I20250711 00:31:40.769266 1568 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 11 00:31:40.769354 update_engine[1568]: I20250711 00:31:40.769327 1568 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 11 00:31:40.769477 update_engine[1568]: I20250711 00:31:40.769407 1568 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 11 00:31:40.769477 update_engine[1568]: I20250711 00:31:40.769419 1568 omaha_request_action.cc:272] Request: Jul 11 00:31:40.769477 update_engine[1568]: [multi-line Omaha request body not captured in this dump] Jul 11 00:31:40.769477 update_engine[1568]: I20250711 00:31:40.769429 1568 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 11 00:31:40.778469 update_engine[1568]: I20250711 00:31:40.778415 1568 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 11 00:31:40.778836 update_engine[1568]: I20250711 00:31:40.778771 1568 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 11 00:31:40.781599 locksmithd[1606]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 11 00:31:40.786204 update_engine[1568]: E20250711 00:31:40.786167 1568 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 11 00:31:40.786249 update_engine[1568]: I20250711 00:31:40.786237 1568 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 11 00:31:44.828826 kubelet[2768]: E0711 00:31:44.828784 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:31:45.339954 systemd[1]: Started sshd@29-10.0.0.133:22-10.0.0.1:45094.service - OpenSSH per-connection server daemon (10.0.0.1:45094). Jul 11 00:31:45.374915 sshd[6722]: Accepted publickey for core from 10.0.0.1 port 45094 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:31:45.376655 sshd[6722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:31:45.381215 systemd-logind[1559]: New session 30 of user core. Jul 11 00:31:45.388062 systemd[1]: Started session-30.scope - Session 30 of User core. Jul 11 00:31:45.527990 sshd[6722]: pam_unix(sshd:session): session closed for user core Jul 11 00:31:45.532546 systemd[1]: sshd@29-10.0.0.133:22-10.0.0.1:45094.service: Deactivated successfully. Jul 11 00:31:45.535420 systemd-logind[1559]: Session 30 logged out. Waiting for processes to exit. Jul 11 00:31:45.535725 systemd[1]: session-30.scope: Deactivated successfully. Jul 11 00:31:45.536644 systemd-logind[1559]: Removed session 30. Jul 11 00:31:45.828353 kubelet[2768]: E0711 00:31:45.828306 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:31:50.539007 systemd[1]: Started sshd@30-10.0.0.133:22-10.0.0.1:49404.service - OpenSSH per-connection server daemon (10.0.0.1:49404).
Jul 11 00:31:50.633717 sshd[6760]: Accepted publickey for core from 10.0.0.1 port 49404 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:31:50.633432 sshd[6760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:31:50.638933 systemd-logind[1559]: New session 31 of user core. Jul 11 00:31:50.643395 systemd[1]: Started session-31.scope - Session 31 of User core. Jul 11 00:31:50.742074 update_engine[1568]: I20250711 00:31:50.741977 1568 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 11 00:31:50.742763 update_engine[1568]: I20250711 00:31:50.742314 1568 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 11 00:31:50.742763 update_engine[1568]: I20250711 00:31:50.742553 1568 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 11 00:31:50.749688 update_engine[1568]: E20250711 00:31:50.749603 1568 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 11 00:31:50.749867 update_engine[1568]: I20250711 00:31:50.749725 1568 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 11 00:31:50.889518 sshd[6760]: pam_unix(sshd:session): session closed for user core Jul 11 00:31:50.892962 systemd[1]: sshd@30-10.0.0.133:22-10.0.0.1:49404.service: Deactivated successfully. Jul 11 00:31:50.898289 systemd[1]: session-31.scope: Deactivated successfully. Jul 11 00:31:50.900262 systemd-logind[1559]: Session 31 logged out. Waiting for processes to exit. Jul 11 00:31:50.901366 systemd-logind[1559]: Removed session 31.
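The update_engine failures above are expected on a machine with automatic updates switched off: the Omaha request is being posted to the literal hostname "disabled" (typically the Flatcar setting SERVER=disabled in /etc/flatcar/update.conf), so every fetch ends in "Could not resolve host: disabled" and the fetcher schedules another pass; retry 1 (00:31:40.786) and retry 2 (00:31:50.749) land roughly ten seconds apart. A small sketch for measuring that cadence, assuming journal lines shaped like the two in this dump:

    import re
    from datetime import datetime

    RETRY = re.compile(
        r"^(\w+ +\d+ [\d:.]+) .*libcurl_http_fetcher\.cc:\d+\] No HTTP response, retry \d+"
    )

    def retry_gaps(lines):
        """Seconds between successive 'No HTTP response, retry N' entries."""
        times = [
            datetime.strptime("2025 " + m[1], "%Y %b %d %H:%M:%S.%f")
            for line in lines
            if (m := RETRY.match(line))
        ]
        return [(b - a).total_seconds() for a, b in zip(times, times[1:])]

    lines = [
        "Jul 11 00:31:40.786249 update_engine[1568]: I20250711 00:31:40.786237 1568 "
        "libcurl_http_fetcher.cc:283] No HTTP response, retry 1",
        "Jul 11 00:31:50.749867 update_engine[1568]: I20250711 00:31:50.749725 1568 "
        "libcurl_http_fetcher.cc:283] No HTTP response, retry 2",
    ]
    print(retry_gaps(lines))  # [9.963618]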