Jul 11 00:19:23.972901 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Jul 10 22:46:23 -00 2025 Jul 11 00:19:23.972928 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=836a88100946313ecc17069f63287f86e7d91f7c389df4bcd3f3e02beb9683e1 Jul 11 00:19:23.972942 kernel: BIOS-provided physical RAM map: Jul 11 00:19:23.972950 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 11 00:19:23.972957 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 11 00:19:23.972965 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 11 00:19:23.972974 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jul 11 00:19:23.972982 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 11 00:19:23.972989 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jul 11 00:19:23.972997 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jul 11 00:19:23.973008 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jul 11 00:19:23.973016 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jul 11 00:19:23.973028 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jul 11 00:19:23.973036 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jul 11 00:19:23.973048 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jul 11 00:19:23.973057 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 11 00:19:23.973069 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jul 11 00:19:23.973077 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jul 11 00:19:23.973085 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 11 00:19:23.973094 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jul 11 00:19:23.973102 kernel: NX (Execute Disable) protection: active Jul 11 00:19:23.973110 kernel: APIC: Static calls initialized Jul 11 00:19:23.973119 kernel: efi: EFI v2.7 by EDK II Jul 11 00:19:23.973127 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Jul 11 00:19:23.973135 kernel: SMBIOS 2.8 present. 
Jul 11 00:19:23.973144 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Jul 11 00:19:23.973152 kernel: Hypervisor detected: KVM Jul 11 00:19:23.973163 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 11 00:19:23.973171 kernel: kvm-clock: using sched offset of 5405591488 cycles Jul 11 00:19:23.973180 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 11 00:19:23.973189 kernel: tsc: Detected 2794.746 MHz processor Jul 11 00:19:23.973198 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 11 00:19:23.973207 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 11 00:19:23.973215 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jul 11 00:19:23.973224 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jul 11 00:19:23.973233 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 11 00:19:23.973262 kernel: Using GB pages for direct mapping Jul 11 00:19:23.973271 kernel: Secure boot disabled Jul 11 00:19:23.973279 kernel: ACPI: Early table checksum verification disabled Jul 11 00:19:23.973288 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jul 11 00:19:23.973301 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jul 11 00:19:23.973310 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:19:23.973320 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:19:23.973332 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jul 11 00:19:23.973341 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:19:23.973353 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:19:23.973362 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:19:23.973371 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:19:23.973380 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jul 11 00:19:23.973389 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jul 11 00:19:23.973401 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jul 11 00:19:23.973410 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jul 11 00:19:23.973419 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jul 11 00:19:23.973428 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jul 11 00:19:23.973437 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jul 11 00:19:23.973446 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jul 11 00:19:23.973455 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jul 11 00:19:23.973464 kernel: No NUMA configuration found Jul 11 00:19:23.973476 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jul 11 00:19:23.973488 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jul 11 00:19:23.973497 kernel: Zone ranges: Jul 11 00:19:23.973506 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 11 00:19:23.973515 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jul 11 00:19:23.973524 kernel: Normal empty Jul 11 00:19:23.973533 kernel: Movable zone start for each node Jul 11 00:19:23.973542 kernel: Early memory node ranges
Jul 11 00:19:23.973551 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 11 00:19:23.973560 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jul 11 00:19:23.973569 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jul 11 00:19:23.973581 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jul 11 00:19:23.973590 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jul 11 00:19:23.973599 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jul 11 00:19:23.973611 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jul 11 00:19:23.973621 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 11 00:19:23.973631 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 11 00:19:23.973653 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jul 11 00:19:23.973662 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 11 00:19:23.973672 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jul 11 00:19:23.973687 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jul 11 00:19:23.973697 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jul 11 00:19:23.973707 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 11 00:19:23.973718 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 11 00:19:23.973728 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 11 00:19:23.973739 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 11 00:19:23.973748 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 11 00:19:23.973758 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 11 00:19:23.973768 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 11 00:19:23.973782 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 11 00:19:23.973792 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 11 00:19:23.973802 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 11 00:19:23.973812 kernel: TSC deadline timer available Jul 11 00:19:23.973822 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jul 11 00:19:23.973833 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 11 00:19:23.973842 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 11 00:19:23.973852 kernel: kvm-guest: setup PV sched yield Jul 11 00:19:23.973862 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jul 11 00:19:23.973884 kernel: Booting paravirtualized kernel on KVM Jul 11 00:19:23.973900 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 11 00:19:23.973911 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 11 00:19:23.973922 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288 Jul 11 00:19:23.973933 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152 Jul 11 00:19:23.973944 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 11 00:19:23.973954 kernel: kvm-guest: PV spinlocks enabled Jul 11 00:19:23.973964 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 11 00:19:23.973977 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=836a88100946313ecc17069f63287f86e7d91f7c389df4bcd3f3e02beb9683e1
Jul 11 00:19:23.973995 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 11 00:19:23.974006 kernel: random: crng init done Jul 11 00:19:23.974017 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 11 00:19:23.974028 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 11 00:19:23.974038 kernel: Fallback order for Node 0: 0 Jul 11 00:19:23.974049 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jul 11 00:19:23.974059 kernel: Policy zone: DMA32 Jul 11 00:19:23.974070 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 11 00:19:23.974081 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2295K rwdata, 22744K rodata, 42872K init, 2320K bss, 166140K reserved, 0K cma-reserved) Jul 11 00:19:23.974095 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 11 00:19:23.974106 kernel: ftrace: allocating 37966 entries in 149 pages Jul 11 00:19:23.974117 kernel: ftrace: allocated 149 pages with 4 groups Jul 11 00:19:23.974128 kernel: Dynamic Preempt: voluntary Jul 11 00:19:23.974149 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 11 00:19:23.974164 kernel: rcu: RCU event tracing is enabled. Jul 11 00:19:23.974176 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 11 00:19:23.974187 kernel: Trampoline variant of Tasks RCU enabled. Jul 11 00:19:23.974198 kernel: Rude variant of Tasks RCU enabled. Jul 11 00:19:23.974209 kernel: Tracing variant of Tasks RCU enabled. Jul 11 00:19:23.974220 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 11 00:19:23.974231 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 11 00:19:23.974275 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 11 00:19:23.974290 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 11 00:19:23.974302 kernel: Console: colour dummy device 80x25 Jul 11 00:19:23.974313 kernel: printk: console [ttyS0] enabled Jul 11 00:19:23.974324 kernel: ACPI: Core revision 20230628 Jul 11 00:19:23.974339 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 11 00:19:23.974350 kernel: APIC: Switch to symmetric I/O mode setup Jul 11 00:19:23.974360 kernel: x2apic enabled Jul 11 00:19:23.974371 kernel: APIC: Switched APIC routing to: physical x2apic Jul 11 00:19:23.974382 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jul 11 00:19:23.974394 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jul 11 00:19:23.974406 kernel: kvm-guest: setup PV IPIs Jul 11 00:19:23.974417 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 11 00:19:23.974428 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jul 11 00:19:23.974443 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Jul 11 00:19:23.974454 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 11 00:19:23.974465 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 11 00:19:23.974477 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 11 00:19:23.974488 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 11 00:19:23.974499 kernel: Spectre V2 : Mitigation: Retpolines Jul 11 00:19:23.974511 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 11 00:19:23.974523 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 11 00:19:23.974537 kernel: RETBleed: Mitigation: untrained return thunk Jul 11 00:19:23.974549 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 11 00:19:23.974561 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 11 00:19:23.974572 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jul 11 00:19:23.974587 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jul 11 00:19:23.974599 kernel: x86/bugs: return thunk changed Jul 11 00:19:23.974610 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jul 11 00:19:23.974621 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 11 00:19:23.974633 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 11 00:19:23.974647 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 11 00:19:23.974659 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 11 00:19:23.974670 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 11 00:19:23.974681 kernel: Freeing SMP alternatives memory: 32K Jul 11 00:19:23.974692 kernel: pid_max: default: 32768 minimum: 301 Jul 11 00:19:23.974704 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 11 00:19:23.974715 kernel: landlock: Up and running. Jul 11 00:19:23.974726 kernel: SELinux: Initializing. Jul 11 00:19:23.974737 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 11 00:19:23.974752 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 11 00:19:23.974763 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 11 00:19:23.974774 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 11 00:19:23.974786 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 11 00:19:23.974797 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 11 00:19:23.974808 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 11 00:19:23.974819 kernel: ... version: 0 Jul 11 00:19:23.974831 kernel: ... bit width: 48 Jul 11 00:19:23.974845 kernel: ... generic registers: 6 Jul 11 00:19:23.974856 kernel: ... value mask: 0000ffffffffffff Jul 11 00:19:23.974867 kernel: ... max period: 00007fffffffffff Jul 11 00:19:23.974888 kernel: ... fixed-purpose events: 0 Jul 11 00:19:23.974899 kernel: ... event mask: 000000000000003f Jul 11 00:19:23.974910 kernel: signal: max sigframe size: 1776 Jul 11 00:19:23.974922 kernel: rcu: Hierarchical SRCU implementation.
Jul 11 00:19:23.974933 kernel: rcu: Max phase no-delay instances is 400. Jul 11 00:19:23.974944 kernel: smp: Bringing up secondary CPUs ... Jul 11 00:19:23.974956 kernel: smpboot: x86: Booting SMP configuration: Jul 11 00:19:23.974970 kernel: .... node #0, CPUs: #1 #2 #3 Jul 11 00:19:23.974982 kernel: smp: Brought up 1 node, 4 CPUs Jul 11 00:19:23.974993 kernel: smpboot: Max logical packages: 1 Jul 11 00:19:23.975004 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS) Jul 11 00:19:23.975015 kernel: devtmpfs: initialized Jul 11 00:19:23.975026 kernel: x86/mm: Memory block size: 128MB Jul 11 00:19:23.975037 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jul 11 00:19:23.975048 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jul 11 00:19:23.975059 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jul 11 00:19:23.975074 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jul 11 00:19:23.975085 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jul 11 00:19:23.975095 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 11 00:19:23.975105 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 11 00:19:23.975115 kernel: pinctrl core: initialized pinctrl subsystem Jul 11 00:19:23.975125 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 11 00:19:23.975136 kernel: audit: initializing netlink subsys (disabled) Jul 11 00:19:23.975147 kernel: audit: type=2000 audit(1752193163.060:1): state=initialized audit_enabled=0 res=1 Jul 11 00:19:23.975157 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 11 00:19:23.975170 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 11 00:19:23.975181 kernel: cpuidle: using governor menu Jul 11 00:19:23.975192 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 11 00:19:23.975207 kernel: dca service started, version 1.12.1 Jul 11 00:19:23.975256 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jul 11 00:19:23.975269 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jul 11 00:19:23.975297 kernel: PCI: Using configuration type 1 for base access Jul 11 00:19:23.975308 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 11 00:19:23.975319 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 11 00:19:23.975335 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 11 00:19:23.975346 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 11 00:19:23.975357 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 11 00:19:23.975368 kernel: ACPI: Added _OSI(Module Device) Jul 11 00:19:23.975378 kernel: ACPI: Added _OSI(Processor Device) Jul 11 00:19:23.975389 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 11 00:19:23.975406 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 11 00:19:23.975417 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 11 00:19:23.975427 kernel: ACPI: Interpreter enabled Jul 11 00:19:23.975443 kernel: ACPI: PM: (supports S0 S3 S5) Jul 11 00:19:23.975454 kernel: ACPI: Using IOAPIC for interrupt routing Jul 11 00:19:23.975466 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 11 00:19:23.975476 kernel: PCI: Using E820 reservations for host bridge windows Jul 11 00:19:23.975488 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 11 00:19:23.975499 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 11 00:19:23.975775 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 11 00:19:23.975963 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 11 00:19:23.976131 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 11 00:19:23.976145 kernel: PCI host bridge to bus 0000:00 Jul 11 00:19:23.976366 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 11 00:19:23.977329 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 11 00:19:23.977502 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 11 00:19:23.977666 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jul 11 00:19:23.977837 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 11 00:19:23.978014 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Jul 11 00:19:23.978179 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 11 00:19:23.978386 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jul 11 00:19:23.978552 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jul 11 00:19:23.978696 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jul 11 00:19:23.978836 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jul 11 00:19:23.979028 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jul 11 00:19:23.979185 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jul 11 00:19:23.979371 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 11 00:19:23.979567 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jul 11 00:19:23.979734 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Jul 11 00:19:23.979939 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jul 11 00:19:23.980105 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jul 11 00:19:23.980324 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jul 11 00:19:23.980494 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jul 11 00:19:23.980661 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] 
Jul 11 00:19:23.980826 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jul 11 00:19:23.981100 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jul 11 00:19:23.981289 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jul 11 00:19:23.981465 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jul 11 00:19:23.981630 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jul 11 00:19:23.981795 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jul 11 00:19:23.982019 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jul 11 00:19:23.982188 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 11 00:19:23.983568 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jul 11 00:19:23.983742 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jul 11 00:19:23.983951 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jul 11 00:19:23.984139 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jul 11 00:19:23.984343 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jul 11 00:19:23.984360 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 11 00:19:23.984372 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 11 00:19:23.984384 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 11 00:19:23.984395 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 11 00:19:23.984407 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 11 00:19:23.984424 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 11 00:19:23.984436 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 11 00:19:23.984448 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 11 00:19:23.984459 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 11 00:19:23.984471 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 11 00:19:23.984482 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 11 00:19:23.984493 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 11 00:19:23.984505 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 11 00:19:23.984516 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 11 00:19:23.984531 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 11 00:19:23.984543 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 11 00:19:23.984554 kernel: iommu: Default domain type: Translated Jul 11 00:19:23.984566 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 11 00:19:23.984577 kernel: efivars: Registered efivars operations Jul 11 00:19:23.984588 kernel: PCI: Using ACPI for IRQ routing Jul 11 00:19:23.984599 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 11 00:19:23.984611 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jul 11 00:19:23.984622 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jul 11 00:19:23.984637 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jul 11 00:19:23.984649 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jul 11 00:19:23.984813 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 11 00:19:23.985013 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 11 00:19:23.985196 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 11 00:19:23.985213 kernel: vgaarb: loaded Jul 11 00:19:23.985225 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 11 00:19:23.985259 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 11 00:19:23.985271 kernel: clocksource: Switched to clocksource kvm-clock Jul 11 00:19:23.985289 kernel: VFS: Disk quotas dquot_6.6.0 Jul 11 00:19:23.985301 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 11 00:19:23.985313 kernel: pnp: PnP ACPI init Jul 11 00:19:23.985517 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jul 11 00:19:23.985535 kernel: pnp: PnP ACPI: found 6 devices Jul 11 00:19:23.985547 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 11 00:19:23.985559 kernel: NET: Registered PF_INET protocol family Jul 11 00:19:23.985571 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 11 00:19:23.985587 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 11 00:19:23.985599 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 11 00:19:23.985611 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 11 00:19:23.985622 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 11 00:19:23.985634 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 11 00:19:23.985645 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 11 00:19:23.985657 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 11 00:19:23.985668 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 11 00:19:23.985683 kernel: NET: Registered PF_XDP protocol family Jul 11 00:19:23.985850 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jul 11 00:19:23.986032 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jul 11 00:19:23.986183 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 11 00:19:23.986352 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 11 00:19:23.986501 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 11 00:19:23.986648 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jul 11 00:19:23.986795 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jul 11 00:19:23.986965 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Jul 11 00:19:23.986982 kernel: PCI: CLS 0 bytes, default 64 Jul 11 00:19:23.986994 kernel: Initialise system trusted keyrings Jul 11 00:19:23.987005 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 11 00:19:23.987016 kernel: Key type asymmetric registered Jul 11 00:19:23.987027 kernel: Asymmetric key parser 'x509' registered Jul 11 00:19:23.987039 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 11 00:19:23.987050 kernel: io scheduler mq-deadline registered Jul 11 00:19:23.987062 kernel: io scheduler kyber registered Jul 11 00:19:23.987078 kernel: io scheduler bfq registered Jul 11 00:19:23.987090 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 11 00:19:23.987102 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 11 00:19:23.987113 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 11 00:19:23.987125 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 11 00:19:23.987136 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 11 00:19:23.987148 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 11 00:19:23.987160 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 11 00:19:23.987171 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 11 00:19:23.987186 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 11 00:19:23.987198 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 11 00:19:23.987556 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 11 00:19:23.987714 kernel: rtc_cmos 00:04: registered as rtc0 Jul 11 00:19:23.987867 kernel: rtc_cmos 00:04: setting system clock to 2025-07-11T00:19:23 UTC (1752193163) Jul 11 00:19:23.988033 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jul 11 00:19:23.988049 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 11 00:19:23.988061 kernel: efifb: probing for efifb Jul 11 00:19:23.988079 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jul 11 00:19:23.988090 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jul 11 00:19:23.988101 kernel: efifb: scrolling: redraw Jul 11 00:19:23.988113 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jul 11 00:19:23.988124 kernel: Console: switching to colour frame buffer device 100x37 Jul 11 00:19:23.988136 kernel: fb0: EFI VGA frame buffer device Jul 11 00:19:23.988173 kernel: pstore: Using crash dump compression: deflate Jul 11 00:19:23.988188 kernel: pstore: Registered efi_pstore as persistent store backend Jul 11 00:19:23.988200 kernel: NET: Registered PF_INET6 protocol family Jul 11 00:19:23.988215 kernel: Segment Routing with IPv6 Jul 11 00:19:23.988227 kernel: In-situ OAM (IOAM) with IPv6 Jul 11 00:19:23.988253 kernel: NET: Registered PF_PACKET protocol family Jul 11 00:19:23.988266 kernel: Key type dns_resolver registered Jul 11 00:19:23.988278 kernel: IPI shorthand broadcast: enabled Jul 11 00:19:23.988290 kernel: sched_clock: Marking stable (1048002651, 157968560)->(1259726436, -53755225) Jul 11 00:19:23.988302 kernel: registered taskstats version 1 Jul 11 00:19:23.988314 kernel: Loading compiled-in X.509 certificates Jul 11 00:19:23.988326 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: 5956f0842928c96096c398e9db55919cd236a39f' Jul 11 00:19:23.988342 kernel: Key type .fscrypt registered Jul 11 00:19:23.988354 kernel: Key type fscrypt-provisioning registered Jul 11 00:19:23.988366 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 11 00:19:23.988381 kernel: ima: Allocated hash algorithm: sha1 Jul 11 00:19:23.988393 kernel: ima: No architecture policies found Jul 11 00:19:23.988405 kernel: clk: Disabling unused clocks Jul 11 00:19:23.988417 kernel: Freeing unused kernel image (initmem) memory: 42872K Jul 11 00:19:23.988428 kernel: Write protecting the kernel read-only data: 36864k Jul 11 00:19:23.988441 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Jul 11 00:19:23.988455 kernel: Run /init as init process Jul 11 00:19:23.988467 kernel: with arguments: Jul 11 00:19:23.988479 kernel: /init Jul 11 00:19:23.988491 kernel: with environment: Jul 11 00:19:23.988502 kernel: HOME=/ Jul 11 00:19:23.988514 kernel: TERM=linux Jul 11 00:19:23.988526 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 11 00:19:23.988540 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 11 00:19:23.988559 systemd[1]: Detected virtualization kvm. Jul 11 00:19:23.988571 systemd[1]: Detected architecture x86-64. Jul 11 00:19:23.988583 systemd[1]: Running in initrd. Jul 11 00:19:23.988596 systemd[1]: No hostname configured, using default hostname. Jul 11 00:19:23.988608 systemd[1]: Hostname set to <localhost>. Jul 11 00:19:23.988627 systemd[1]: Initializing machine ID from VM UUID. Jul 11 00:19:23.988639 systemd[1]: Queued start job for default target initrd.target. Jul 11 00:19:23.988652 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:19:23.988665 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:19:23.988678 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 11 00:19:23.988690 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 11 00:19:23.988702 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 11 00:19:23.988714 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 11 00:19:23.988735 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 11 00:19:23.988752 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 11 00:19:23.988766 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:19:23.988783 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 11 00:19:23.988798 systemd[1]: Reached target paths.target - Path Units. Jul 11 00:19:23.988817 systemd[1]: Reached target slices.target - Slice Units. Jul 11 00:19:23.988833 systemd[1]: Reached target swap.target - Swaps. Jul 11 00:19:23.988846 systemd[1]: Reached target timers.target - Timer Units. Jul 11 00:19:23.988858 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 11 00:19:23.988871 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 11 00:19:23.988894 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 11 00:19:23.988906 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 11 00:19:23.988919 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:19:23.988931 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 11 00:19:23.988944 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:19:23.988960 systemd[1]: Reached target sockets.target - Socket Units. Jul 11 00:19:23.988972 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 11 00:19:23.988984 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 11 00:19:23.988997 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 11 00:19:23.989009 systemd[1]: Starting systemd-fsck-usr.service... Jul 11 00:19:23.989021 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 11 00:19:23.989034 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 11 00:19:23.989046 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:19:23.989059 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 11 00:19:23.989074 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:19:23.989087 systemd[1]: Finished systemd-fsck-usr.service. Jul 11 00:19:23.989126 systemd-journald[193]: Collecting audit messages is disabled. Jul 11 00:19:23.989159 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 11 00:19:23.989172 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:19:23.989184 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 11 00:19:23.989197 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 11 00:19:23.989210 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 11 00:19:23.989225 systemd-journald[193]: Journal started Jul 11 00:19:23.989264 systemd-journald[193]: Runtime Journal (/run/log/journal/b3763c240d1a4772948f3594dbdd2d90) is 6.0M, max 48.3M, 42.2M free. Jul 11 00:19:23.973683 systemd-modules-load[194]: Inserted module 'overlay' Jul 11 00:19:23.992265 systemd[1]: Started systemd-journald.service - Journal Service. Jul 11 00:19:24.001735 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 11 00:19:24.003188 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:19:24.005337 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 11 00:19:24.027485 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 11 00:19:24.027837 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:19:24.031320 kernel: Bridge firewalling registered Jul 11 00:19:24.029991 systemd-modules-load[194]: Inserted module 'br_netfilter' Jul 11 00:19:24.031364 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 11 00:19:24.036338 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:19:24.040915 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 11 00:19:24.051350 dracut-cmdline[220]: dracut-dracut-053 Jul 11 00:19:24.051406 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:19:24.078516 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=836a88100946313ecc17069f63287f86e7d91f7c389df4bcd3f3e02beb9683e1 Jul 11 00:19:24.084440 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 11 00:19:24.120634 systemd-resolved[238]: Positive Trust Anchors: Jul 11 00:19:24.120652 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 00:19:24.120683 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 11 00:19:24.123798 systemd-resolved[238]: Defaulting to hostname 'linux'. Jul 11 00:19:24.125319 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 11 00:19:24.171049 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 11 00:19:24.188260 kernel: SCSI subsystem initialized Jul 11 00:19:24.198265 kernel: Loading iSCSI transport class v2.0-870. Jul 11 00:19:24.257290 kernel: iscsi: registered transport (tcp) Jul 11 00:19:24.333494 kernel: iscsi: registered transport (qla4xxx) Jul 11 00:19:24.333590 kernel: QLogic iSCSI HBA Driver Jul 11 00:19:24.383280 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 11 00:19:24.394451 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 11 00:19:24.479880 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 11 00:19:24.479942 kernel: device-mapper: uevent: version 1.0.3 Jul 11 00:19:24.479973 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 11 00:19:24.557288 kernel: raid6: avx2x4 gen() 28095 MB/s Jul 11 00:19:24.578296 kernel: raid6: avx2x2 gen() 25620 MB/s Jul 11 00:19:24.620656 kernel: raid6: avx2x1 gen() 23078 MB/s Jul 11 00:19:24.620758 kernel: raid6: using algorithm avx2x4 gen() 28095 MB/s Jul 11 00:19:24.649446 kernel: raid6: .... xor() 6605 MB/s, rmw enabled Jul 11 00:19:24.649553 kernel: raid6: using avx2x2 recovery algorithm Jul 11 00:19:24.700296 kernel: xor: automatically using best checksumming function avx Jul 11 00:19:24.881306 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 11 00:19:24.897485 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 11 00:19:24.931563 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 00:19:24.949927 systemd-udevd[413]: Using default interface naming scheme 'v255'. Jul 11 00:19:24.956234 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jul 11 00:19:24.977539 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 11 00:19:24.992756 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation Jul 11 00:19:25.037632 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 11 00:19:25.064601 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 11 00:19:25.175456 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:19:25.185947 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 11 00:19:25.203140 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 11 00:19:25.207106 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 11 00:19:25.208400 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:19:25.209597 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 11 00:19:25.217564 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 11 00:19:25.237273 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 11 00:19:25.241504 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 11 00:19:25.243332 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 11 00:19:25.247265 kernel: cryptd: max_cpu_qlen set to 1000 Jul 11 00:19:25.252232 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 11 00:19:25.252321 kernel: GPT:9289727 != 19775487 Jul 11 00:19:25.252335 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 11 00:19:25.252349 kernel: GPT:9289727 != 19775487 Jul 11 00:19:25.252361 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 11 00:19:25.252373 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:19:25.253549 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 11 00:19:25.254818 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:19:25.258037 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 11 00:19:25.261732 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:19:25.263391 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:19:25.266359 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:19:25.271221 kernel: AVX2 version of gcm_enc/dec engaged. Jul 11 00:19:25.271294 kernel: AES CTR mode by8 optimization enabled Jul 11 00:19:25.277664 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:19:25.293399 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (462) Jul 11 00:19:25.293463 kernel: libata version 3.00 loaded. Jul 11 00:19:25.293484 kernel: BTRFS: device fsid 54fb9359-b495-4b0c-b313-b0e2955e4a38 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (467) Jul 11 00:19:25.298834 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 11 00:19:25.302863 kernel: ahci 0000:00:1f.2: version 3.0 Jul 11 00:19:25.303126 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 11 00:19:25.304774 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 11 00:19:25.305025 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 11 00:19:25.309271 kernel: scsi host0: ahci Jul 11 00:19:25.313266 kernel: scsi host1: ahci Jul 11 00:19:25.313547 kernel: scsi host2: ahci Jul 11 00:19:25.315291 kernel: scsi host3: ahci Jul 11 00:19:25.315549 kernel: scsi host4: ahci Jul 11 00:19:25.316303 kernel: scsi host5: ahci Jul 11 00:19:25.316622 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jul 11 00:19:25.319287 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jul 11 00:19:25.319340 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jul 11 00:19:25.319356 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jul 11 00:19:25.320399 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 11 00:19:25.324712 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jul 11 00:19:25.324753 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jul 11 00:19:25.329746 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 11 00:19:25.337772 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 11 00:19:25.356796 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 11 00:19:25.360288 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 11 00:19:25.418637 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 11 00:19:25.421522 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:19:25.421637 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:19:25.428519 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:19:25.431910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:19:25.481616 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:19:25.495643 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 11 00:19:25.520486 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:19:25.573779 disk-uuid[557]: Primary Header is updated. Jul 11 00:19:25.573779 disk-uuid[557]: Secondary Entries is updated. Jul 11 00:19:25.573779 disk-uuid[557]: Secondary Header is updated. 
Jul 11 00:19:25.577781 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:19:25.582281 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:19:25.587273 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:19:25.629821 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 11 00:19:25.629892 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 11 00:19:25.655297 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 11 00:19:25.655449 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 11 00:19:25.657372 kernel: ata3.00: applying bridge limits Jul 11 00:19:25.658279 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 11 00:19:25.659320 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 11 00:19:25.660274 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 11 00:19:25.661291 kernel: ata3.00: configured for UDMA/100 Jul 11 00:19:25.663171 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 11 00:19:25.725270 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 11 00:19:25.725580 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 11 00:19:25.757294 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 11 00:19:26.596333 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:19:26.596411 disk-uuid[571]: The operation has completed successfully. Jul 11 00:19:26.625756 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 11 00:19:26.625963 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 11 00:19:26.666517 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 11 00:19:26.670590 sh[602]: Success Jul 11 00:19:26.697456 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 11 00:19:26.753320 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 11 00:19:26.763172 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 11 00:19:26.765834 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 11 00:19:26.782027 kernel: BTRFS info (device dm-0): first mount of filesystem 54fb9359-b495-4b0c-b313-b0e2955e4a38 Jul 11 00:19:26.782061 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 11 00:19:26.782073 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 11 00:19:26.783136 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 11 00:19:26.784737 kernel: BTRFS info (device dm-0): using free space tree Jul 11 00:19:26.790682 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 11 00:19:26.792589 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 11 00:19:26.799595 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 11 00:19:26.830827 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 11 00:19:26.836743 kernel: BTRFS info (device vda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe Jul 11 00:19:26.836782 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 00:19:26.836812 kernel: BTRFS info (device vda6): using free space tree Jul 11 00:19:26.840416 kernel: BTRFS info (device vda6): auto enabling async discard Jul 11 00:19:26.850506 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jul 11 00:19:26.853307 kernel: BTRFS info (device vda6): last unmount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe Jul 11 00:19:26.967276 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 11 00:19:26.980450 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 11 00:19:27.022609 systemd-networkd[780]: lo: Link UP Jul 11 00:19:27.022620 systemd-networkd[780]: lo: Gained carrier Jul 11 00:19:27.024373 systemd-networkd[780]: Enumeration completed Jul 11 00:19:27.024592 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 11 00:19:27.024931 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:19:27.024937 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 11 00:19:27.060972 systemd-networkd[780]: eth0: Link UP Jul 11 00:19:27.060977 systemd-networkd[780]: eth0: Gained carrier Jul 11 00:19:27.060992 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:19:27.062436 systemd[1]: Reached target network.target - Network. Jul 11 00:19:27.098350 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 00:19:27.223210 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 11 00:19:27.236609 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 11 00:19:27.304689 ignition[785]: Ignition 2.19.0 Jul 11 00:19:27.304703 ignition[785]: Stage: fetch-offline Jul 11 00:19:27.304745 ignition[785]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:19:27.304756 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:19:27.304907 ignition[785]: parsed url from cmdline: "" Jul 11 00:19:27.304911 ignition[785]: no config URL provided Jul 11 00:19:27.304917 ignition[785]: reading system config file "/usr/lib/ignition/user.ign" Jul 11 00:19:27.304927 ignition[785]: no config at "/usr/lib/ignition/user.ign" Jul 11 00:19:27.304958 ignition[785]: op(1): [started] loading QEMU firmware config module Jul 11 00:19:27.304963 ignition[785]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 11 00:19:27.314176 ignition[785]: op(1): [finished] loading QEMU firmware config module Jul 11 00:19:27.352453 ignition[785]: parsing config with SHA512: 0728c443e691bc560af2f17d42067d6e74e98defefa3acf493fc15baf17c9b5460230e80e8fdec02829c8165123b1a14db68626709fff4b2977598b63b4fd6ba Jul 11 00:19:27.361170 unknown[785]: fetched base config from "system" Jul 11 00:19:27.361193 unknown[785]: fetched user config from "qemu" Jul 11 00:19:27.361747 ignition[785]: fetch-offline: fetch-offline passed Jul 11 00:19:27.361610 systemd-resolved[238]: Detected conflict on linux IN A 10.0.0.89 Jul 11 00:19:27.361856 ignition[785]: Ignition finished successfully Jul 11 00:19:27.361621 systemd-resolved[238]: Hostname conflict, changing published hostname from 'linux' to 'linux8'. Jul 11 00:19:27.370338 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 11 00:19:27.372147 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 11 00:19:27.380535 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 11 00:19:27.402877 ignition[794]: Ignition 2.19.0 Jul 11 00:19:27.402891 ignition[794]: Stage: kargs Jul 11 00:19:27.403083 ignition[794]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:19:27.403095 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:19:27.406985 ignition[794]: kargs: kargs passed Jul 11 00:19:27.407047 ignition[794]: Ignition finished successfully Jul 11 00:19:27.412126 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 11 00:19:27.424503 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 11 00:19:27.440385 ignition[801]: Ignition 2.19.0 Jul 11 00:19:27.458324 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 11 00:19:27.440393 ignition[801]: Stage: disks Jul 11 00:19:27.460144 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 11 00:19:27.440582 ignition[801]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:19:27.461473 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 11 00:19:27.440594 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:19:27.461577 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 11 00:19:27.441451 ignition[801]: disks: disks passed Jul 11 00:19:27.461932 systemd[1]: Reached target sysinit.target - System Initialization. Jul 11 00:19:27.441501 ignition[801]: Ignition finished successfully Jul 11 00:19:27.462434 systemd[1]: Reached target basic.target - Basic System. Jul 11 00:19:27.463960 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 11 00:19:27.496583 systemd-fsck[812]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 11 00:19:27.577489 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 11 00:19:27.590806 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 11 00:19:27.725295 kernel: EXT4-fs (vda9): mounted filesystem 66ba5133-8c5a-461b-b2c1-a823c72af79b r/w with ordered data mode. Quota mode: none. Jul 11 00:19:27.726428 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 11 00:19:27.728187 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 11 00:19:27.741509 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 11 00:19:27.744847 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 11 00:19:27.747659 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 11 00:19:27.747736 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 11 00:19:27.747821 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 11 00:19:27.764800 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (820) Jul 11 00:19:27.764838 kernel: BTRFS info (device vda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe Jul 11 00:19:27.757693 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 11 00:19:27.772780 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 00:19:27.772826 kernel: BTRFS info (device vda6): using free space tree Jul 11 00:19:27.772022 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jul 11 00:19:27.779286 kernel: BTRFS info (device vda6): auto enabling async discard Jul 11 00:19:27.781585 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 11 00:19:27.912168 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory Jul 11 00:19:27.921381 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory Jul 11 00:19:27.927279 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory Jul 11 00:19:27.935716 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory Jul 11 00:19:28.094570 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 11 00:19:28.107468 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 11 00:19:28.142595 kernel: BTRFS info (device vda6): last unmount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe Jul 11 00:19:28.140595 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 11 00:19:28.143442 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 11 00:19:28.243994 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 11 00:19:28.262419 ignition[934]: INFO : Ignition 2.19.0 Jul 11 00:19:28.262419 ignition[934]: INFO : Stage: mount Jul 11 00:19:28.331845 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:19:28.331845 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:19:28.331845 ignition[934]: INFO : mount: mount passed Jul 11 00:19:28.331845 ignition[934]: INFO : Ignition finished successfully Jul 11 00:19:28.337806 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 11 00:19:28.350530 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 11 00:19:28.362209 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 11 00:19:28.379905 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (948) Jul 11 00:19:28.382573 kernel: BTRFS info (device vda6): first mount of filesystem 71430075-b555-475f-bdad-1d4a4e6e1dbe Jul 11 00:19:28.382656 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 11 00:19:28.382692 kernel: BTRFS info (device vda6): using free space tree Jul 11 00:19:28.413466 kernel: BTRFS info (device vda6): auto enabling async discard Jul 11 00:19:28.416274 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 11 00:19:28.445402 ignition[965]: INFO : Ignition 2.19.0 Jul 11 00:19:28.445402 ignition[965]: INFO : Stage: files Jul 11 00:19:28.469570 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:19:28.469570 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:19:28.469570 ignition[965]: DEBUG : files: compiled without relabeling support, skipping Jul 11 00:19:28.473833 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 11 00:19:28.473833 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 11 00:19:28.480126 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 11 00:19:28.481930 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 11 00:19:28.481930 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 11 00:19:28.481256 unknown[965]: wrote ssh authorized keys file for user: core Jul 11 00:19:28.487977 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 11 00:19:28.487977 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 11 00:19:28.526790 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 11 00:19:28.924680 systemd-networkd[780]: eth0: Gained IPv6LL Jul 11 00:19:29.209278 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 11 00:19:29.209278 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 11 00:19:29.214021 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 11 00:19:29.216124 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 11 00:19:29.218288 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 11 00:19:29.220344 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 00:19:29.222447 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 00:19:29.224599 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 00:19:29.227107 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 00:19:29.229483 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 00:19:29.231809 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 00:19:29.233854 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 11 00:19:29.236403 ignition[965]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 11 00:19:29.236403 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 11 00:19:29.241053 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 11 00:19:29.943290 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 11 00:19:31.099731 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 11 00:19:31.099731 ignition[965]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 11 00:19:31.212803 ignition[965]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 00:19:31.215980 ignition[965]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 00:19:31.215980 ignition[965]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 11 00:19:31.215980 ignition[965]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 11 00:19:31.221750 ignition[965]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:19:31.224093 ignition[965]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:19:31.224093 ignition[965]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 11 00:19:31.224093 ignition[965]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 11 00:19:31.272267 ignition[965]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:19:31.332540 ignition[965]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:19:31.335451 ignition[965]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 11 00:19:31.335451 ignition[965]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 11 00:19:31.335451 ignition[965]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 11 00:19:31.335451 ignition[965]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 11 00:19:31.335451 ignition[965]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 11 00:19:31.335451 ignition[965]: INFO : files: files passed Jul 11 00:19:31.335451 ignition[965]: INFO : Ignition finished successfully Jul 11 00:19:31.351401 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 11 00:19:31.361617 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 11 00:19:31.365361 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
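The files stage above downloads remote payloads (the helm tarball, the kubernetes sysext image) and numbers each try, as the "GET ...: attempt #1" lines show. A rough sketch of that retry pattern, assuming Python with an illustrative attempt limit and backoff; this is not Ignition's actual Go implementation:

    import time
    import urllib.request

    def fetch_with_retries(url: str, max_attempts: int = 5) -> bytes:
        for attempt in range(1, max_attempts + 1):
            try:
                print(f"GET {url}: attempt #{attempt}")
                with urllib.request.urlopen(url, timeout=30) as resp:
                    return resp.read()
            except OSError:
                if attempt == max_attempts:
                    raise
                time.sleep(2 ** attempt)  # illustrative backoff; real delays differ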
Jul 11 00:19:31.371697 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 11 00:19:31.373553 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 11 00:19:31.389772 initrd-setup-root-after-ignition[993]: grep: /sysroot/oem/oem-release: No such file or directory Jul 11 00:19:31.396332 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 11 00:19:31.396332 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 11 00:19:31.402034 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 11 00:19:31.405829 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 11 00:19:31.406704 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 11 00:19:31.426103 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 11 00:19:31.458611 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 11 00:19:31.458766 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 11 00:19:31.461571 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 11 00:19:31.463647 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 11 00:19:31.465801 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 11 00:19:31.468824 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 11 00:19:31.512479 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 11 00:19:31.519524 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 11 00:19:31.534286 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 11 00:19:31.537095 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:19:31.539821 systemd[1]: Stopped target timers.target - Timer Units. Jul 11 00:19:31.542002 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 11 00:19:31.543274 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 11 00:19:31.546197 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 11 00:19:31.548831 systemd[1]: Stopped target basic.target - Basic System. Jul 11 00:19:31.550998 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 11 00:19:31.553729 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 11 00:19:31.556400 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 11 00:19:31.566139 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 11 00:19:31.568611 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 11 00:19:31.571535 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 11 00:19:31.574064 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 11 00:19:31.576507 systemd[1]: Stopped target swap.target - Swaps. Jul 11 00:19:31.578491 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 11 00:19:31.579714 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 11 00:19:31.582185 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jul 11 00:19:31.584436 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:19:31.586819 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 11 00:19:31.587800 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:19:31.590402 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 11 00:19:31.591411 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 11 00:19:31.593855 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 11 00:19:31.594927 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 11 00:19:31.597296 systemd[1]: Stopped target paths.target - Path Units. Jul 11 00:19:31.599036 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 11 00:19:31.602334 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:19:31.605022 systemd[1]: Stopped target slices.target - Slice Units. Jul 11 00:19:31.606838 systemd[1]: Stopped target sockets.target - Socket Units. Jul 11 00:19:31.608720 systemd[1]: iscsid.socket: Deactivated successfully. Jul 11 00:19:31.609584 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 11 00:19:31.611614 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 11 00:19:31.612510 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 11 00:19:31.614610 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 11 00:19:31.615849 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 11 00:19:31.618344 systemd[1]: ignition-files.service: Deactivated successfully. Jul 11 00:19:31.619298 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 11 00:19:31.632450 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 11 00:19:31.635329 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 11 00:19:31.637443 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 11 00:19:31.638756 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:19:31.641566 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 11 00:19:31.642927 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 11 00:19:31.649462 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 11 00:19:31.650547 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 11 00:19:31.655134 ignition[1019]: INFO : Ignition 2.19.0 Jul 11 00:19:31.655134 ignition[1019]: INFO : Stage: umount Jul 11 00:19:31.756329 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:19:31.756329 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:19:31.756329 ignition[1019]: INFO : umount: umount passed Jul 11 00:19:31.756329 ignition[1019]: INFO : Ignition finished successfully Jul 11 00:19:31.757012 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 11 00:19:31.757219 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 11 00:19:31.758106 systemd[1]: Stopped target network.target - Network. Jul 11 00:19:31.761083 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 11 00:19:31.761150 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Jul 11 00:19:31.761724 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 11 00:19:31.761776 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 11 00:19:31.762075 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 11 00:19:31.762121 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 11 00:19:31.762630 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 11 00:19:31.762688 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 11 00:19:31.763186 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 11 00:19:31.814311 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 11 00:19:31.818358 systemd-networkd[780]: eth0: DHCPv6 lease lost Jul 11 00:19:31.827825 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 11 00:19:31.829593 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 11 00:19:31.831572 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 11 00:19:31.835745 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 11 00:19:31.837000 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 11 00:19:31.839987 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 11 00:19:31.841451 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 11 00:19:31.846418 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 11 00:19:31.846491 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:19:31.850010 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 11 00:19:31.850118 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 11 00:19:31.865404 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 11 00:19:31.866504 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 11 00:19:31.867528 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 11 00:19:31.871160 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 00:19:31.871223 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:19:31.874507 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 11 00:19:31.874576 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 11 00:19:31.876884 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 11 00:19:31.877927 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:19:31.881960 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 00:19:31.893552 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 11 00:19:31.901060 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 11 00:19:31.903741 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 11 00:19:31.904942 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:19:31.909137 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 11 00:19:31.909233 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 11 00:19:31.912576 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 11 00:19:31.912663 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 11 00:19:31.913928 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 11 00:19:31.913998 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 11 00:19:31.918049 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 11 00:19:31.918109 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 11 00:19:31.921481 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 11 00:19:31.921559 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:19:31.932890 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 11 00:19:31.935333 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 11 00:19:31.935452 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:19:31.938338 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 11 00:19:31.938412 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 11 00:19:31.939733 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 11 00:19:31.939800 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:19:31.943922 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:19:31.943983 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:19:31.947224 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 11 00:19:31.947394 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 11 00:19:31.949665 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 11 00:19:31.974640 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 11 00:19:31.983219 systemd[1]: Switching root. Jul 11 00:19:32.023061 systemd-journald[193]: Journal stopped Jul 11 00:19:34.442934 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Jul 11 00:19:34.443007 kernel: SELinux: policy capability network_peer_controls=1 Jul 11 00:19:34.443033 kernel: SELinux: policy capability open_perms=1 Jul 11 00:19:34.443048 kernel: SELinux: policy capability extended_socket_class=1 Jul 11 00:19:34.443064 kernel: SELinux: policy capability always_check_network=0 Jul 11 00:19:34.443076 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 11 00:19:34.443087 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 11 00:19:34.443099 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 11 00:19:34.443116 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 11 00:19:34.443130 kernel: audit: type=1403 audit(1752193173.252:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 11 00:19:34.443152 systemd[1]: Successfully loaded SELinux policy in 42.268ms. Jul 11 00:19:34.443182 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.146ms. Jul 11 00:19:34.443200 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 11 00:19:34.443215 systemd[1]: Detected virtualization kvm. Jul 11 00:19:34.443230 systemd[1]: Detected architecture x86-64. 
Jul 11 00:19:34.443267 systemd[1]: Detected first boot. Jul 11 00:19:34.443284 systemd[1]: Initializing machine ID from VM UUID. Jul 11 00:19:34.443299 zram_generator::config[1063]: No configuration found. Jul 11 00:19:34.443322 systemd[1]: Populated /etc with preset unit settings. Jul 11 00:19:34.443347 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 11 00:19:34.443362 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 11 00:19:34.443378 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 11 00:19:34.443394 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 11 00:19:34.443420 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 11 00:19:34.443434 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 11 00:19:34.443490 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 11 00:19:34.443528 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 11 00:19:34.443558 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 11 00:19:34.443590 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 11 00:19:34.443606 systemd[1]: Created slice user.slice - User and Session Slice. Jul 11 00:19:34.443622 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:19:34.443639 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:19:34.443652 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 11 00:19:34.443669 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 11 00:19:34.443682 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 11 00:19:34.443695 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 11 00:19:34.443709 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 11 00:19:34.443725 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:19:34.443738 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 11 00:19:34.443750 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 11 00:19:34.443763 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 11 00:19:34.443775 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 11 00:19:34.443787 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:19:34.443800 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 11 00:19:34.443814 systemd[1]: Reached target slices.target - Slice Units. Jul 11 00:19:34.443827 systemd[1]: Reached target swap.target - Swaps. Jul 11 00:19:34.443839 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 11 00:19:34.443851 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 11 00:19:34.443864 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:19:34.443876 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
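"Initializing machine ID from VM UUID" above means systemd seeded /etc/machine-id from the hypervisor-provided DMI UUID on this first boot. A sketch of the derivation, assuming Python and the usual KVM sysfs path; the normalization shown (dashes stripped, lower-cased) is a simplification of what systemd does:

    def machine_id_from_vm_uuid() -> str:
        # QEMU/KVM exposes the VM UUID via DMI; a machine ID is 32 lower-case
        # hex characters, i.e. the UUID without its dashes.
        with open("/sys/class/dmi/id/product_uuid") as f:
            return f.read().strip().replace("-", "").lower()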
Jul 11 00:19:34.443888 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:19:34.443900 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 11 00:19:34.443912 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 11 00:19:34.443925 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 11 00:19:34.443943 systemd[1]: Mounting media.mount - External Media Directory... Jul 11 00:19:34.443955 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:19:34.443967 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 11 00:19:34.443982 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 11 00:19:34.443994 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 11 00:19:34.444006 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 11 00:19:34.444018 systemd[1]: Reached target machines.target - Containers. Jul 11 00:19:34.444030 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 11 00:19:34.444048 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:19:34.444061 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 11 00:19:34.444073 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 11 00:19:34.444085 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:19:34.444097 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 11 00:19:34.444109 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:19:34.444121 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 11 00:19:34.444133 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:19:34.444148 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 11 00:19:34.444161 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 11 00:19:34.444173 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 11 00:19:34.444184 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 11 00:19:34.444196 systemd[1]: Stopped systemd-fsck-usr.service. Jul 11 00:19:34.444208 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 11 00:19:34.444220 kernel: loop: module loaded Jul 11 00:19:34.444232 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 11 00:19:34.444286 systemd-journald[1126]: Collecting audit messages is disabled. Jul 11 00:19:34.444315 systemd-journald[1126]: Journal started Jul 11 00:19:34.444337 systemd-journald[1126]: Runtime Journal (/run/log/journal/b3763c240d1a4772948f3594dbdd2d90) is 6.0M, max 48.3M, 42.2M free. Jul 11 00:19:34.124698 systemd[1]: Queued start job for default target multi-user.target. Jul 11 00:19:34.141911 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 11 00:19:34.142427 systemd[1]: systemd-journald.service: Deactivated successfully. 
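The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop services being started above are all instances of one template unit; the text after "@" names the module to load (the template's ExecStart is roughly "modprobe -abq %I"; treat the exact flags as an assumption). A sketch of that name-to-command mapping, assuming Python 3.9+:

    def instance_command(unit: str) -> str:
        # e.g. "modprobe@dm_mod.service": everything after "@" (minus the
        # ".service" suffix) names the kernel module to load.
        _template, instance = unit.split("@", 1)
        module = instance.removesuffix(".service")
        return f"modprobe -abq {module}"

    # instance_command("modprobe@efi_pstore.service") == "modprobe -abq efi_pstore"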
Jul 11 00:19:34.447338 kernel: fuse: init (API version 7.39) Jul 11 00:19:34.449596 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 11 00:19:34.459530 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 11 00:19:34.462803 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 11 00:19:34.464958 systemd[1]: verity-setup.service: Deactivated successfully. Jul 11 00:19:34.464984 systemd[1]: Stopped verity-setup.service. Jul 11 00:19:34.467262 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:19:34.470464 systemd[1]: Started systemd-journald.service - Journal Service. Jul 11 00:19:34.471321 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 11 00:19:34.472639 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 11 00:19:34.473893 systemd[1]: Mounted media.mount - External Media Directory. Jul 11 00:19:34.475306 kernel: ACPI: bus type drm_connector registered Jul 11 00:19:34.475560 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 11 00:19:34.476931 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 11 00:19:34.478197 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 11 00:19:34.479573 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:19:34.481135 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 11 00:19:34.481335 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 11 00:19:34.495338 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:19:34.495525 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:19:34.497016 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:19:34.497203 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 11 00:19:34.498620 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:19:34.498796 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:19:34.500413 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 11 00:19:34.500602 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 11 00:19:34.502004 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:19:34.502181 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:19:34.503602 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 11 00:19:34.513883 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 11 00:19:34.534748 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 11 00:19:34.543736 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 11 00:19:34.557414 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 11 00:19:34.587605 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 11 00:19:34.589264 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 11 00:19:34.589418 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jul 11 00:19:34.592674 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 11 00:19:34.626235 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 11 00:19:34.631156 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 11 00:19:34.632549 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:19:34.636933 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 11 00:19:34.638459 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 11 00:19:34.659912 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:19:34.661519 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 11 00:19:34.662811 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 11 00:19:34.664493 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:19:34.666855 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 11 00:19:34.695741 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 11 00:19:34.699013 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:19:34.700622 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 11 00:19:34.725522 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 11 00:19:34.735690 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 11 00:19:34.774341 systemd-journald[1126]: Time spent on flushing to /var/log/journal/b3763c240d1a4772948f3594dbdd2d90 is 20.159ms for 1002 entries. Jul 11 00:19:34.774341 systemd-journald[1126]: System Journal (/var/log/journal/b3763c240d1a4772948f3594dbdd2d90) is 8.0M, max 195.6M, 187.6M free. Jul 11 00:19:34.777467 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 11 00:19:34.808823 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 11 00:19:34.820867 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:19:34.825903 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Jul 11 00:19:34.825917 systemd-tmpfiles[1170]: ACLs are not supported, ignoring. Jul 11 00:19:34.905762 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 11 00:19:34.917648 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 11 00:19:34.919875 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 11 00:19:34.927560 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 11 00:19:34.942174 systemd-journald[1126]: Received client request to flush runtime journal. Jul 11 00:19:34.942279 kernel: loop0: detected capacity change from 0 to 142488 Jul 11 00:19:34.942308 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 11 00:19:34.942328 kernel: loop1: detected capacity change from 0 to 140768
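The journald flush report above (20.159ms for 1002 entries) works out to roughly 20 microseconds per journal entry:

    ms_total, entries = 20.159, 1002
    print(f"{ms_total / entries * 1000:.1f} us per entry")  # ~20.1 us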
Jul 11 00:19:34.942468 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 11 00:19:34.949409 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 11 00:19:34.952046 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 11 00:19:35.094399 kernel: loop2: detected capacity change from 0 to 229808 Jul 11 00:19:35.123950 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 11 00:19:35.125041 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 11 00:19:35.141397 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 11 00:19:35.145269 kernel: loop3: detected capacity change from 0 to 142488 Jul 11 00:19:35.164848 kernel: loop4: detected capacity change from 0 to 140768 Jul 11 00:19:35.164757 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 11 00:19:35.186341 kernel: loop5: detected capacity change from 0 to 229808 Jul 11 00:19:35.199428 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 11 00:19:35.202342 (sd-merge)[1200]: Merged extensions into '/usr'. Jul 11 00:19:35.208533 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)... Jul 11 00:19:35.208550 systemd[1]: Reloading... Jul 11 00:19:35.211662 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Jul 11 00:19:35.211682 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Jul 11 00:19:35.419350 zram_generator::config[1233]: No configuration found. Jul 11 00:19:35.626261 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:19:35.669170 ldconfig[1160]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 11 00:19:35.698986 systemd[1]: Reloading finished in 489 ms. Jul 11 00:19:35.745574 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 11 00:19:35.747711 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 11 00:19:35.749899 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:19:35.806944 systemd[1]: Starting ensure-sysext.service... Jul 11 00:19:35.810599 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 11 00:19:35.828472 systemd[1]: Reloading requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)... Jul 11 00:19:35.828722 systemd[1]: Reloading... Jul 11 00:19:35.865644 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 11 00:19:35.866128 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 11 00:19:35.867960 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 11 00:19:35.868462 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Jul 11 00:19:35.868661 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Jul 11 00:19:35.873578 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. 
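The (sd-merge) lines above are systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images onto /usr. Before merging, each image's extension-release file must be compatible with the host's os-release. A simplified sketch of that check, assuming Python; the real logic also honors SYSEXT_LEVEL and VERSION_ID:

    def parse_release(text: str) -> dict:
        # KEY=VALUE lines, as in os-release / extension-release files.
        pairs = (line.split("=", 1) for line in text.splitlines() if "=" in line)
        return {k: v.strip().strip('"') for k, v in pairs}

    def extension_compatible(host_os_release: str, extension_release: str) -> bool:
        host, ext = parse_release(host_os_release), parse_release(extension_release)
        return ext.get("ID") == host.get("ID")  # simplified match rule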
Jul 11 00:19:35.873707 systemd-tmpfiles[1269]: Skipping /boot Jul 11 00:19:35.894525 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. Jul 11 00:19:35.895576 systemd-tmpfiles[1269]: Skipping /boot Jul 11 00:19:35.979494 zram_generator::config[1304]: No configuration found. Jul 11 00:19:36.083988 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:19:36.143317 systemd[1]: Reloading finished in 313 ms. Jul 11 00:19:36.176987 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:19:36.189950 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 11 00:19:36.205628 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 11 00:19:36.210580 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 11 00:19:36.216511 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 11 00:19:36.220711 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 11 00:19:36.224009 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:19:36.224188 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:19:36.227145 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:19:36.234540 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:19:36.237017 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:19:36.238606 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:19:36.238798 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:19:36.245567 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 11 00:19:36.248103 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:19:36.249400 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:19:36.254784 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:19:36.255592 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:19:36.261794 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:19:36.262055 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:19:36.264843 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 11 00:19:36.277129 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 11 00:19:36.280365 augenrules[1360]: No rules Jul 11 00:19:36.282149 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 11 00:19:36.288406 systemd[1]: Finished ensure-sysext.service. Jul 11 00:19:36.290155 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 11 00:19:36.296773 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
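The "Duplicate line for path ..., ignoring" warnings above mean two tmpfiles.d fragments claim the same path; systemd-tmpfiles keeps the first line it parses and drops the rest. A toy sketch of that first-wins behaviour over tmpfiles-style lines (fields: type path mode user group age argument), assuming Python:

    def dedupe_tmpfiles(lines: list[str]) -> list[str]:
        seen, kept = set(), []
        for line in lines:
            path = line.split()[1]  # second field is the path
            if path in seen:
                print(f'Duplicate line for path "{path}", ignoring.')
                continue
            seen.add(path)
            kept.append(line)
        return kept

    # dedupe_tmpfiles(["d /root 0700 root root -", "d /root 0750 root root -"])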
Jul 11 00:19:36.297176 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:19:36.303947 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:19:36.310502 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 11 00:19:36.316422 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:19:36.334718 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:19:36.336210 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:19:36.341048 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 11 00:19:36.344649 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 00:19:36.348467 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 11 00:19:36.351014 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 11 00:19:36.351816 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 11 00:19:36.362093 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 11 00:19:36.364411 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:19:36.364676 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:19:36.366367 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:19:36.366609 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 11 00:19:36.368271 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:19:36.368538 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:19:36.370860 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:19:36.371189 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:19:36.379133 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:19:36.379644 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 11 00:19:36.379821 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 00:19:36.390327 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 11 00:19:36.404501 systemd-udevd[1380]: Using default interface naming scheme 'v255'. Jul 11 00:19:36.442836 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:19:36.446736 systemd-resolved[1343]: Positive Trust Anchors: Jul 11 00:19:36.446767 systemd-resolved[1343]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 00:19:36.446809 systemd-resolved[1343]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 11 00:19:36.457271 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 11 00:19:36.460511 systemd-resolved[1343]: Defaulting to hostname 'linux'. Jul 11 00:19:36.463157 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 11 00:19:36.470750 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 11 00:19:36.500607 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 11 00:19:36.502975 systemd[1]: Reached target time-set.target - System Time Set. Jul 11 00:19:36.507119 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 11 00:19:36.625016 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1404) Jul 11 00:19:36.639568 systemd-networkd[1394]: lo: Link UP Jul 11 00:19:36.639944 systemd-networkd[1394]: lo: Gained carrier Jul 11 00:19:36.640940 systemd-networkd[1394]: Enumeration completed Jul 11 00:19:36.643797 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 11 00:19:36.654738 systemd[1]: Reached target network.target - Network. Jul 11 00:19:36.670610 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 11 00:19:36.674860 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:19:36.674872 systemd-networkd[1394]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 11 00:19:36.689232 systemd-networkd[1394]: eth0: Link UP Jul 11 00:19:36.689250 systemd-networkd[1394]: eth0: Gained carrier Jul 11 00:19:36.689265 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:19:36.703272 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 11 00:19:36.747962 kernel: ACPI: button: Power Button [PWRF] Jul 11 00:19:36.746361 systemd-networkd[1394]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 00:19:36.747378 systemd-timesyncd[1378]: Network configuration changed, trying to establish connection. Jul 11 00:19:37.377296 systemd-timesyncd[1378]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 11 00:19:37.377359 systemd-timesyncd[1378]: Initial clock synchronization to Fri 2025-07-11 00:19:37.377168 UTC. Jul 11 00:19:37.377842 systemd-resolved[1343]: Clock change detected. Flushing caches. Jul 11 00:19:37.393117 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 11 00:19:37.406061 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
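The positive trust anchor systemd-resolved lists above is the IANA root-zone DS record for the 2017 key signing key, decomposed here into its RFC 4034 fields:

    # Fields of the root trust anchor logged above.
    ROOT_DS = {
        "owner": ".",
        "key_tag": 20326,   # KSK-2017
        "algorithm": 8,     # RSA/SHA-256
        "digest_type": 2,   # SHA-256 of the DNSKEY record
        "digest": "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d",
    }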
Jul 11 00:19:37.425921 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:19:37.464025 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 11 00:19:37.465583 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:19:37.465850 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:19:37.469765 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:19:37.476277 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 11 00:19:37.495236 kernel: mousedev: PS/2 mouse device common for all mice Jul 11 00:19:37.541800 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 11 00:19:37.542260 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 11 00:19:37.544312 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 11 00:19:37.544678 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 11 00:19:37.568820 kernel: kvm_amd: TSC scaling supported Jul 11 00:19:37.568926 kernel: kvm_amd: Nested Virtualization enabled Jul 11 00:19:37.568946 kernel: kvm_amd: Nested Paging enabled Jul 11 00:19:37.569318 kernel: kvm_amd: LBR virtualization supported Jul 11 00:19:37.570725 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 11 00:19:37.570762 kernel: kvm_amd: Virtual GIF supported Jul 11 00:19:37.618163 kernel: EDAC MC: Ver: 3.0.0 Jul 11 00:19:37.634620 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:19:37.662745 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 11 00:19:37.691553 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 11 00:19:37.717534 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 11 00:19:37.753374 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 11 00:19:37.773916 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 11 00:19:37.775627 systemd[1]: Reached target sysinit.target - System Initialization. Jul 11 00:19:37.777940 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 11 00:19:37.779834 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 11 00:19:37.782156 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 11 00:19:37.783990 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 11 00:19:37.785630 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 11 00:19:37.787058 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 11 00:19:37.787110 systemd[1]: Reached target paths.target - Path Units. Jul 11 00:19:37.788147 systemd[1]: Reached target timers.target - Timer Units. Jul 11 00:19:37.790613 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 11 00:19:37.794191 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 11 00:19:37.821228 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 11 00:19:37.838695 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Jul 11 00:19:37.840716 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 11 00:19:37.842161 systemd[1]: Reached target sockets.target - Socket Units. Jul 11 00:19:37.843376 systemd[1]: Reached target basic.target - Basic System. Jul 11 00:19:37.865029 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 11 00:19:37.865091 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 11 00:19:37.866684 systemd[1]: Starting containerd.service - containerd container runtime... Jul 11 00:19:37.869521 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 11 00:19:37.871172 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 11 00:19:37.902478 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 11 00:19:37.906375 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 11 00:19:37.927887 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 11 00:19:37.930194 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 11 00:19:37.933467 jq[1444]: false Jul 11 00:19:37.934907 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 11 00:19:37.940380 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 11 00:19:37.944343 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 11 00:19:37.951460 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 11 00:19:37.952297 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 11 00:19:37.952984 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 11 00:19:37.956438 systemd[1]: Starting update-engine.service - Update Engine... Jul 11 00:19:37.958760 extend-filesystems[1445]: Found loop3 Jul 11 00:19:38.007586 extend-filesystems[1445]: Found loop4 Jul 11 00:19:38.007586 extend-filesystems[1445]: Found loop5 Jul 11 00:19:38.007586 extend-filesystems[1445]: Found sr0 Jul 11 00:19:38.007586 extend-filesystems[1445]: Found vda Jul 11 00:19:38.007586 extend-filesystems[1445]: Found vda1 Jul 11 00:19:38.007586 extend-filesystems[1445]: Found vda2 Jul 11 00:19:38.007586 extend-filesystems[1445]: Found vda3 Jul 11 00:19:38.007586 extend-filesystems[1445]: Found usr Jul 11 00:19:38.007586 extend-filesystems[1445]: Found vda4 Jul 11 00:19:38.007586 extend-filesystems[1445]: Found vda6 Jul 11 00:19:38.007586 extend-filesystems[1445]: Found vda7 Jul 11 00:19:38.007586 extend-filesystems[1445]: Found vda9 Jul 11 00:19:38.007586 extend-filesystems[1445]: Checking size of /dev/vda9 Jul 11 00:19:38.011641 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 11 00:19:38.053297 extend-filesystems[1445]: Resized partition /dev/vda9 Jul 11 00:19:38.090068 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1393) Jul 11 00:19:38.090756 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 11 00:19:38.019142 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
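The EXT4 resize message above grows the root filesystem from 553472 to 1864699 blocks so it fills the resized /dev/vda9 partition; with 4096-byte blocks that is about 5 GiB of new space:

    # Numbers from the resize messages above (4096-byte blocks).
    old_blocks, new_blocks = 553472, 1864699
    grown = (new_blocks - old_blocks) * 4096
    print(f"grown by {grown / 2**30:.2f} GiB")  # ~5.00 GiB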
Jul 11 00:19:38.090854 update_engine[1455]: I20250711 00:19:38.028388 1455 main.cc:92] Flatcar Update Engine starting Jul 11 00:19:38.090854 update_engine[1455]: I20250711 00:19:38.080547 1455 update_check_scheduler.cc:74] Next update check in 5m21s Jul 11 00:19:38.061843 dbus-daemon[1443]: [system] SELinux support is enabled Jul 11 00:19:38.091683 extend-filesystems[1467]: resize2fs 1.47.1 (20-May-2024) Jul 11 00:19:38.047757 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 11 00:19:38.048048 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 11 00:19:38.096474 jq[1461]: true Jul 11 00:19:38.048483 systemd[1]: motdgen.service: Deactivated successfully. Jul 11 00:19:38.048700 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 11 00:19:38.060586 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 11 00:19:38.097384 jq[1470]: true Jul 11 00:19:38.060882 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 11 00:19:38.069862 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 11 00:19:38.097786 (ntainerd)[1471]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 11 00:19:38.115265 systemd[1]: Started update-engine.service - Update Engine. Jul 11 00:19:38.117068 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 11 00:19:38.117123 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 11 00:19:38.118725 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 11 00:19:38.118758 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 11 00:19:38.138468 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 11 00:19:38.168111 systemd-logind[1453]: Watching system buttons on /dev/input/event1 (Power Button) Jul 11 00:19:38.168141 systemd-logind[1453]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 11 00:19:38.169142 systemd-logind[1453]: New seat seat0. Jul 11 00:19:38.182374 systemd[1]: Started systemd-logind.service - User Login Management. Jul 11 00:19:38.272587 locksmithd[1495]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 11 00:19:38.273613 sshd_keygen[1463]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 11 00:19:38.276436 tar[1468]: linux-amd64/LICENSE Jul 11 00:19:38.276436 tar[1468]: linux-amd64/helm Jul 11 00:19:38.309246 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 11 00:19:38.315462 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 11 00:19:38.353891 systemd[1]: issuegen.service: Deactivated successfully. Jul 11 00:19:38.354201 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 11 00:19:38.361343 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jul 11 00:19:38.453124 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 11 00:19:38.504527 extend-filesystems[1467]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 11 00:19:38.504527 extend-filesystems[1467]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 11 00:19:38.504527 extend-filesystems[1467]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 11 00:19:38.508361 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 11 00:19:38.508965 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 11 00:19:38.510198 extend-filesystems[1445]: Resized filesystem in /dev/vda9 Jul 11 00:19:38.511822 bash[1496]: Updated "/home/core/.ssh/authorized_keys" Jul 11 00:19:38.513744 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 11 00:19:38.516350 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 11 00:19:38.530820 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 11 00:19:38.536182 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 11 00:19:38.537937 systemd[1]: Reached target getty.target - Login Prompts. Jul 11 00:19:38.540032 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 11 00:19:38.577380 systemd-networkd[1394]: eth0: Gained IPv6LL Jul 11 00:19:38.604766 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 11 00:19:38.607567 systemd[1]: Reached target network-online.target - Network is Online. Jul 11 00:19:38.622468 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 11 00:19:38.626168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:19:38.634347 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 11 00:19:38.692416 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 11 00:19:38.698925 systemd[1]: Started sshd@0-10.0.0.89:22-10.0.0.1:37030.service - OpenSSH per-connection server daemon (10.0.0.1:37030). Jul 11 00:19:38.702804 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 11 00:19:38.734102 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 11 00:19:38.734701 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 11 00:19:38.742846 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 11 00:19:38.987201 sshd[1539]: Accepted publickey for core from 10.0.0.1 port 37030 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:19:38.991553 sshd[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:19:39.001655 containerd[1471]: time="2025-07-11T00:19:39.001501368Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 11 00:19:39.082515 containerd[1471]: time="2025-07-11T00:19:39.082449823Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:19:39.085844 containerd[1471]: time="2025-07-11T00:19:39.085799698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:19:39.085938 containerd[1471]: time="2025-07-11T00:19:39.085921747Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 11 00:19:39.085994 containerd[1471]: time="2025-07-11T00:19:39.085981579Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 11 00:19:39.087890 containerd[1471]: time="2025-07-11T00:19:39.086361352Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 11 00:19:39.087890 containerd[1471]: time="2025-07-11T00:19:39.086384335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 11 00:19:39.087890 containerd[1471]: time="2025-07-11T00:19:39.086481096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:19:39.087890 containerd[1471]: time="2025-07-11T00:19:39.086497667Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:19:39.087890 containerd[1471]: time="2025-07-11T00:19:39.086741435Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:19:39.087890 containerd[1471]: time="2025-07-11T00:19:39.086757224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 11 00:19:39.087890 containerd[1471]: time="2025-07-11T00:19:39.086771040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:19:39.087890 containerd[1471]: time="2025-07-11T00:19:39.086781289Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 11 00:19:39.087890 containerd[1471]: time="2025-07-11T00:19:39.086924228Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:19:39.087890 containerd[1471]: time="2025-07-11T00:19:39.087351499Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:19:39.087890 containerd[1471]: time="2025-07-11T00:19:39.087554640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:19:39.088215 containerd[1471]: time="2025-07-11T00:19:39.087602029Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 11 00:19:39.088215 containerd[1471]: time="2025-07-11T00:19:39.087754986Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 11 00:19:39.088215 containerd[1471]: time="2025-07-11T00:19:39.087844133Z" level=info msg="metadata content store policy set" policy=shared Jul 11 00:19:39.136726 systemd-logind[1453]: New session 1 of user core. Jul 11 00:19:39.138621 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 11 00:19:39.172095 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 11 00:19:39.182245 containerd[1471]: time="2025-07-11T00:19:39.182179333Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 11 00:19:39.182361 containerd[1471]: time="2025-07-11T00:19:39.182306572Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 11 00:19:39.182361 containerd[1471]: time="2025-07-11T00:19:39.182334003Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 11 00:19:39.182361 containerd[1471]: time="2025-07-11T00:19:39.182362096Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 11 00:19:39.182442 containerd[1471]: time="2025-07-11T00:19:39.182418672Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 11 00:19:39.182715 containerd[1471]: time="2025-07-11T00:19:39.182676355Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 11 00:19:39.183430 containerd[1471]: time="2025-07-11T00:19:39.183398751Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 11 00:19:39.183723 containerd[1471]: time="2025-07-11T00:19:39.183699144Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 11 00:19:39.183808 containerd[1471]: time="2025-07-11T00:19:39.183788973Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 11 00:19:39.183883 containerd[1471]: time="2025-07-11T00:19:39.183864915Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 11 00:19:39.183959 containerd[1471]: time="2025-07-11T00:19:39.183940818Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 11 00:19:39.184042 containerd[1471]: time="2025-07-11T00:19:39.184022771Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 11 00:19:39.184153 containerd[1471]: time="2025-07-11T00:19:39.184133168Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 11 00:19:39.184257 containerd[1471]: time="2025-07-11T00:19:39.184236753Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 11 00:19:39.184355 containerd[1471]: time="2025-07-11T00:19:39.184335668Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 11 00:19:39.184427 containerd[1471]: time="2025-07-11T00:19:39.184410869Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jul 11 00:19:39.184508 containerd[1471]: time="2025-07-11T00:19:39.184490008Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 11 00:19:39.184588 containerd[1471]: time="2025-07-11T00:19:39.184571361Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 11 00:19:39.184681 containerd[1471]: time="2025-07-11T00:19:39.184662572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 11 00:19:39.184802 containerd[1471]: time="2025-07-11T00:19:39.184779782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 11 00:19:39.185141 containerd[1471]: time="2025-07-11T00:19:39.185116333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 11 00:19:39.185249 containerd[1471]: time="2025-07-11T00:19:39.185227672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 11 00:19:39.185341 containerd[1471]: time="2025-07-11T00:19:39.185321378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 11 00:19:39.185465 containerd[1471]: time="2025-07-11T00:19:39.185441553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 11 00:19:39.185537 containerd[1471]: time="2025-07-11T00:19:39.185522525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 11 00:19:39.185617 containerd[1471]: time="2025-07-11T00:19:39.185600251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 11 00:19:39.185695 containerd[1471]: time="2025-07-11T00:19:39.185676744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 11 00:19:39.185802 containerd[1471]: time="2025-07-11T00:19:39.185780228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 11 00:19:39.185907 containerd[1471]: time="2025-07-11T00:19:39.185890325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 11 00:19:39.185976 containerd[1471]: time="2025-07-11T00:19:39.185962060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 11 00:19:39.186061 containerd[1471]: time="2025-07-11T00:19:39.186043031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 11 00:19:39.186161 containerd[1471]: time="2025-07-11T00:19:39.186143700Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 11 00:19:39.186715 containerd[1471]: time="2025-07-11T00:19:39.186690857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 11 00:19:39.186806 containerd[1471]: time="2025-07-11T00:19:39.186788851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 11 00:19:39.186983 containerd[1471]: time="2025-07-11T00:19:39.186958599Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Jul 11 00:19:39.187156 containerd[1471]: time="2025-07-11T00:19:39.187134910Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 11 00:19:39.187395 containerd[1471]: time="2025-07-11T00:19:39.187371033Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 11 00:19:39.187456 containerd[1471]: time="2025-07-11T00:19:39.187442908Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 11 00:19:39.187510 containerd[1471]: time="2025-07-11T00:19:39.187495567Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 11 00:19:39.187571 containerd[1471]: time="2025-07-11T00:19:39.187554447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 11 00:19:39.187672 containerd[1471]: time="2025-07-11T00:19:39.187652401Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 11 00:19:39.187758 containerd[1471]: time="2025-07-11T00:19:39.187741618Z" level=info msg="NRI interface is disabled by configuration." Jul 11 00:19:39.187833 containerd[1471]: time="2025-07-11T00:19:39.187813703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 11 00:19:39.188571 containerd[1471]: time="2025-07-11T00:19:39.188489090Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 11 00:19:39.188931 containerd[1471]: time="2025-07-11T00:19:39.188899420Z" level=info msg="Connect containerd service" Jul 11 00:19:39.189092 containerd[1471]: time="2025-07-11T00:19:39.189056154Z" level=info msg="using legacy CRI server" Jul 11 00:19:39.189180 containerd[1471]: time="2025-07-11T00:19:39.189160570Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 11 00:19:39.189489 containerd[1471]: time="2025-07-11T00:19:39.189463929Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 11 00:19:39.190798 containerd[1471]: time="2025-07-11T00:19:39.190767816Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:19:39.191133 containerd[1471]: time="2025-07-11T00:19:39.191045747Z" level=info msg="Start subscribing containerd event" Jul 11 00:19:39.193710 containerd[1471]: time="2025-07-11T00:19:39.191629422Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 00:19:39.193860 containerd[1471]: time="2025-07-11T00:19:39.193840450Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 11 00:19:39.195782 containerd[1471]: time="2025-07-11T00:19:39.195753770Z" level=info msg="Start recovering state" Jul 11 00:19:39.198214 containerd[1471]: time="2025-07-11T00:19:39.198194028Z" level=info msg="Start event monitor" Jul 11 00:19:39.198315 containerd[1471]: time="2025-07-11T00:19:39.198287403Z" level=info msg="Start snapshots syncer" Jul 11 00:19:39.198387 containerd[1471]: time="2025-07-11T00:19:39.198373514Z" level=info msg="Start cni network conf syncer for default" Jul 11 00:19:39.198446 containerd[1471]: time="2025-07-11T00:19:39.198431934Z" level=info msg="Start streaming server" Jul 11 00:19:39.198907 systemd[1]: Started containerd.service - containerd container runtime. Jul 11 00:19:39.199195 containerd[1471]: time="2025-07-11T00:19:39.199177633Z" level=info msg="containerd successfully booted in 0.199710s" Jul 11 00:19:39.228442 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 11 00:19:39.262390 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 11 00:19:39.271660 (systemd)[1552]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:19:39.339003 tar[1468]: linux-amd64/README.md Jul 11 00:19:39.443760 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 11 00:19:39.575385 systemd[1552]: Queued start job for default target default.target. Jul 11 00:19:39.602214 systemd[1552]: Created slice app.slice - User Application Slice. Jul 11 00:19:39.602248 systemd[1552]: Reached target paths.target - Paths. 
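containerd comes up with the CRI configuration dumped above (overlayfs snapshotter, the runc runtime with SystemdCgroup:true) and reports serving on /run/containerd/containerd.sock before systemd marks the unit started. A hedged way to confirm that endpoint answers, assuming the crictl CLI is installed, which nothing in this log shows:

```python
# Query the CRI endpoint containerd advertises above.
# Assumption: crictl is present on the host; this log does not confirm that.
import subprocess

result = subprocess.run(
    ["crictl", "--runtime-endpoint", "unix:///run/containerd/containerd.sock", "version"],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)
```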
Jul 11 00:19:39.602264 systemd[1552]: Reached target timers.target - Timers. Jul 11 00:19:39.604338 systemd[1552]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 11 00:19:39.620014 systemd[1552]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 11 00:19:39.620263 systemd[1552]: Reached target sockets.target - Sockets. Jul 11 00:19:39.620340 systemd[1552]: Reached target basic.target - Basic System. Jul 11 00:19:39.620424 systemd[1552]: Reached target default.target - Main User Target. Jul 11 00:19:39.620498 systemd[1552]: Startup finished in 337ms. Jul 11 00:19:39.620599 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 11 00:19:39.633415 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 11 00:19:39.715098 systemd[1]: Started sshd@1-10.0.0.89:22-10.0.0.1:37034.service - OpenSSH per-connection server daemon (10.0.0.1:37034). Jul 11 00:19:39.772301 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 37034 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:19:39.774756 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:19:39.782980 systemd-logind[1453]: New session 2 of user core. Jul 11 00:19:39.797564 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 11 00:19:39.870994 sshd[1566]: pam_unix(sshd:session): session closed for user core Jul 11 00:19:39.890346 systemd[1]: sshd@1-10.0.0.89:22-10.0.0.1:37034.service: Deactivated successfully. Jul 11 00:19:39.893460 systemd[1]: session-2.scope: Deactivated successfully. Jul 11 00:19:39.900977 systemd-logind[1453]: Session 2 logged out. Waiting for processes to exit. Jul 11 00:19:39.920825 systemd[1]: Started sshd@2-10.0.0.89:22-10.0.0.1:37040.service - OpenSSH per-connection server daemon (10.0.0.1:37040). Jul 11 00:19:39.924697 systemd-logind[1453]: Removed session 2. Jul 11 00:19:39.983187 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 37040 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:19:39.988224 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:19:39.994311 systemd-logind[1453]: New session 3 of user core. Jul 11 00:19:40.105520 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 11 00:19:40.180301 sshd[1573]: pam_unix(sshd:session): session closed for user core Jul 11 00:19:40.215924 systemd[1]: sshd@2-10.0.0.89:22-10.0.0.1:37040.service: Deactivated successfully. Jul 11 00:19:40.219619 systemd[1]: session-3.scope: Deactivated successfully. Jul 11 00:19:40.221337 systemd-logind[1453]: Session 3 logged out. Waiting for processes to exit. Jul 11 00:19:40.236673 systemd-logind[1453]: Removed session 3. Jul 11 00:19:41.049066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:19:41.051920 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 11 00:19:41.055013 systemd[1]: Startup finished in 1.229s (kernel) + 9.491s (initrd) + 7.214s (userspace) = 17.934s. 
Jul 11 00:19:41.056433 (kubelet)[1584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:19:42.511998 kubelet[1584]: E0711 00:19:42.511900 1584 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:19:42.517467 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:19:42.517744 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:19:42.518236 systemd[1]: kubelet.service: Consumed 3.406s CPU time. Jul 11 00:19:50.188571 systemd[1]: Started sshd@3-10.0.0.89:22-10.0.0.1:48472.service - OpenSSH per-connection server daemon (10.0.0.1:48472). Jul 11 00:19:50.221937 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 48472 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:19:50.225818 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:19:50.235217 systemd-logind[1453]: New session 4 of user core. Jul 11 00:19:50.238339 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 11 00:19:50.290278 sshd[1598]: pam_unix(sshd:session): session closed for user core Jul 11 00:19:50.303188 systemd[1]: sshd@3-10.0.0.89:22-10.0.0.1:48472.service: Deactivated successfully. Jul 11 00:19:50.305223 systemd[1]: session-4.scope: Deactivated successfully. Jul 11 00:19:50.306896 systemd-logind[1453]: Session 4 logged out. Waiting for processes to exit. Jul 11 00:19:50.318632 systemd[1]: Started sshd@4-10.0.0.89:22-10.0.0.1:48486.service - OpenSSH per-connection server daemon (10.0.0.1:48486). Jul 11 00:19:50.319783 systemd-logind[1453]: Removed session 4. Jul 11 00:19:50.350424 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 48486 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:19:50.352564 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:19:50.357224 systemd-logind[1453]: New session 5 of user core. Jul 11 00:19:50.373371 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 11 00:19:50.426179 sshd[1605]: pam_unix(sshd:session): session closed for user core Jul 11 00:19:50.443309 systemd[1]: sshd@4-10.0.0.89:22-10.0.0.1:48486.service: Deactivated successfully. Jul 11 00:19:50.445490 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 00:19:50.447178 systemd-logind[1453]: Session 5 logged out. Waiting for processes to exit. Jul 11 00:19:50.448588 systemd[1]: Started sshd@5-10.0.0.89:22-10.0.0.1:48488.service - OpenSSH per-connection server daemon (10.0.0.1:48488). Jul 11 00:19:50.449480 systemd-logind[1453]: Removed session 5. Jul 11 00:19:50.483500 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 48488 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:19:50.485559 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:19:50.490322 systemd-logind[1453]: New session 6 of user core. Jul 11 00:19:50.500325 systemd[1]: Started session-6.scope - Session 6 of User core. 
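The kubelet exit logged above at 00:19:42 (status=1) is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist yet, and on a kubeadm-managed node it is written during `kubeadm init`/`kubeadm join`; systemd keeps restarting the unit in the meantime (the restart counters appear further down). Purely as an illustration of the file the error message names, a sketch that writes a skeletal KubeletConfiguration; every value is an assumption, not taken from this node:

```python
# Skeletal /var/lib/kubelet/config.yaml of the kind whose absence causes the
# failure above. All values are illustrative assumptions; kubeadm normally
# generates this file with many more fields.
from pathlib import Path

CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd            # matches SystemdCgroup:true in the containerd config above
staticPodPath: /etc/kubernetes/manifests
"""

path = Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(CONFIG)
print(f"wrote {path}")
```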
Jul 11 00:19:50.560106 sshd[1612]: pam_unix(sshd:session): session closed for user core Jul 11 00:19:50.571086 systemd[1]: sshd@5-10.0.0.89:22-10.0.0.1:48488.service: Deactivated successfully. Jul 11 00:19:50.572723 systemd[1]: session-6.scope: Deactivated successfully. Jul 11 00:19:50.574289 systemd-logind[1453]: Session 6 logged out. Waiting for processes to exit. Jul 11 00:19:50.575520 systemd[1]: Started sshd@6-10.0.0.89:22-10.0.0.1:48490.service - OpenSSH per-connection server daemon (10.0.0.1:48490). Jul 11 00:19:50.576283 systemd-logind[1453]: Removed session 6. Jul 11 00:19:50.609592 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 48490 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:19:50.611931 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:19:50.616438 systemd-logind[1453]: New session 7 of user core. Jul 11 00:19:50.626210 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 11 00:19:50.688827 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 11 00:19:50.689347 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:19:50.711727 sudo[1622]: pam_unix(sudo:session): session closed for user root Jul 11 00:19:50.714028 sshd[1619]: pam_unix(sshd:session): session closed for user core Jul 11 00:19:50.731882 systemd[1]: sshd@6-10.0.0.89:22-10.0.0.1:48490.service: Deactivated successfully. Jul 11 00:19:50.734622 systemd[1]: session-7.scope: Deactivated successfully. Jul 11 00:19:50.736866 systemd-logind[1453]: Session 7 logged out. Waiting for processes to exit. Jul 11 00:19:50.738696 systemd[1]: Started sshd@7-10.0.0.89:22-10.0.0.1:48492.service - OpenSSH per-connection server daemon (10.0.0.1:48492). Jul 11 00:19:50.739673 systemd-logind[1453]: Removed session 7. Jul 11 00:19:50.770706 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 48492 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:19:50.772465 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:19:50.776646 systemd-logind[1453]: New session 8 of user core. Jul 11 00:19:50.787214 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 11 00:19:50.843317 sudo[1631]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 11 00:19:50.843700 sudo[1631]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:19:50.849012 sudo[1631]: pam_unix(sudo:session): session closed for user root Jul 11 00:19:50.856373 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 11 00:19:50.856733 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:19:50.882601 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 11 00:19:50.885179 auditctl[1634]: No rules Jul 11 00:19:50.885736 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 00:19:50.886032 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 11 00:19:50.889357 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 11 00:19:50.927195 augenrules[1652]: No rules Jul 11 00:19:50.929435 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
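The sudo/auditctl sequence above removes the two shipped rules files and reloads an empty rule set ("No rules" from both auditctl and augenrules). A hedged one-liner to verify the resulting state, assuming auditctl is available to the caller:

```python
# List the loaded audit rules; after the flush above this prints "No rules".
# Assumption: auditctl is installed and the caller has privileges to query it.
import subprocess

print(subprocess.run(["auditctl", "-l"], capture_output=True, text=True).stdout.strip())
```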
Jul 11 00:19:50.930694 sudo[1630]: pam_unix(sudo:session): session closed for user root Jul 11 00:19:50.932656 sshd[1627]: pam_unix(sshd:session): session closed for user core Jul 11 00:19:50.943274 systemd[1]: sshd@7-10.0.0.89:22-10.0.0.1:48492.service: Deactivated successfully. Jul 11 00:19:50.945175 systemd[1]: session-8.scope: Deactivated successfully. Jul 11 00:19:50.946609 systemd-logind[1453]: Session 8 logged out. Waiting for processes to exit. Jul 11 00:19:50.948350 systemd[1]: Started sshd@8-10.0.0.89:22-10.0.0.1:48504.service - OpenSSH per-connection server daemon (10.0.0.1:48504). Jul 11 00:19:50.949214 systemd-logind[1453]: Removed session 8. Jul 11 00:19:50.984738 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 48504 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:19:50.986590 sshd[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:19:50.990976 systemd-logind[1453]: New session 9 of user core. Jul 11 00:19:51.000211 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 11 00:19:51.055762 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 00:19:51.056251 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:19:51.947370 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 11 00:19:51.947479 (dockerd)[1681]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 11 00:19:52.650536 dockerd[1681]: time="2025-07-11T00:19:52.650445672Z" level=info msg="Starting up" Jul 11 00:19:52.656935 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 11 00:19:52.668368 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:19:53.054443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:19:53.061474 (kubelet)[1710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:19:53.159616 kubelet[1710]: E0711 00:19:53.159431 1710 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:19:53.167592 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:19:53.167877 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:19:53.198441 dockerd[1681]: time="2025-07-11T00:19:53.198367042Z" level=info msg="Loading containers: start." Jul 11 00:19:53.333114 kernel: Initializing XFRM netlink socket Jul 11 00:19:53.427539 systemd-networkd[1394]: docker0: Link UP Jul 11 00:19:53.460484 dockerd[1681]: time="2025-07-11T00:19:53.460410333Z" level=info msg="Loading containers: done." Jul 11 00:19:53.480594 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4120946016-merged.mount: Deactivated successfully. 
Jul 11 00:19:53.486132 dockerd[1681]: time="2025-07-11T00:19:53.486062308Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 11 00:19:53.486262 dockerd[1681]: time="2025-07-11T00:19:53.486233179Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 11 00:19:53.486456 dockerd[1681]: time="2025-07-11T00:19:53.486427664Z" level=info msg="Daemon has completed initialization" Jul 11 00:19:53.539783 dockerd[1681]: time="2025-07-11T00:19:53.538211404Z" level=info msg="API listen on /run/docker.sock" Jul 11 00:19:53.540021 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 11 00:19:54.306283 containerd[1471]: time="2025-07-11T00:19:54.306214813Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 11 00:19:54.989646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4084967727.mount: Deactivated successfully. Jul 11 00:19:56.472216 containerd[1471]: time="2025-07-11T00:19:56.472137649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:19:56.473525 containerd[1471]: time="2025-07-11T00:19:56.473471401Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099" Jul 11 00:19:56.475254 containerd[1471]: time="2025-07-11T00:19:56.475217507Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:19:56.479885 containerd[1471]: time="2025-07-11T00:19:56.479784535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:19:56.481345 containerd[1471]: time="2025-07-11T00:19:56.481291302Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 2.175024121s" Jul 11 00:19:56.481345 containerd[1471]: time="2025-07-11T00:19:56.481345614Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jul 11 00:19:56.482381 containerd[1471]: time="2025-07-11T00:19:56.482335771Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 11 00:19:59.624180 containerd[1471]: time="2025-07-11T00:19:59.624092060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:20:00.418962 containerd[1471]: time="2025-07-11T00:20:00.418848340Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946" Jul 11 00:20:00.421657 containerd[1471]: time="2025-07-11T00:20:00.421582429Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:20:00.426140 containerd[1471]: time="2025-07-11T00:20:00.426035093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:20:00.427656 containerd[1471]: time="2025-07-11T00:20:00.427563380Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 3.945181983s" Jul 11 00:20:00.427656 containerd[1471]: time="2025-07-11T00:20:00.427638040Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jul 11 00:20:00.428932 containerd[1471]: time="2025-07-11T00:20:00.428874930Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 11 00:20:01.860054 containerd[1471]: time="2025-07-11T00:20:01.859951272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:20:01.860901 containerd[1471]: time="2025-07-11T00:20:01.860840891Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055" Jul 11 00:20:01.862562 containerd[1471]: time="2025-07-11T00:20:01.862516054Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:20:01.866618 containerd[1471]: time="2025-07-11T00:20:01.866526337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:20:01.867818 containerd[1471]: time="2025-07-11T00:20:01.867766494Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.438780004s" Jul 11 00:20:01.867818 containerd[1471]: time="2025-07-11T00:20:01.867815055Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jul 11 00:20:01.868442 containerd[1471]: time="2025-07-11T00:20:01.868407216Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 11 00:20:03.089908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3084668692.mount: Deactivated successfully. Jul 11 00:20:03.202219 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 11 00:20:03.208637 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:20:03.630313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 11 00:20:03.686774 (kubelet)[1922]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:20:04.634061 containerd[1471]: time="2025-07-11T00:20:04.633961513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:20:04.635001 containerd[1471]: time="2025-07-11T00:20:04.634963082Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746" Jul 11 00:20:04.636158 containerd[1471]: time="2025-07-11T00:20:04.636110284Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:20:04.640288 containerd[1471]: time="2025-07-11T00:20:04.640200418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:20:04.641012 containerd[1471]: time="2025-07-11T00:20:04.640937380Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 2.772491632s" Jul 11 00:20:04.641012 containerd[1471]: time="2025-07-11T00:20:04.640995489Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 11 00:20:04.641696 containerd[1471]: time="2025-07-11T00:20:04.641656780Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 11 00:20:04.666047 kubelet[1922]: E0711 00:20:04.665963 1922 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:20:04.670944 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:20:04.671235 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:20:05.300040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3611606298.mount: Deactivated successfully. 
Jul 11 00:20:07.323744 containerd[1471]: time="2025-07-11T00:20:07.323582745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:20:07.324885 containerd[1471]: time="2025-07-11T00:20:07.324785441Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 11 00:20:07.326733 containerd[1471]: time="2025-07-11T00:20:07.326632586Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:20:07.331051 containerd[1471]: time="2025-07-11T00:20:07.331001302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:20:07.332940 containerd[1471]: time="2025-07-11T00:20:07.332861843Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.691161672s" Jul 11 00:20:07.332940 containerd[1471]: time="2025-07-11T00:20:07.332914472Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 11 00:20:07.333774 containerd[1471]: time="2025-07-11T00:20:07.333717278Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 11 00:20:07.869297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2873685184.mount: Deactivated successfully. 
Jul 11 00:20:07.881268 containerd[1471]: time="2025-07-11T00:20:07.881065282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:20:07.882930 containerd[1471]: time="2025-07-11T00:20:07.882839480Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 11 00:20:07.885374 containerd[1471]: time="2025-07-11T00:20:07.885179610Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:20:07.890833 containerd[1471]: time="2025-07-11T00:20:07.888914958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:20:07.890833 containerd[1471]: time="2025-07-11T00:20:07.889942786Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 556.190964ms" Jul 11 00:20:07.890833 containerd[1471]: time="2025-07-11T00:20:07.889980537Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 11 00:20:07.890833 containerd[1471]: time="2025-07-11T00:20:07.890557600Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 11 00:20:09.048626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount126123561.mount: Deactivated successfully. Jul 11 00:20:13.032288 containerd[1471]: time="2025-07-11T00:20:13.032207350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:20:13.033407 containerd[1471]: time="2025-07-11T00:20:13.033335535Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jul 11 00:20:13.035377 containerd[1471]: time="2025-07-11T00:20:13.035272589Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:20:13.039662 containerd[1471]: time="2025-07-11T00:20:13.039568740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:20:13.041833 containerd[1471]: time="2025-07-11T00:20:13.041756276Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 5.151156847s" Jul 11 00:20:13.041833 containerd[1471]: time="2025-07-11T00:20:13.041832201Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 11 00:20:14.702959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
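The "Pulled image ... in ..." entries above report both the stored image size in bytes and the wall-clock pull time, so a rough per-image throughput falls out directly; the figures below are copied from those entries:

```python
# Back-of-the-envelope registry throughput from the pull entries above
# (reported size in bytes, reported duration in seconds, both from this log).
pulls = {
    "kube-apiserver:v1.33.2":          (30_075_899, 2.175),
    "kube-controller-manager:v1.33.2": (27_646_507, 3.945),
    "kube-scheduler:v1.33.2":          (21_782_634, 1.439),
    "kube-proxy:v1.33.2":              (31_891_765, 2.772),
    "coredns:v1.12.0":                 (20_939_036, 2.691),
    "etcd:3.5.21-0":                   (58_938_593, 5.151),
}
for image, (size_bytes, seconds) in pulls.items():
    print(f"{image}: {size_bytes / seconds / 1_048_576:.1f} MiB/s")
```

etcd dominates the timeline at roughly 11 MiB/s over 5.15 s, consistent with the five-second gap between its ImageCreate events above.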
Jul 11 00:20:14.715448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:20:14.943017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:20:14.948224 (kubelet)[2073]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:20:14.996574 kubelet[2073]: E0711 00:20:14.996330 2073 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:20:15.001842 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:20:15.002160 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:20:17.337555 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:20:17.349480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:20:17.379908 systemd[1]: Reloading requested from client PID 2090 ('systemctl') (unit session-9.scope)... Jul 11 00:20:17.379936 systemd[1]: Reloading... Jul 11 00:20:17.485135 zram_generator::config[2129]: No configuration found. Jul 11 00:20:17.954652 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:20:18.042382 systemd[1]: Reloading finished in 661 ms. Jul 11 00:20:18.115053 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 11 00:20:18.115222 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 11 00:20:18.115610 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:20:18.131715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:20:20.610388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:20:20.616727 (kubelet)[2176]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:20:20.718360 kubelet[2176]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:20:20.718360 kubelet[2176]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 11 00:20:20.718360 kubelet[2176]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
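kubelet[2176] starts with three deprecated flags, and for two of them the warnings above say the setting belongs in the config file instead. A hedged sketch of the equivalent config-file keys; the field names follow the kubelet.config.k8s.io/v1beta1 API, and the values are inferred from other entries in this log (the containerd socket above, the Flexvolume path recreated below), not copied from the real node configuration:

```python
# Naive append of the config-file equivalents of the deprecated flags above.
# Assumptions: field names per kubelet.config.k8s.io/v1beta1; values inferred
# from this log; a YAML-aware merge would be safer than appending.
additions = {
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
    "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
}
with open("/var/lib/kubelet/config.yaml", "a") as f:
    for key, value in additions.items():
        f.write(f"{key}: {value}\n")
```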
Jul 11 00:20:20.718360 kubelet[2176]: I0711 00:20:20.718220 2176 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 11 00:20:21.217427 kubelet[2176]: I0711 00:20:21.217335 2176 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 11 00:20:21.217427 kubelet[2176]: I0711 00:20:21.217392 2176 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 11 00:20:21.217801 kubelet[2176]: I0711 00:20:21.217735 2176 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 11 00:20:21.269597 kubelet[2176]: E0711 00:20:21.269512 2176 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.89:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 11 00:20:21.280919 kubelet[2176]: I0711 00:20:21.280809 2176 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 11 00:20:21.296103 kubelet[2176]: E0711 00:20:21.295996 2176 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 11 00:20:21.296103 kubelet[2176]: I0711 00:20:21.296097 2176 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 11 00:20:21.311112 kubelet[2176]: I0711 00:20:21.311045 2176 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 11 00:20:21.311535 kubelet[2176]: I0711 00:20:21.311489 2176 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 11 00:20:21.311753 kubelet[2176]: I0711 00:20:21.311533 2176 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 11 00:20:21.311875 kubelet[2176]: I0711 00:20:21.311773 2176 topology_manager.go:138] "Creating topology manager with none policy"
Jul 11 00:20:21.311875 kubelet[2176]: I0711 00:20:21.311785 2176 container_manager_linux.go:303] "Creating device plugin manager"
Jul 11 00:20:21.312023 kubelet[2176]: I0711 00:20:21.312002 2176 state_mem.go:36] "Initialized new in-memory state store"
Jul 11 00:20:21.316899 kubelet[2176]: I0711 00:20:21.316860 2176 kubelet.go:480] "Attempting to sync node with API server"
Jul 11 00:20:21.316899 kubelet[2176]: I0711 00:20:21.316886 2176 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 11 00:20:21.316992 kubelet[2176]: I0711 00:20:21.316919 2176 kubelet.go:386] "Adding apiserver pod source"
Jul 11 00:20:21.316992 kubelet[2176]: I0711 00:20:21.316948 2176 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 11 00:20:21.342166 kubelet[2176]: I0711 00:20:21.342058 2176 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 11 00:20:21.342844 kubelet[2176]: E0711 00:20:21.342699 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 11 00:20:21.342844 kubelet[2176]: E0711 00:20:21.342693 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 11 00:20:21.342844 kubelet[2176]: I0711 00:20:21.342768 2176 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 11 00:20:21.343849 kubelet[2176]: W0711 00:20:21.343803 2176 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 11 00:20:21.347488 kubelet[2176]: I0711 00:20:21.347452 2176 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 11 00:20:21.347611 kubelet[2176]: I0711 00:20:21.347546 2176 server.go:1289] "Started kubelet"
Jul 11 00:20:21.348205 kubelet[2176]: I0711 00:20:21.347731 2176 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 11 00:20:21.348205 kubelet[2176]: I0711 00:20:21.347919 2176 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 11 00:20:21.349354 kubelet[2176]: I0711 00:20:21.348658 2176 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 11 00:20:21.349354 kubelet[2176]: I0711 00:20:21.348983 2176 server.go:317] "Adding debug handlers to kubelet server"
Jul 11 00:20:21.361950 kubelet[2176]: I0711 00:20:21.361781 2176 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 11 00:20:21.362493 kubelet[2176]: I0711 00:20:21.362437 2176 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 11 00:20:21.367433 kubelet[2176]: E0711 00:20:21.366011 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:20:21.367433 kubelet[2176]: E0711 00:20:21.367319 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="200ms"
Jul 11 00:20:21.367884 kubelet[2176]: I0711 00:20:21.367804 2176 factory.go:223] Registration of the systemd container factory successfully
Jul 11 00:20:21.368037 kubelet[2176]: I0711 00:20:21.368005 2176 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 11 00:20:21.369555 kubelet[2176]: I0711 00:20:21.369478 2176 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 11 00:20:21.370029 kubelet[2176]: I0711 00:20:21.369819 2176 reconciler.go:26] "Reconciler: start to sync state"
Jul 11 00:20:21.370029 kubelet[2176]: I0711 00:20:21.369865 2176 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 11 00:20:21.370772 kubelet[2176]: E0711 00:20:21.370736 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 11 00:20:21.372141 kubelet[2176]: I0711 00:20:21.371491 2176 factory.go:223] Registration of the containerd container factory successfully
Jul 11 00:20:21.374857 kubelet[2176]: E0711 00:20:21.374558 2176 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 11 00:20:21.374987 kubelet[2176]: I0711 00:20:21.374939 2176 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 11 00:20:21.377125 kubelet[2176]: E0711 00:20:21.372127 2176 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.89:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.89:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18510a7388439ca8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:20:21.347482792 +0000 UTC m=+0.725521718,LastTimestamp:2025-07-11 00:20:21.347482792 +0000 UTC m=+0.725521718,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 11 00:20:21.395608 kubelet[2176]: I0711 00:20:21.394183 2176 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 11 00:20:21.395608 kubelet[2176]: I0711 00:20:21.394213 2176 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 11 00:20:21.395608 kubelet[2176]: I0711 00:20:21.394243 2176 state_mem.go:36] "Initialized new in-memory state store"
Jul 11 00:20:21.466486 kubelet[2176]: E0711 00:20:21.466405 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:20:21.567234 kubelet[2176]: E0711 00:20:21.567023 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:20:21.569051 kubelet[2176]: E0711 00:20:21.568997 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="400ms"
Jul 11 00:20:21.667298 kubelet[2176]: E0711 00:20:21.667177 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:20:21.768065 kubelet[2176]: E0711 00:20:21.768001 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:20:21.846600 kubelet[2176]: I0711 00:20:21.846435 2176 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 11 00:20:21.846600 kubelet[2176]: I0711 00:20:21.846485 2176 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 11 00:20:21.846600 kubelet[2176]: I0711 00:20:21.846532 2176 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 11 00:20:21.846600 kubelet[2176]: I0711 00:20:21.846546 2176 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 11 00:20:21.846818 kubelet[2176]: E0711 00:20:21.846614 2176 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 11 00:20:21.847515 kubelet[2176]: E0711 00:20:21.847487 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 11 00:20:21.869071 kubelet[2176]: E0711 00:20:21.869010 2176 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:20:21.871444 kubelet[2176]: I0711 00:20:21.871308 2176 policy_none.go:49] "None policy: Start"
Jul 11 00:20:21.871444 kubelet[2176]: I0711 00:20:21.871346 2176 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 11 00:20:21.871444 kubelet[2176]: I0711 00:20:21.871373 2176 state_mem.go:35] "Initializing new in-memory state store"
Jul 11 00:20:21.891237 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 11 00:20:21.915560 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 11 00:20:21.921595 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 11 00:20:21.937867 kubelet[2176]: E0711 00:20:21.937782 2176 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 11 00:20:21.938231 kubelet[2176]: I0711 00:20:21.938186 2176 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 11 00:20:21.938359 kubelet[2176]: I0711 00:20:21.938222 2176 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 11 00:20:21.938733 kubelet[2176]: I0711 00:20:21.938694 2176 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 11 00:20:21.940324 kubelet[2176]: E0711 00:20:21.940240 2176 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 11 00:20:21.940399 kubelet[2176]: E0711 00:20:21.940340 2176 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 11 00:20:21.963332 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice.
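
The "Created slice" lines above show the systemd cgroup driver at work: with "CgroupDriver":"systemd" and "CgroupsPerQOS":true (see the nodeConfig dump earlier), the kubelet asks systemd for a parent kubepods.slice, a child slice per QoS class, and one slice per pod beneath those, such as the kubepods-burstable-pod84b858... slice created here for a static pod. A minimal Go sketch of that naming convention as it appears in this log; this is an illustration of the pattern, not kubelet source code:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // sliceForPod assembles the per-pod slice name visible in the log:
    // kubepods-<qos>-pod<uid>.slice. systemd treats "-" as a hierarchy
    // separator, so dashes inside a pod UID are escaped to "_".
    func sliceForPod(qos, uid string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
    	// UID of the kube-controller-manager static pod from this log.
    	fmt.Println(sliceForPod("burstable", "84b858ec27c8b2738b1d9ff9927e0dcb"))
    	// Output: kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice
    }
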
Jul 11 00:20:21.971180 kubelet[2176]: E0711 00:20:21.970320 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="800ms"
Jul 11 00:20:21.974599 kubelet[2176]: I0711 00:20:21.974512 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:20:21.974599 kubelet[2176]: I0711 00:20:21.974605 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:20:21.974811 kubelet[2176]: I0711 00:20:21.974669 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:20:21.974811 kubelet[2176]: I0711 00:20:21.974698 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:20:21.974904 kubelet[2176]: I0711 00:20:21.974790 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/967802bba9fa48dce642df136e10276d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"967802bba9fa48dce642df136e10276d\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:20:21.974904 kubelet[2176]: I0711 00:20:21.974838 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/967802bba9fa48dce642df136e10276d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"967802bba9fa48dce642df136e10276d\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:20:21.974904 kubelet[2176]: I0711 00:20:21.974870 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:20:21.974904 kubelet[2176]: I0711 00:20:21.974893 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost"
Jul 11 00:20:21.975030 kubelet[2176]: I0711 00:20:21.974917 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/967802bba9fa48dce642df136e10276d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"967802bba9fa48dce642df136e10276d\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:20:21.987215 kubelet[2176]: E0711 00:20:21.987161 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:20:21.991651 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice.
Jul 11 00:20:22.006975 kubelet[2176]: E0711 00:20:22.006914 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:20:22.010671 systemd[1]: Created slice kubepods-burstable-pod967802bba9fa48dce642df136e10276d.slice - libcontainer container kubepods-burstable-pod967802bba9fa48dce642df136e10276d.slice.
Jul 11 00:20:22.014212 kubelet[2176]: E0711 00:20:22.014159 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:20:22.040838 kubelet[2176]: I0711 00:20:22.040782 2176 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 11 00:20:22.041437 kubelet[2176]: E0711 00:20:22.041390 2176 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost"
Jul 11 00:20:22.159387 kubelet[2176]: E0711 00:20:22.159184 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 11 00:20:22.244885 kubelet[2176]: I0711 00:20:22.244823 2176 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 11 00:20:22.286020 kubelet[2176]: E0711 00:20:22.245411 2176 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost"
Jul 11 00:20:22.288676 kubelet[2176]: E0711 00:20:22.288599 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:20:22.289684 containerd[1471]: time="2025-07-11T00:20:22.289616936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}"
Jul 11 00:20:22.308164 kubelet[2176]: E0711 00:20:22.308108 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:20:22.311990 containerd[1471]: time="2025-07-11T00:20:22.311936912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}"
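
The recurring dns.go:153 "Nameserver limits exceeded" errors here and below are a truncation warning rather than a lookup failure: Kubernetes caps a pod's resolv.conf at three nameservers, so the kubelet keeps the first three entries from the node's /etc/resolv.conf and drops the rest. An illustrative node resolv.conf that would produce exactly the applied line logged above (the fourth entry is an assumption; the log does not say which server was omitted):

    # /etc/resolv.conf (illustrative)
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 192.0.2.53   # example fourth server; only the first three are applied
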
Jul 11 00:20:22.315416 kubelet[2176]: E0711 00:20:22.315377 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:20:22.315883 containerd[1471]: time="2025-07-11T00:20:22.315845477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:967802bba9fa48dce642df136e10276d,Namespace:kube-system,Attempt:0,}"
Jul 11 00:20:22.363831 kubelet[2176]: E0711 00:20:22.363781 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 11 00:20:22.688648 kubelet[2176]: I0711 00:20:22.688570 2176 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 11 00:20:22.689243 kubelet[2176]: E0711 00:20:22.689170 2176 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost"
Jul 11 00:20:22.715386 kubelet[2176]: E0711 00:20:22.715251 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 11 00:20:22.771572 kubelet[2176]: E0711 00:20:22.771480 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="1.6s"
Jul 11 00:20:22.854449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount753389246.mount: Deactivated successfully.
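
Note the interval= values on the repeated "Failed to ensure lease exists, will retry" errors: 200ms, 400ms, 800ms, now 1.6s, and 3.2s further down. The retry delay doubles on each failure while the API server at 10.0.0.89:6443 remains unreachable. A generic Go sketch of that doubling pattern, shown only to make the progression concrete; it is not the kubelet's actual retry code, and the cap is an assumption:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	interval := 200 * time.Millisecond
    	const maxInterval = 7 * time.Second // cap chosen for illustration
    	for attempt := 1; attempt <= 5; attempt++ {
    		fmt.Printf("attempt %d: retrying in %v\n", attempt, interval)
    		interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s -> 3.2s, as in the log
    		if interval > maxInterval {
    			interval = maxInterval
    		}
    	}
    }
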
Jul 11 00:20:22.868664 containerd[1471]: time="2025-07-11T00:20:22.868589763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 11 00:20:22.869928 containerd[1471]: time="2025-07-11T00:20:22.869886667Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 11 00:20:22.870902 containerd[1471]: time="2025-07-11T00:20:22.870809541Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jul 11 00:20:22.872135 containerd[1471]: time="2025-07-11T00:20:22.872049586Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 11 00:20:22.873602 containerd[1471]: time="2025-07-11T00:20:22.873535028Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 11 00:20:22.874832 containerd[1471]: time="2025-07-11T00:20:22.874767728Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 11 00:20:22.876593 containerd[1471]: time="2025-07-11T00:20:22.876524977Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 11 00:20:22.881284 containerd[1471]: time="2025-07-11T00:20:22.879775972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 11 00:20:22.882964 containerd[1471]: time="2025-07-11T00:20:22.882925885Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 570.872031ms"
Jul 11 00:20:22.884467 containerd[1471]: time="2025-07-11T00:20:22.884439902Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 568.530152ms"
Jul 11 00:20:22.885299 containerd[1471]: time="2025-07-11T00:20:22.885220013Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 595.498599ms"
Jul 11 00:20:22.928702 kubelet[2176]: E0711 00:20:22.928089 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 11 00:20:23.122261 update_engine[1455]: I20250711 00:20:23.122151 1455 update_attempter.cc:509] Updating boot flags...
Jul 11 00:20:23.363138 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2253)
Jul 11 00:20:23.400315 containerd[1471]: time="2025-07-11T00:20:23.398522408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:20:23.400315 containerd[1471]: time="2025-07-11T00:20:23.398690417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:20:23.400315 containerd[1471]: time="2025-07-11T00:20:23.398709813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:20:23.400315 containerd[1471]: time="2025-07-11T00:20:23.398865488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:20:23.403962 containerd[1471]: time="2025-07-11T00:20:23.403322719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:20:23.407229 kubelet[2176]: E0711 00:20:23.406992 2176 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.89:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 11 00:20:23.408876 containerd[1471]: time="2025-07-11T00:20:23.403925102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:20:23.408876 containerd[1471]: time="2025-07-11T00:20:23.403975508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:20:23.409709 containerd[1471]: time="2025-07-11T00:20:23.409426484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:20:23.409709 containerd[1471]: time="2025-07-11T00:20:23.409484864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:20:23.409709 containerd[1471]: time="2025-07-11T00:20:23.409495494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:20:23.409709 containerd[1471]: time="2025-07-11T00:20:23.409596707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:20:23.411693 containerd[1471]: time="2025-07-11T00:20:23.411591734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:20:23.414097 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2255)
Jul 11 00:20:23.493518 kubelet[2176]: I0711 00:20:23.493420 2176 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 11 00:20:23.495435 kubelet[2176]: E0711 00:20:23.495378 2176 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost"
Jul 11 00:20:23.527504 systemd[1]: Started cri-containerd-04b67a8add0fd983910ebe00b2451a2fbc78b2065c99cdfe8dfda56f8eba7478.scope - libcontainer container 04b67a8add0fd983910ebe00b2451a2fbc78b2065c99cdfe8dfda56f8eba7478.
Jul 11 00:20:23.542529 systemd[1]: Started cri-containerd-81ed75ebe35422d90eec5631c96535646bacf5af7ea86e8b697eae03edcfd1f8.scope - libcontainer container 81ed75ebe35422d90eec5631c96535646bacf5af7ea86e8b697eae03edcfd1f8.
Jul 11 00:20:23.547527 systemd[1]: Started cri-containerd-9a6b17ec8ffbb7f2eae2046ed9e424efc6038bf245b2f64802632d87b0ea1846.scope - libcontainer container 9a6b17ec8ffbb7f2eae2046ed9e424efc6038bf245b2f64802632d87b0ea1846.
Jul 11 00:20:23.583055 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2255)
Jul 11 00:20:23.679576 containerd[1471]: time="2025-07-11T00:20:23.678718073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:967802bba9fa48dce642df136e10276d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a6b17ec8ffbb7f2eae2046ed9e424efc6038bf245b2f64802632d87b0ea1846\""
Jul 11 00:20:23.681145 containerd[1471]: time="2025-07-11T00:20:23.681061251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"04b67a8add0fd983910ebe00b2451a2fbc78b2065c99cdfe8dfda56f8eba7478\""
Jul 11 00:20:23.682064 kubelet[2176]: E0711 00:20:23.682030 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:20:23.682332 kubelet[2176]: E0711 00:20:23.682276 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:20:23.705114 containerd[1471]: time="2025-07-11T00:20:23.705038711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"81ed75ebe35422d90eec5631c96535646bacf5af7ea86e8b697eae03edcfd1f8\""
Jul 11 00:20:23.706095 kubelet[2176]: E0711 00:20:23.706041 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:20:23.751806 kubelet[2176]: E0711 00:20:23.751663 2176 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.89:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.89:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18510a7388439ca8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:20:21.347482792 +0000 UTC m=+0.725521718,LastTimestamp:2025-07-11 00:20:21.347482792 +0000 UTC m=+0.725521718,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 11 00:20:23.937375 containerd[1471]: time="2025-07-11T00:20:23.937242025Z" level=info msg="CreateContainer within sandbox \"04b67a8add0fd983910ebe00b2451a2fbc78b2065c99cdfe8dfda56f8eba7478\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 11 00:20:23.949203 kubelet[2176]: E0711 00:20:23.949154 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 11 00:20:24.372433 kubelet[2176]: E0711 00:20:24.372384 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="3.2s"
Jul 11 00:20:24.420262 containerd[1471]: time="2025-07-11T00:20:24.420207846Z" level=info msg="CreateContainer within sandbox \"9a6b17ec8ffbb7f2eae2046ed9e424efc6038bf245b2f64802632d87b0ea1846\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 11 00:20:24.522109 kubelet[2176]: E0711 00:20:24.522032 2176 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 11 00:20:24.639740 containerd[1471]: time="2025-07-11T00:20:24.639566068Z" level=info msg="CreateContainer within sandbox \"81ed75ebe35422d90eec5631c96535646bacf5af7ea86e8b697eae03edcfd1f8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 11 00:20:24.760985 containerd[1471]: time="2025-07-11T00:20:24.760889099Z" level=info msg="CreateContainer within sandbox \"04b67a8add0fd983910ebe00b2451a2fbc78b2065c99cdfe8dfda56f8eba7478\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"99b50e5b6c82f5078a6f8f2cf5ca263239101bb6a5021402a791158bdf269a02\""
Jul 11 00:20:24.761948 containerd[1471]: time="2025-07-11T00:20:24.761885550Z" level=info msg="StartContainer for \"99b50e5b6c82f5078a6f8f2cf5ca263239101bb6a5021402a791158bdf269a02\""
Jul 11 00:20:24.780295 containerd[1471]: time="2025-07-11T00:20:24.780192361Z" level=info msg="CreateContainer within sandbox \"9a6b17ec8ffbb7f2eae2046ed9e424efc6038bf245b2f64802632d87b0ea1846\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e156f8a6307d3fb0f0d2b72944fbf3e8154d81fa2261e7ff32045c01a8d43286\""
Jul 11 00:20:24.781153 containerd[1471]: time="2025-07-11T00:20:24.781039117Z" level=info msg="StartContainer for \"e156f8a6307d3fb0f0d2b72944fbf3e8154d81fa2261e7ff32045c01a8d43286\""
Jul 11 00:20:24.781549 containerd[1471]: time="2025-07-11T00:20:24.781486285Z" level=info msg="CreateContainer within sandbox \"81ed75ebe35422d90eec5631c96535646bacf5af7ea86e8b697eae03edcfd1f8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0e201b1b8e971d971655e2ef210e743667ed0113a0c66e71aeef8b1e2b188ca4\""
Jul 11 00:20:24.782196 containerd[1471]: time="2025-07-11T00:20:24.782153291Z" level=info msg="StartContainer for \"0e201b1b8e971d971655e2ef210e743667ed0113a0c66e71aeef8b1e2b188ca4\""
Jul 11 00:20:24.800678 systemd[1]: Started cri-containerd-99b50e5b6c82f5078a6f8f2cf5ca263239101bb6a5021402a791158bdf269a02.scope - libcontainer container 99b50e5b6c82f5078a6f8f2cf5ca263239101bb6a5021402a791158bdf269a02.
Jul 11 00:20:24.821402 systemd[1]: Started cri-containerd-0e201b1b8e971d971655e2ef210e743667ed0113a0c66e71aeef8b1e2b188ca4.scope - libcontainer container 0e201b1b8e971d971655e2ef210e743667ed0113a0c66e71aeef8b1e2b188ca4.
Jul 11 00:20:24.826166 systemd[1]: Started cri-containerd-e156f8a6307d3fb0f0d2b72944fbf3e8154d81fa2261e7ff32045c01a8d43286.scope - libcontainer container e156f8a6307d3fb0f0d2b72944fbf3e8154d81fa2261e7ff32045c01a8d43286.
Jul 11 00:20:24.883343 containerd[1471]: time="2025-07-11T00:20:24.883249568Z" level=info msg="StartContainer for \"99b50e5b6c82f5078a6f8f2cf5ca263239101bb6a5021402a791158bdf269a02\" returns successfully"
Jul 11 00:20:24.895182 containerd[1471]: time="2025-07-11T00:20:24.894055723Z" level=info msg="StartContainer for \"0e201b1b8e971d971655e2ef210e743667ed0113a0c66e71aeef8b1e2b188ca4\" returns successfully"
Jul 11 00:20:24.907938 containerd[1471]: time="2025-07-11T00:20:24.907713598Z" level=info msg="StartContainer for \"e156f8a6307d3fb0f0d2b72944fbf3e8154d81fa2261e7ff32045c01a8d43286\" returns successfully"
Jul 11 00:20:25.099221 kubelet[2176]: I0711 00:20:25.098117 2176 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 11 00:20:25.881703 kubelet[2176]: E0711 00:20:25.881658 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:20:25.882188 kubelet[2176]: E0711 00:20:25.881832 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:20:25.884955 kubelet[2176]: E0711 00:20:25.884875 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:20:25.885128 kubelet[2176]: E0711 00:20:25.885027 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:20:25.888058 kubelet[2176]: E0711 00:20:25.887974 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:20:25.888228 kubelet[2176]: E0711 00:20:25.888200 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:20:26.898142 kubelet[2176]: E0711 00:20:26.898085 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:20:26.898758 kubelet[2176]: E0711 00:20:26.898317 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:20:26.899188 kubelet[2176]: E0711 00:20:26.899158 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:20:26.899446 kubelet[2176]: E0711 00:20:26.899412 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:20:26.899782 kubelet[2176]: E0711 00:20:26.899749 2176 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:20:26.899893 kubelet[2176]: E0711 00:20:26.899865 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:20:27.541041 kubelet[2176]: I0711 00:20:27.540946 2176 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 11 00:20:27.541041 kubelet[2176]: E0711 00:20:27.541031 2176 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jul 11 00:20:27.569592 kubelet[2176]: I0711 00:20:27.569486 2176 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:20:27.580774 kubelet[2176]: E0711 00:20:27.580701 2176 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:20:27.580774 kubelet[2176]: I0711 00:20:27.580754 2176 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 11 00:20:27.582774 kubelet[2176]: E0711 00:20:27.582720 2176 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jul 11 00:20:27.582774 kubelet[2176]: I0711 00:20:27.582767 2176 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 11 00:20:27.585066 kubelet[2176]: E0711 00:20:27.585036 2176 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 11 00:20:27.617760 kubelet[2176]: I0711 00:20:27.617655 2176 apiserver.go:52] "Watching apiserver"
Jul 11 00:20:27.671003 kubelet[2176]: I0711 00:20:27.670922 2176 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 11 00:20:27.898529 kubelet[2176]: I0711 00:20:27.898378 2176 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 11 00:20:27.898976 kubelet[2176]: I0711 00:20:27.898627 2176 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 11 00:20:27.901253 kubelet[2176]: E0711 00:20:27.901218 2176 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 11 00:20:27.901440 kubelet[2176]: E0711 00:20:27.901387 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:20:27.901737 kubelet[2176]: E0711 00:20:27.901709 2176 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jul 11 00:20:27.901890 kubelet[2176]: E0711 00:20:27.901873 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:20:28.900014 kubelet[2176]: I0711 00:20:28.899967 2176 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 11 00:20:28.900610 kubelet[2176]: I0711 00:20:28.900188 2176 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 11 00:20:28.936825 kubelet[2176]: E0711 00:20:28.936735 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:20:28.951024 kubelet[2176]: I0711 00:20:28.950952 2176 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:20:28.997288 kubelet[2176]: E0711 00:20:28.997239 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:20:29.000846 kubelet[2176]: E0711 00:20:29.000746 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:20:29.902027 kubelet[2176]: E0711 00:20:29.901958 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:20:29.902588 kubelet[2176]: E0711 00:20:29.902127 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:20:29.902588 kubelet[2176]: E0711 00:20:29.902142 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:20:32.449234 kubelet[2176]: I0711 00:20:32.449137 2176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.449098006 podStartE2EDuration="4.449098006s" podCreationTimestamp="2025-07-11 00:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:20:32.102688587 +0000 UTC m=+11.480727523" watchObservedRunningTime="2025-07-11 00:20:32.449098006 +0000 UTC m=+11.827136932"
Jul 11 00:20:32.505654 kubelet[2176]: I0711 00:20:32.505471 2176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.505445301 podStartE2EDuration="4.505445301s" podCreationTimestamp="2025-07-11 00:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:20:32.449652313 +0000 UTC m=+11.827691239" watchObservedRunningTime="2025-07-11 00:20:32.505445301 +0000 UTC m=+11.883484228"
Jul 11 00:20:33.618272 systemd[1]: Reloading requested from client PID 2484 ('systemctl') (unit session-9.scope)...
Jul 11 00:20:33.618298 systemd[1]: Reloading...
Jul 11 00:20:33.724163 zram_generator::config[2524]: No configuration found.
Jul 11 00:20:33.881841 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:20:33.990501 systemd[1]: Reloading finished in 371 ms.
Jul 11 00:20:34.046921 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:20:34.069538 systemd[1]: kubelet.service: Deactivated successfully.
Jul 11 00:20:34.070010 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:20:34.070210 systemd[1]: kubelet.service: Consumed 1.809s CPU time, 133.7M memory peak, 0B memory swap peak.
Jul 11 00:20:34.082316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:20:34.330917 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:20:34.353516 (kubelet)[2568]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 11 00:20:34.399191 kubelet[2568]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 11 00:20:34.399191 kubelet[2568]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 11 00:20:34.399191 kubelet[2568]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 11 00:20:34.399690 kubelet[2568]: I0711 00:20:34.399251 2568 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 11 00:20:34.406715 kubelet[2568]: I0711 00:20:34.406665 2568 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 11 00:20:34.406715 kubelet[2568]: I0711 00:20:34.406696 2568 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 11 00:20:34.406949 kubelet[2568]: I0711 00:20:34.406925 2568 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 11 00:20:34.408111 kubelet[2568]: I0711 00:20:34.408060 2568 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jul 11 00:20:34.450260 kubelet[2568]: I0711 00:20:34.450143 2568 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 11 00:20:34.455964 kubelet[2568]: E0711 00:20:34.455916 2568 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 11 00:20:34.455964 kubelet[2568]: I0711 00:20:34.455958 2568 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 11 00:20:34.461445 kubelet[2568]: I0711 00:20:34.461392 2568 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 11 00:20:34.461671 kubelet[2568]: I0711 00:20:34.461629 2568 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 11 00:20:34.461841 kubelet[2568]: I0711 00:20:34.461665 2568 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 11 00:20:34.461841 kubelet[2568]: I0711 00:20:34.461830 2568 topology_manager.go:138] "Creating topology manager with none policy"
Jul 11 00:20:34.461841 kubelet[2568]: I0711 00:20:34.461841 2568 container_manager_linux.go:303] "Creating device plugin manager"
Jul 11 00:20:34.462059 kubelet[2568]: I0711 00:20:34.461905 2568 state_mem.go:36] "Initialized new in-memory state store"
Jul 11 00:20:34.462120 kubelet[2568]: I0711 00:20:34.462100 2568 kubelet.go:480] "Attempting to sync node with API server"
Jul 11 00:20:34.462120 kubelet[2568]: I0711 00:20:34.462115 2568 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 11 00:20:34.462191 kubelet[2568]: I0711 00:20:34.462139 2568 kubelet.go:386] "Adding apiserver pod source"
Jul 11 00:20:34.467404 kubelet[2568]: I0711 00:20:34.467266 2568 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 11 00:20:34.468959 kubelet[2568]: I0711 00:20:34.468752 2568 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 11 00:20:34.470027 kubelet[2568]: I0711 00:20:34.469390 2568 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 11 00:20:34.473583 kubelet[2568]: I0711 00:20:34.473556 2568 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 11 00:20:34.474238 kubelet[2568]: I0711 00:20:34.474047 2568 server.go:1289] "Started kubelet"
Jul 11 00:20:34.474352 kubelet[2568]: I0711 00:20:34.474311 2568 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 11 00:20:34.474710 kubelet[2568]: I0711 00:20:34.474646 2568 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 11 00:20:34.475123 kubelet[2568]: I0711 00:20:34.475062 2568 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 11 00:20:34.477450 kubelet[2568]: I0711 00:20:34.475793 2568 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 11 00:20:34.477450 kubelet[2568]: I0711 00:20:34.476353 2568 server.go:317] "Adding debug handlers to kubelet server"
Jul 11 00:20:34.477575 kubelet[2568]: I0711 00:20:34.477521 2568 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 11 00:20:34.479120 kubelet[2568]: I0711 00:20:34.479070 2568 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 11 00:20:34.480231 kubelet[2568]: I0711 00:20:34.479867 2568 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 11 00:20:34.480231 kubelet[2568]: I0711 00:20:34.480131 2568 reconciler.go:26] "Reconciler: start to sync state"
Jul 11 00:20:34.481934 kubelet[2568]: I0711 00:20:34.481879 2568 factory.go:223] Registration of the systemd container factory successfully
Jul 11 00:20:34.482385 kubelet[2568]: I0711 00:20:34.482339 2568 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 11 00:20:34.483387 kubelet[2568]: E0711 00:20:34.483341 2568 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 11 00:20:34.485522 kubelet[2568]: I0711 00:20:34.485478 2568 factory.go:223] Registration of the containerd container factory successfully
Jul 11 00:20:34.487847 kubelet[2568]: I0711 00:20:34.487658 2568 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 11 00:20:34.500202 kubelet[2568]: I0711 00:20:34.499822 2568 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 11 00:20:34.500202 kubelet[2568]: I0711 00:20:34.499858 2568 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 11 00:20:34.500202 kubelet[2568]: I0711 00:20:34.499892 2568 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 11 00:20:34.500202 kubelet[2568]: I0711 00:20:34.499942 2568 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 11 00:20:34.500202 kubelet[2568]: E0711 00:20:34.500004 2568 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 11 00:20:34.521874 kubelet[2568]: I0711 00:20:34.521836 2568 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 11 00:20:34.521874 kubelet[2568]: I0711 00:20:34.521857 2568 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 11 00:20:34.521874 kubelet[2568]: I0711 00:20:34.521881 2568 state_mem.go:36] "Initialized new in-memory state store"
Jul 11 00:20:34.522058 kubelet[2568]: I0711 00:20:34.522041 2568 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 11 00:20:34.522117 kubelet[2568]: I0711 00:20:34.522056 2568 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 11 00:20:34.522117 kubelet[2568]: I0711 00:20:34.522092 2568 policy_none.go:49] "None policy: Start"
Jul 11 00:20:34.522117 kubelet[2568]: I0711 00:20:34.522103 2568 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 11 00:20:34.522117 kubelet[2568]: I0711 00:20:34.522114 2568 state_mem.go:35] "Initializing new in-memory state store"
Jul 11 00:20:34.522239 kubelet[2568]: I0711 00:20:34.522199 2568 state_mem.go:75] "Updated machine memory state"
Jul 11 00:20:34.526350 kubelet[2568]: E0711 00:20:34.526321 2568 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 11 00:20:34.526587 kubelet[2568]: I0711 00:20:34.526566 2568 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 11 00:20:34.526627 kubelet[2568]: I0711 00:20:34.526583 2568 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 11 00:20:34.526944 kubelet[2568]: I0711 00:20:34.526917 2568 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 11 00:20:34.529104 kubelet[2568]: E0711 00:20:34.528004 2568 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 11 00:20:34.601916 kubelet[2568]: I0711 00:20:34.601649 2568 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 11 00:20:34.602260 kubelet[2568]: I0711 00:20:34.602210 2568 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:20:34.602480 kubelet[2568]: I0711 00:20:34.601972 2568 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 11 00:20:34.635582 kubelet[2568]: I0711 00:20:34.635357 2568 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 11 00:20:34.681276 kubelet[2568]: I0711 00:20:34.681201 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/967802bba9fa48dce642df136e10276d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"967802bba9fa48dce642df136e10276d\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:20:34.681276 kubelet[2568]: I0711 00:20:34.681272 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/967802bba9fa48dce642df136e10276d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"967802bba9fa48dce642df136e10276d\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:20:34.681578 kubelet[2568]: I0711 00:20:34.681377 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/967802bba9fa48dce642df136e10276d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"967802bba9fa48dce642df136e10276d\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:20:34.681578 kubelet[2568]: I0711 00:20:34.681428 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:20:34.681578 kubelet[2568]: I0711 00:20:34.681469 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:20:34.681578 kubelet[2568]: I0711 00:20:34.681490 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:20:34.681578 kubelet[2568]: I0711 00:20:34.681569 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:20:34.681755 kubelet[2568]: I0711 00:20:34.681598
2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:20:34.681755 kubelet[2568]: I0711 00:20:34.681683 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:20:34.724795 kubelet[2568]: E0711 00:20:34.724693 2568 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 11 00:20:34.725051 kubelet[2568]: E0711 00:20:34.724979 2568 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 00:20:34.725266 kubelet[2568]: E0711 00:20:34.725201 2568 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 11 00:20:34.759424 kubelet[2568]: I0711 00:20:34.759305 2568 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 11 00:20:34.759424 kubelet[2568]: I0711 00:20:34.759454 2568 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 11 00:20:35.027020 kubelet[2568]: E0711 00:20:35.026614 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:20:35.027636 kubelet[2568]: E0711 00:20:35.026828 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:20:35.027636 kubelet[2568]: E0711 00:20:35.027130 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:20:35.468881 kubelet[2568]: I0711 00:20:35.468824 2568 apiserver.go:52] "Watching apiserver" Jul 11 00:20:35.480813 kubelet[2568]: I0711 00:20:35.480750 2568 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 11 00:20:35.510242 kubelet[2568]: I0711 00:20:35.509620 2568 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 00:20:35.510242 kubelet[2568]: E0711 00:20:35.509725 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:20:35.510705 kubelet[2568]: E0711 00:20:35.510660 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:20:35.773573 kubelet[2568]: E0711 00:20:35.773395 2568 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 00:20:35.773951 kubelet[2568]: E0711 00:20:35.773911 2568 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:20:36.511383 kubelet[2568]: E0711 00:20:36.511314 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:20:36.511984 kubelet[2568]: E0711 00:20:36.511575 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:20:37.512865 kubelet[2568]: E0711 00:20:37.512832 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:20:37.925191 kubelet[2568]: I0711 00:20:37.925005 2568 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 11 00:20:37.925721 containerd[1471]: time="2025-07-11T00:20:37.925653273Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 11 00:20:37.926566 kubelet[2568]: I0711 00:20:37.926008 2568 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 11 00:20:38.513717 kubelet[2568]: E0711 00:20:38.513664 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:20:38.823026 kubelet[2568]: E0711 00:20:38.822788 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:20:39.515102 kubelet[2568]: E0711 00:20:39.515037 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:20:41.184576 systemd[1]: Created slice kubepods-besteffort-pod0b70a20e_0f79_4a45_83b3_68aed593737d.slice - libcontainer container kubepods-besteffort-pod0b70a20e_0f79_4a45_83b3_68aed593737d.slice. 
Jul 11 00:20:41.225917 kubelet[2568]: I0711 00:20:41.224541 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0b70a20e-0f79-4a45-83b3-68aed593737d-kube-proxy\") pod \"kube-proxy-x49rt\" (UID: \"0b70a20e-0f79-4a45-83b3-68aed593737d\") " pod="kube-system/kube-proxy-x49rt" Jul 11 00:20:41.225917 kubelet[2568]: I0711 00:20:41.224748 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b70a20e-0f79-4a45-83b3-68aed593737d-xtables-lock\") pod \"kube-proxy-x49rt\" (UID: \"0b70a20e-0f79-4a45-83b3-68aed593737d\") " pod="kube-system/kube-proxy-x49rt" Jul 11 00:20:41.226570 kubelet[2568]: I0711 00:20:41.226496 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b70a20e-0f79-4a45-83b3-68aed593737d-lib-modules\") pod \"kube-proxy-x49rt\" (UID: \"0b70a20e-0f79-4a45-83b3-68aed593737d\") " pod="kube-system/kube-proxy-x49rt" Jul 11 00:20:41.226570 kubelet[2568]: I0711 00:20:41.226525 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcmkg\" (UniqueName: \"kubernetes.io/projected/0b70a20e-0f79-4a45-83b3-68aed593737d-kube-api-access-wcmkg\") pod \"kube-proxy-x49rt\" (UID: \"0b70a20e-0f79-4a45-83b3-68aed593737d\") " pod="kube-system/kube-proxy-x49rt" Jul 11 00:20:41.804832 kubelet[2568]: E0711 00:20:41.804761 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:20:41.806817 containerd[1471]: time="2025-07-11T00:20:41.806764151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x49rt,Uid:0b70a20e-0f79-4a45-83b3-68aed593737d,Namespace:kube-system,Attempt:0,}" Jul 11 00:20:41.821554 systemd[1]: Created slice kubepods-besteffort-pod315847ce_8add_48d3_94ec_6cb5235bc513.slice - libcontainer container kubepods-besteffort-pod315847ce_8add_48d3_94ec_6cb5235bc513.slice. Jul 11 00:20:41.831729 kubelet[2568]: I0711 00:20:41.831638 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcqqx\" (UniqueName: \"kubernetes.io/projected/315847ce-8add-48d3-94ec-6cb5235bc513-kube-api-access-mcqqx\") pod \"tigera-operator-747864d56d-trw8p\" (UID: \"315847ce-8add-48d3-94ec-6cb5235bc513\") " pod="tigera-operator/tigera-operator-747864d56d-trw8p" Jul 11 00:20:41.831729 kubelet[2568]: I0711 00:20:41.831710 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/315847ce-8add-48d3-94ec-6cb5235bc513-var-lib-calico\") pod \"tigera-operator-747864d56d-trw8p\" (UID: \"315847ce-8add-48d3-94ec-6cb5235bc513\") " pod="tigera-operator/tigera-operator-747864d56d-trw8p" Jul 11 00:20:42.035114 containerd[1471]: time="2025-07-11T00:20:42.034920043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:20:42.035114 containerd[1471]: time="2025-07-11T00:20:42.035029609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:20:42.035114 containerd[1471]: time="2025-07-11T00:20:42.035043305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:20:42.035375 containerd[1471]: time="2025-07-11T00:20:42.035184831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:20:42.065438 systemd[1]: Started cri-containerd-5604e91dd6321497dce70e8cc176af4a2ed9256eb9e6b139472ce88a6386ed00.scope - libcontainer container 5604e91dd6321497dce70e8cc176af4a2ed9256eb9e6b139472ce88a6386ed00. Jul 11 00:20:42.093600 containerd[1471]: time="2025-07-11T00:20:42.093513837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x49rt,Uid:0b70a20e-0f79-4a45-83b3-68aed593737d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5604e91dd6321497dce70e8cc176af4a2ed9256eb9e6b139472ce88a6386ed00\"" Jul 11 00:20:42.094459 kubelet[2568]: E0711 00:20:42.094422 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:20:42.127175 containerd[1471]: time="2025-07-11T00:20:42.127091097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-trw8p,Uid:315847ce-8add-48d3-94ec-6cb5235bc513,Namespace:tigera-operator,Attempt:0,}" Jul 11 00:20:42.138838 containerd[1471]: time="2025-07-11T00:20:42.138771296Z" level=info msg="CreateContainer within sandbox \"5604e91dd6321497dce70e8cc176af4a2ed9256eb9e6b139472ce88a6386ed00\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 11 00:20:42.232438 containerd[1471]: time="2025-07-11T00:20:42.232360789Z" level=info msg="CreateContainer within sandbox \"5604e91dd6321497dce70e8cc176af4a2ed9256eb9e6b139472ce88a6386ed00\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3abc4b9f284f6c4108c2a4db1630af7cfc96e8f6a1245349aad3a12db69b2574\"" Jul 11 00:20:42.233606 containerd[1471]: time="2025-07-11T00:20:42.233475988Z" level=info msg="StartContainer for \"3abc4b9f284f6c4108c2a4db1630af7cfc96e8f6a1245349aad3a12db69b2574\"" Jul 11 00:20:42.248747 containerd[1471]: time="2025-07-11T00:20:42.248631870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:20:42.248747 containerd[1471]: time="2025-07-11T00:20:42.248705058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:20:42.248747 containerd[1471]: time="2025-07-11T00:20:42.248739282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:20:42.249017 containerd[1471]: time="2025-07-11T00:20:42.248958115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:20:42.277426 systemd[1]: Started cri-containerd-3abc4b9f284f6c4108c2a4db1630af7cfc96e8f6a1245349aad3a12db69b2574.scope - libcontainer container 3abc4b9f284f6c4108c2a4db1630af7cfc96e8f6a1245349aad3a12db69b2574. Jul 11 00:20:42.279927 systemd[1]: Started cri-containerd-d7d8318bdd4973f14f13524196a414d08bcf8a2b8ddfa68e2fa722e9d593859c.scope - libcontainer container d7d8318bdd4973f14f13524196a414d08bcf8a2b8ddfa68e2fa722e9d593859c. 
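The burst above is the normal CRI call order: the kubelet asks containerd to create a pod sandbox (RunPodSandbox), the runc v2 shim loads its event/ttrpc plugins, systemd starts tracking the sandbox and its containers as cri-containerd-<id>.scope units, and only then are containers created and started inside the sandbox. A hedged sketch of the same three calls issued by hand with crictl; pod.json and container.json are hypothetical config files, not taken from this host:

    # mirror the logged RunPodSandbox -> CreateContainer -> StartContainer sequence
    POD_ID=$(crictl runp pod.json)                          # RunPodSandbox, returns sandbox id
    CID=$(crictl create "$POD_ID" container.json pod.json)  # CreateContainer inside that sandbox
    crictl start "$CID"                                     # StartContainer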
Jul 11 00:20:42.465670 containerd[1471]: time="2025-07-11T00:20:42.465601339Z" level=info msg="StartContainer for \"3abc4b9f284f6c4108c2a4db1630af7cfc96e8f6a1245349aad3a12db69b2574\" returns successfully" Jul 11 00:20:42.465881 containerd[1471]: time="2025-07-11T00:20:42.465606299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-trw8p,Uid:315847ce-8add-48d3-94ec-6cb5235bc513,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d7d8318bdd4973f14f13524196a414d08bcf8a2b8ddfa68e2fa722e9d593859c\"" Jul 11 00:20:42.468827 containerd[1471]: time="2025-07-11T00:20:42.468776217Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 11 00:20:42.536729 kubelet[2568]: E0711 00:20:42.536667 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:20:42.579335 kubelet[2568]: I0711 00:20:42.579252 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x49rt" podStartSLOduration=3.579227984 podStartE2EDuration="3.579227984s" podCreationTimestamp="2025-07-11 00:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:20:42.579058905 +0000 UTC m=+8.219269434" watchObservedRunningTime="2025-07-11 00:20:42.579227984 +0000 UTC m=+8.219438512" Jul 11 00:20:43.634805 kubelet[2568]: E0711 00:20:43.634696 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:20:44.542532 kubelet[2568]: E0711 00:20:44.542479 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:20:45.544457 kubelet[2568]: E0711 00:20:45.544414 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:20:49.378518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount624476816.mount: Deactivated successfully. 
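The pod_startup_latency_tracker entry shows how the kube-proxy pod's startup figure is derived: both image-pull timestamps are the zero value (the image was already on disk), so the SLO duration reduces to observedRunningTime minus podCreationTimestamp. Worked out from the logged values:

    podStartE2EDuration = observedRunningTime - podCreationTimestamp
                        ≈ 00:20:42.579 - 00:20:39.000 = 3.579 s (= podStartSLOduration here)

The m=+8.219... suffix is Go's monotonic-clock reading, i.e. seconds elapsed since this kubelet process started.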
Jul 11 00:20:53.916650 containerd[1471]: time="2025-07-11T00:20:53.916550877Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:20:54.043853 containerd[1471]: time="2025-07-11T00:20:54.043725429Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 11 00:20:54.232287 containerd[1471]: time="2025-07-11T00:20:54.232069076Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:20:54.338496 containerd[1471]: time="2025-07-11T00:20:54.338318222Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:20:54.339365 containerd[1471]: time="2025-07-11T00:20:54.339300808Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 11.870471091s" Jul 11 00:20:54.339365 containerd[1471]: time="2025-07-11T00:20:54.339348167Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 11 00:20:54.357172 containerd[1471]: time="2025-07-11T00:20:54.357104763Z" level=info msg="CreateContainer within sandbox \"d7d8318bdd4973f14f13524196a414d08bcf8a2b8ddfa68e2fa722e9d593859c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 11 00:20:55.355596 containerd[1471]: time="2025-07-11T00:20:55.355489697Z" level=info msg="CreateContainer within sandbox \"d7d8318bdd4973f14f13524196a414d08bcf8a2b8ddfa68e2fa722e9d593859c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0fdd213565c03bb3f02e6eadb676509d46755d40741ce9631988ae51c80414d1\"" Jul 11 00:20:55.356228 containerd[1471]: time="2025-07-11T00:20:55.356185063Z" level=info msg="StartContainer for \"0fdd213565c03bb3f02e6eadb676509d46755d40741ce9631988ae51c80414d1\"" Jul 11 00:20:55.392271 systemd[1]: Started cri-containerd-0fdd213565c03bb3f02e6eadb676509d46755d40741ce9631988ae51c80414d1.scope - libcontainer container 0fdd213565c03bb3f02e6eadb676509d46755d40741ce9631988ae51c80414d1. 
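For scale, the operator image pull above moved roughly 25 MB in just under 12 seconds; from the logged numbers:

    throughput ≈ 25052538 B / 11.870 s ≈ 2.1 MB/s (about 2.0 MiB/s)

A rough figure only: the size reported is the registry transfer, and the elapsed time spans the whole pull, including any unpacking.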
Jul 11 00:20:55.653003 containerd[1471]: time="2025-07-11T00:20:55.652785413Z" level=info msg="StartContainer for \"0fdd213565c03bb3f02e6eadb676509d46755d40741ce9631988ae51c80414d1\" returns successfully" Jul 11 00:20:57.689107 kubelet[2568]: I0711 00:20:57.688999 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-trw8p" podStartSLOduration=5.816795335 podStartE2EDuration="17.688980959s" podCreationTimestamp="2025-07-11 00:20:40 +0000 UTC" firstStartedPulling="2025-07-11 00:20:42.46829434 +0000 UTC m=+8.108504868" lastFinishedPulling="2025-07-11 00:20:54.340479963 +0000 UTC m=+19.980690492" observedRunningTime="2025-07-11 00:20:57.688727905 +0000 UTC m=+23.328938443" watchObservedRunningTime="2025-07-11 00:20:57.688980959 +0000 UTC m=+23.329191487" Jul 11 00:21:12.470876 sudo[1663]: pam_unix(sudo:session): session closed for user root Jul 11 00:21:12.688320 sshd[1660]: pam_unix(sshd:session): session closed for user core Jul 11 00:21:12.693709 systemd[1]: sshd@8-10.0.0.89:22-10.0.0.1:48504.service: Deactivated successfully. Jul 11 00:21:12.697522 systemd[1]: session-9.scope: Deactivated successfully. Jul 11 00:21:12.697817 systemd[1]: session-9.scope: Consumed 7.612s CPU time, 163.0M memory peak, 0B memory swap peak. Jul 11 00:21:12.698647 systemd-logind[1453]: Session 9 logged out. Waiting for processes to exit. Jul 11 00:21:12.700185 systemd-logind[1453]: Removed session 9. Jul 11 00:21:26.649003 systemd[1]: Created slice kubepods-besteffort-pod12c8a21e_18bf_45df_87d2_40fff30bb435.slice - libcontainer container kubepods-besteffort-pod12c8a21e_18bf_45df_87d2_40fff30bb435.slice. Jul 11 00:21:26.697276 systemd[1]: Created slice kubepods-besteffort-poddc05c89f_083b_4f71_a8ea_bbff79362cdf.slice - libcontainer container kubepods-besteffort-poddc05c89f_083b_4f71_a8ea_bbff79362cdf.slice. 
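The kubepods-besteffort-pod<UID>.slice names systemd logs here encode the kubelet's pod-level cgroups: these pods carry no resource requests or limits, so they land in the BestEffort QoS class, and the pod UID (dashes mapped to underscores) becomes part of the unit name. Under the systemd cgroup driver the hierarchy typically looks like the following; paths are illustrative and assume the cgroup v2 unified mount:

    /sys/fs/cgroup/kubepods.slice/
      kubepods-besteffort.slice/
        kubepods-besteffort-pod12c8a21e_18bf_45df_87d2_40fff30bb435.slice/   <- calico-typha pod from this log
          cri-containerd-<container-id>.scope                               <- one scope per container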
Jul 11 00:21:26.750047 kubelet[2568]: I0711 00:21:26.749955 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/12c8a21e-18bf-45df-87d2-40fff30bb435-typha-certs\") pod \"calico-typha-546c679459-wgq9f\" (UID: \"12c8a21e-18bf-45df-87d2-40fff30bb435\") " pod="calico-system/calico-typha-546c679459-wgq9f" Jul 11 00:21:26.750047 kubelet[2568]: I0711 00:21:26.750016 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97rpl\" (UniqueName: \"kubernetes.io/projected/12c8a21e-18bf-45df-87d2-40fff30bb435-kube-api-access-97rpl\") pod \"calico-typha-546c679459-wgq9f\" (UID: \"12c8a21e-18bf-45df-87d2-40fff30bb435\") " pod="calico-system/calico-typha-546c679459-wgq9f" Jul 11 00:21:26.750047 kubelet[2568]: I0711 00:21:26.750051 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12c8a21e-18bf-45df-87d2-40fff30bb435-tigera-ca-bundle\") pod \"calico-typha-546c679459-wgq9f\" (UID: \"12c8a21e-18bf-45df-87d2-40fff30bb435\") " pod="calico-system/calico-typha-546c679459-wgq9f" Jul 11 00:21:26.850842 kubelet[2568]: I0711 00:21:26.850768 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtbnk\" (UniqueName: \"kubernetes.io/projected/dc05c89f-083b-4f71-a8ea-bbff79362cdf-kube-api-access-rtbnk\") pod \"calico-node-rjx2b\" (UID: \"dc05c89f-083b-4f71-a8ea-bbff79362cdf\") " pod="calico-system/calico-node-rjx2b" Jul 11 00:21:26.850842 kubelet[2568]: I0711 00:21:26.850827 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dc05c89f-083b-4f71-a8ea-bbff79362cdf-flexvol-driver-host\") pod \"calico-node-rjx2b\" (UID: \"dc05c89f-083b-4f71-a8ea-bbff79362cdf\") " pod="calico-system/calico-node-rjx2b" Jul 11 00:21:26.851162 kubelet[2568]: I0711 00:21:26.850868 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dc05c89f-083b-4f71-a8ea-bbff79362cdf-cni-log-dir\") pod \"calico-node-rjx2b\" (UID: \"dc05c89f-083b-4f71-a8ea-bbff79362cdf\") " pod="calico-system/calico-node-rjx2b" Jul 11 00:21:26.851162 kubelet[2568]: I0711 00:21:26.850892 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc05c89f-083b-4f71-a8ea-bbff79362cdf-lib-modules\") pod \"calico-node-rjx2b\" (UID: \"dc05c89f-083b-4f71-a8ea-bbff79362cdf\") " pod="calico-system/calico-node-rjx2b" Jul 11 00:21:26.851162 kubelet[2568]: I0711 00:21:26.851019 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dc05c89f-083b-4f71-a8ea-bbff79362cdf-var-run-calico\") pod \"calico-node-rjx2b\" (UID: \"dc05c89f-083b-4f71-a8ea-bbff79362cdf\") " pod="calico-system/calico-node-rjx2b" Jul 11 00:21:26.851162 kubelet[2568]: I0711 00:21:26.851097 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc05c89f-083b-4f71-a8ea-bbff79362cdf-xtables-lock\") pod \"calico-node-rjx2b\" (UID: \"dc05c89f-083b-4f71-a8ea-bbff79362cdf\") " 
pod="calico-system/calico-node-rjx2b" Jul 11 00:21:26.851162 kubelet[2568]: I0711 00:21:26.851155 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dc05c89f-083b-4f71-a8ea-bbff79362cdf-node-certs\") pod \"calico-node-rjx2b\" (UID: \"dc05c89f-083b-4f71-a8ea-bbff79362cdf\") " pod="calico-system/calico-node-rjx2b" Jul 11 00:21:26.851351 kubelet[2568]: I0711 00:21:26.851170 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc05c89f-083b-4f71-a8ea-bbff79362cdf-tigera-ca-bundle\") pod \"calico-node-rjx2b\" (UID: \"dc05c89f-083b-4f71-a8ea-bbff79362cdf\") " pod="calico-system/calico-node-rjx2b" Jul 11 00:21:26.851351 kubelet[2568]: I0711 00:21:26.851238 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dc05c89f-083b-4f71-a8ea-bbff79362cdf-cni-bin-dir\") pod \"calico-node-rjx2b\" (UID: \"dc05c89f-083b-4f71-a8ea-bbff79362cdf\") " pod="calico-system/calico-node-rjx2b" Jul 11 00:21:26.851351 kubelet[2568]: I0711 00:21:26.851256 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dc05c89f-083b-4f71-a8ea-bbff79362cdf-cni-net-dir\") pod \"calico-node-rjx2b\" (UID: \"dc05c89f-083b-4f71-a8ea-bbff79362cdf\") " pod="calico-system/calico-node-rjx2b" Jul 11 00:21:26.851351 kubelet[2568]: I0711 00:21:26.851273 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dc05c89f-083b-4f71-a8ea-bbff79362cdf-policysync\") pod \"calico-node-rjx2b\" (UID: \"dc05c89f-083b-4f71-a8ea-bbff79362cdf\") " pod="calico-system/calico-node-rjx2b" Jul 11 00:21:26.851351 kubelet[2568]: I0711 00:21:26.851298 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dc05c89f-083b-4f71-a8ea-bbff79362cdf-var-lib-calico\") pod \"calico-node-rjx2b\" (UID: \"dc05c89f-083b-4f71-a8ea-bbff79362cdf\") " pod="calico-system/calico-node-rjx2b" Jul 11 00:21:27.127481 kubelet[2568]: E0711 00:21:27.127421 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.127481 kubelet[2568]: W0711 00:21:27.127446 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.127481 kubelet[2568]: E0711 00:21:27.127478 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:21:27.128330 kubelet[2568]: E0711 00:21:27.128189 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.128330 kubelet[2568]: W0711 00:21:27.128253 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.128330 kubelet[2568]: E0711 00:21:27.128283 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:21:27.254157 kubelet[2568]: E0711 00:21:27.254107 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:27.259239 containerd[1471]: time="2025-07-11T00:21:27.259190575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-546c679459-wgq9f,Uid:12c8a21e-18bf-45df-87d2-40fff30bb435,Namespace:calico-system,Attempt:0,}" Jul 11 00:21:27.305413 containerd[1471]: time="2025-07-11T00:21:27.305360286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rjx2b,Uid:dc05c89f-083b-4f71-a8ea-bbff79362cdf,Namespace:calico-system,Attempt:0,}" Jul 11 00:21:27.698215 kubelet[2568]: E0711 00:21:27.698127 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-24rnq" podUID="ef0f4240-50c1-431a-b911-54802b65a3ca" Jul 11 00:21:27.757389 kubelet[2568]: E0711 00:21:27.757349 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.757389 kubelet[2568]: W0711 00:21:27.757378 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.757842 kubelet[2568]: E0711 00:21:27.757411 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:21:27.757842 kubelet[2568]: E0711 00:21:27.757749 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.757842 kubelet[2568]: W0711 00:21:27.757775 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.757842 kubelet[2568]: E0711 00:21:27.757806 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
[the driver-call.go:262 "Failed to unmarshal output" / driver-call.go:149 "FlexVolume: driver call failed" / plugins.go:703 "Error dynamically probing plugins" triplet repeats many more times during this second with only timestamps changing; the longer runs of duplicates are omitted here and below]
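Each collapsed triplet is one failed probe: the kubelet rescans its FlexVolume plugin directory, finds Calico's nodeagent~uds driver directory, tries to exec the uds binary with the init argument, gets an empty stdout because the binary is not there ("executable file not found in $PATH"), and then fails to parse that empty output as JSON, hence "unexpected end of JSON input". The errors are expected at this point: the calico-node pod normally installs the driver through the flexvol-driver-host mount listed earlier, and the probes succeed once it has run. For reference, a driver that is present must answer init with a JSON status on stdout; a minimal sketch of the conventional success reply under the FlexVolume exec protocol:

    {
      "status": "Success",
      "capabilities": { "attach": false }
    }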
Jul 11 00:21:27.762700 kubelet[2568]: I0711 00:21:27.762645 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ef0f4240-50c1-431a-b911-54802b65a3ca-registration-dir\") pod \"csi-node-driver-24rnq\" (UID: \"ef0f4240-50c1-431a-b911-54802b65a3ca\") " pod="calico-system/csi-node-driver-24rnq"
Jul 11 00:21:27.762986 kubelet[2568]: I0711 00:21:27.762916 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef0f4240-50c1-431a-b911-54802b65a3ca-kubelet-dir\") pod \"csi-node-driver-24rnq\" (UID: \"ef0f4240-50c1-431a-b911-54802b65a3ca\") " pod="calico-system/csi-node-driver-24rnq"
Jul 11 00:21:27.763651 kubelet[2568]: E0711 00:21:27.763636 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.763651 kubelet[2568]: W0711 00:21:27.763647 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.763701 kubelet[2568]: E0711 00:21:27.763655 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 11 00:21:27.763701 kubelet[2568]: I0711 00:21:27.763676 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ef0f4240-50c1-431a-b911-54802b65a3ca-varrun\") pod \"csi-node-driver-24rnq\" (UID: \"ef0f4240-50c1-431a-b911-54802b65a3ca\") " pod="calico-system/csi-node-driver-24rnq" Jul 11 00:21:27.763994 kubelet[2568]: E0711 00:21:27.763962 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.763994 kubelet[2568]: W0711 00:21:27.763987 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.764042 kubelet[2568]: E0711 00:21:27.764013 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:21:27.764282 kubelet[2568]: E0711 00:21:27.764268 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.764282 kubelet[2568]: W0711 00:21:27.764280 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.764351 kubelet[2568]: E0711 00:21:27.764291 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:21:27.764557 kubelet[2568]: E0711 00:21:27.764537 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.764557 kubelet[2568]: W0711 00:21:27.764548 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.764606 kubelet[2568]: E0711 00:21:27.764558 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:21:27.764630 kubelet[2568]: I0711 00:21:27.764594 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lknvl\" (UniqueName: \"kubernetes.io/projected/ef0f4240-50c1-431a-b911-54802b65a3ca-kube-api-access-lknvl\") pod \"csi-node-driver-24rnq\" (UID: \"ef0f4240-50c1-431a-b911-54802b65a3ca\") " pod="calico-system/csi-node-driver-24rnq" Jul 11 00:21:27.764802 kubelet[2568]: E0711 00:21:27.764783 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.764802 kubelet[2568]: W0711 00:21:27.764798 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.764854 kubelet[2568]: E0711 00:21:27.764809 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:21:27.764996 kubelet[2568]: E0711 00:21:27.764979 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.764996 kubelet[2568]: W0711 00:21:27.764993 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.765045 kubelet[2568]: E0711 00:21:27.765005 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:21:27.765265 kubelet[2568]: E0711 00:21:27.765248 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.765265 kubelet[2568]: W0711 00:21:27.765259 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.765328 kubelet[2568]: E0711 00:21:27.765268 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:21:27.765328 kubelet[2568]: I0711 00:21:27.765289 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ef0f4240-50c1-431a-b911-54802b65a3ca-socket-dir\") pod \"csi-node-driver-24rnq\" (UID: \"ef0f4240-50c1-431a-b911-54802b65a3ca\") " pod="calico-system/csi-node-driver-24rnq" Jul 11 00:21:27.765533 kubelet[2568]: E0711 00:21:27.765516 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.765555 kubelet[2568]: W0711 00:21:27.765532 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.765555 kubelet[2568]: E0711 00:21:27.765543 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:21:27.765763 kubelet[2568]: E0711 00:21:27.765751 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.765795 kubelet[2568]: W0711 00:21:27.765762 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.765795 kubelet[2568]: E0711 00:21:27.765785 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
[the same FlexVolume probe-error triplet continues to repeat through 00:21:27.87 with only timestamps changing; this run of duplicates is omitted]
Jul 11 00:21:27.872485 kubelet[2568]: E0711 00:21:27.871965 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.872485 kubelet[2568]: W0711 00:21:27.871976 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.872782 kubelet[2568]: E0711 00:21:27.871991 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 11 00:21:27.872782 kubelet[2568]: E0711 00:21:27.872333 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.872782 kubelet[2568]: W0711 00:21:27.872345 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.872782 kubelet[2568]: E0711 00:21:27.872357 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:21:27.872782 kubelet[2568]: E0711 00:21:27.872621 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.872782 kubelet[2568]: W0711 00:21:27.872632 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.872782 kubelet[2568]: E0711 00:21:27.872644 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:21:27.872985 kubelet[2568]: E0711 00:21:27.872938 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.872985 kubelet[2568]: W0711 00:21:27.872949 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.872985 kubelet[2568]: E0711 00:21:27.872960 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:21:27.873320 kubelet[2568]: E0711 00:21:27.873294 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.873320 kubelet[2568]: W0711 00:21:27.873311 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.873399 kubelet[2568]: E0711 00:21:27.873323 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:21:27.873648 kubelet[2568]: E0711 00:21:27.873623 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.873648 kubelet[2568]: W0711 00:21:27.873639 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.873752 kubelet[2568]: E0711 00:21:27.873652 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:21:27.873938 kubelet[2568]: E0711 00:21:27.873914 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.873989 kubelet[2568]: W0711 00:21:27.873947 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.873989 kubelet[2568]: E0711 00:21:27.873960 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:21:27.874520 kubelet[2568]: E0711 00:21:27.874492 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.874520 kubelet[2568]: W0711 00:21:27.874510 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.874624 kubelet[2568]: E0711 00:21:27.874524 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:21:27.877183 kubelet[2568]: E0711 00:21:27.877149 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.877183 kubelet[2568]: W0711 00:21:27.877165 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.877183 kubelet[2568]: E0711 00:21:27.877176 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:21:27.877803 kubelet[2568]: E0711 00:21:27.877741 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.877803 kubelet[2568]: W0711 00:21:27.877795 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.877901 kubelet[2568]: E0711 00:21:27.877826 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:21:27.878270 kubelet[2568]: E0711 00:21:27.878247 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.878270 kubelet[2568]: W0711 00:21:27.878264 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.878364 kubelet[2568]: E0711 00:21:27.878277 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:21:27.878711 kubelet[2568]: E0711 00:21:27.878668 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.878711 kubelet[2568]: W0711 00:21:27.878686 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.878910 kubelet[2568]: E0711 00:21:27.878713 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:21:27.879018 kubelet[2568]: E0711 00:21:27.878974 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.879018 kubelet[2568]: W0711 00:21:27.879012 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.879139 kubelet[2568]: E0711 00:21:27.879025 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:21:27.896504 kubelet[2568]: E0711 00:21:27.896409 2568 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:21:27.896504 kubelet[2568]: W0711 00:21:27.896436 2568 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:21:27.896504 kubelet[2568]: E0711 00:21:27.896459 2568 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:21:28.126517 containerd[1471]: time="2025-07-11T00:21:28.126421633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:21:28.127860 containerd[1471]: time="2025-07-11T00:21:28.127566254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:21:28.127860 containerd[1471]: time="2025-07-11T00:21:28.127775485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:21:28.128247 containerd[1471]: time="2025-07-11T00:21:28.128208774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:21:28.148749 containerd[1471]: time="2025-07-11T00:21:28.148208005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:21:28.148749 containerd[1471]: time="2025-07-11T00:21:28.148382383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:21:28.148749 containerd[1471]: time="2025-07-11T00:21:28.148402224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:21:28.148749 containerd[1471]: time="2025-07-11T00:21:28.148538434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:21:28.158690 systemd[1]: Started cri-containerd-c2dca053786da16627db8fcb6193fcaa525f48de4845a024ff97e35b71e78a98.scope - libcontainer container c2dca053786da16627db8fcb6193fcaa525f48de4845a024ff97e35b71e78a98. Jul 11 00:21:28.187909 systemd[1]: Started cri-containerd-c7f18d106c974de5a33e481be0fce18a32a4be1279dc514c808caaeb4e1a4f2e.scope - libcontainer container c7f18d106c974de5a33e481be0fce18a32a4be1279dc514c808caaeb4e1a4f2e. Jul 11 00:21:28.249254 containerd[1471]: time="2025-07-11T00:21:28.249044171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rjx2b,Uid:dc05c89f-083b-4f71-a8ea-bbff79362cdf,Namespace:calico-system,Attempt:0,} returns sandbox id \"c7f18d106c974de5a33e481be0fce18a32a4be1279dc514c808caaeb4e1a4f2e\"" Jul 11 00:21:28.255524 containerd[1471]: time="2025-07-11T00:21:28.255449030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-546c679459-wgq9f,Uid:12c8a21e-18bf-45df-87d2-40fff30bb435,Namespace:calico-system,Attempt:0,} returns sandbox id \"c2dca053786da16627db8fcb6193fcaa525f48de4845a024ff97e35b71e78a98\"" Jul 11 00:21:28.257481 kubelet[2568]: E0711 00:21:28.257437 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:28.258410 containerd[1471]: time="2025-07-11T00:21:28.258368878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 11 00:21:29.500790 kubelet[2568]: E0711 00:21:29.500697 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-24rnq" podUID="ef0f4240-50c1-431a-b911-54802b65a3ca" Jul 11 00:21:31.500851 kubelet[2568]: E0711 00:21:31.500768 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-24rnq" podUID="ef0f4240-50c1-431a-b911-54802b65a3ca" Jul 11 00:21:32.918500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2842694775.mount: Deactivated successfully. 
Jul 11 00:21:33.468293 containerd[1471]: time="2025-07-11T00:21:33.468170347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:33.470751 containerd[1471]: time="2025-07-11T00:21:33.470637602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5939797" Jul 11 00:21:33.473829 containerd[1471]: time="2025-07-11T00:21:33.473748365Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:33.478543 containerd[1471]: time="2025-07-11T00:21:33.478464562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:33.479469 containerd[1471]: time="2025-07-11T00:21:33.479411297Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 5.22099988s" Jul 11 00:21:33.479532 containerd[1471]: time="2025-07-11T00:21:33.479471388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 11 00:21:33.480885 containerd[1471]: time="2025-07-11T00:21:33.480828507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 11 00:21:33.489491 containerd[1471]: time="2025-07-11T00:21:33.489426534Z" level=info msg="CreateContainer within sandbox \"c7f18d106c974de5a33e481be0fce18a32a4be1279dc514c808caaeb4e1a4f2e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 11 00:21:33.501562 kubelet[2568]: E0711 00:21:33.501105 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-24rnq" podUID="ef0f4240-50c1-431a-b911-54802b65a3ca" Jul 11 00:21:33.521944 containerd[1471]: time="2025-07-11T00:21:33.521837314Z" level=info msg="CreateContainer within sandbox \"c7f18d106c974de5a33e481be0fce18a32a4be1279dc514c808caaeb4e1a4f2e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a22027c8b2b3dbd9b365648110a067790004c0c0b6683e981cfcf63d9dc9717f\"" Jul 11 00:21:33.522822 containerd[1471]: time="2025-07-11T00:21:33.522719968Z" level=info msg="StartContainer for \"a22027c8b2b3dbd9b365648110a067790004c0c0b6683e981cfcf63d9dc9717f\"" Jul 11 00:21:33.560381 systemd[1]: Started cri-containerd-a22027c8b2b3dbd9b365648110a067790004c0c0b6683e981cfcf63d9dc9717f.scope - libcontainer container a22027c8b2b3dbd9b365648110a067790004c0c0b6683e981cfcf63d9dc9717f. 
Jul 11 00:21:33.606987 containerd[1471]: time="2025-07-11T00:21:33.606899326Z" level=info msg="StartContainer for \"a22027c8b2b3dbd9b365648110a067790004c0c0b6683e981cfcf63d9dc9717f\" returns successfully" Jul 11 00:21:33.621824 systemd[1]: cri-containerd-a22027c8b2b3dbd9b365648110a067790004c0c0b6683e981cfcf63d9dc9717f.scope: Deactivated successfully. Jul 11 00:21:33.798570 containerd[1471]: time="2025-07-11T00:21:33.793846545Z" level=info msg="shim disconnected" id=a22027c8b2b3dbd9b365648110a067790004c0c0b6683e981cfcf63d9dc9717f namespace=k8s.io Jul 11 00:21:33.798570 containerd[1471]: time="2025-07-11T00:21:33.798445623Z" level=warning msg="cleaning up after shim disconnected" id=a22027c8b2b3dbd9b365648110a067790004c0c0b6683e981cfcf63d9dc9717f namespace=k8s.io Jul 11 00:21:33.798570 containerd[1471]: time="2025-07-11T00:21:33.798468360Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:21:33.886159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a22027c8b2b3dbd9b365648110a067790004c0c0b6683e981cfcf63d9dc9717f-rootfs.mount: Deactivated successfully. Jul 11 00:21:35.501201 kubelet[2568]: E0711 00:21:35.501119 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-24rnq" podUID="ef0f4240-50c1-431a-b911-54802b65a3ca" Jul 11 00:21:37.502890 kubelet[2568]: E0711 00:21:37.501099 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-24rnq" podUID="ef0f4240-50c1-431a-b911-54802b65a3ca" Jul 11 00:21:40.639468 kubelet[2568]: E0711 00:21:39.500826 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-24rnq" podUID="ef0f4240-50c1-431a-b911-54802b65a3ca" Jul 11 00:21:41.382767 kubelet[2568]: E0711 00:21:41.382712 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:41.500897 kubelet[2568]: E0711 00:21:41.500815 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-24rnq" podUID="ef0f4240-50c1-431a-b911-54802b65a3ca" Jul 11 00:21:42.420744 kubelet[2568]: E0711 00:21:42.420664 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:43.501211 kubelet[2568]: E0711 00:21:43.501150 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-24rnq" podUID="ef0f4240-50c1-431a-b911-54802b65a3ca" Jul 11 00:21:44.479000 containerd[1471]: 
time="2025-07-11T00:21:44.478930821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:44.481182 containerd[1471]: time="2025-07-11T00:21:44.481103155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33740523" Jul 11 00:21:44.483542 containerd[1471]: time="2025-07-11T00:21:44.483312733Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:44.486144 containerd[1471]: time="2025-07-11T00:21:44.485810877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:44.487324 containerd[1471]: time="2025-07-11T00:21:44.487257394Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 11.006382302s" Jul 11 00:21:44.487324 containerd[1471]: time="2025-07-11T00:21:44.487300029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 11 00:21:44.488522 containerd[1471]: time="2025-07-11T00:21:44.488436556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 11 00:21:44.511503 containerd[1471]: time="2025-07-11T00:21:44.511303007Z" level=info msg="CreateContainer within sandbox \"c2dca053786da16627db8fcb6193fcaa525f48de4845a024ff97e35b71e78a98\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 11 00:21:44.535727 containerd[1471]: time="2025-07-11T00:21:44.535566334Z" level=info msg="CreateContainer within sandbox \"c2dca053786da16627db8fcb6193fcaa525f48de4845a024ff97e35b71e78a98\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b648fe85c1337c9d421b994f9f8224934d12a17c71fc53af5c5de9753680ce47\"" Jul 11 00:21:44.536370 containerd[1471]: time="2025-07-11T00:21:44.536324436Z" level=info msg="StartContainer for \"b648fe85c1337c9d421b994f9f8224934d12a17c71fc53af5c5de9753680ce47\"" Jul 11 00:21:44.575407 systemd[1]: Started cri-containerd-b648fe85c1337c9d421b994f9f8224934d12a17c71fc53af5c5de9753680ce47.scope - libcontainer container b648fe85c1337c9d421b994f9f8224934d12a17c71fc53af5c5de9753680ce47. 
Jul 11 00:21:44.640180 containerd[1471]: time="2025-07-11T00:21:44.640023255Z" level=info msg="StartContainer for \"b648fe85c1337c9d421b994f9f8224934d12a17c71fc53af5c5de9753680ce47\" returns successfully" Jul 11 00:21:45.501030 kubelet[2568]: E0711 00:21:45.500951 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-24rnq" podUID="ef0f4240-50c1-431a-b911-54802b65a3ca" Jul 11 00:21:45.596248 kubelet[2568]: E0711 00:21:45.596184 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:45.904561 kubelet[2568]: I0711 00:21:45.904357 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-546c679459-wgq9f" podStartSLOduration=3.674365561 podStartE2EDuration="19.904335933s" podCreationTimestamp="2025-07-11 00:21:26 +0000 UTC" firstStartedPulling="2025-07-11 00:21:28.258212277 +0000 UTC m=+53.898422805" lastFinishedPulling="2025-07-11 00:21:44.488182649 +0000 UTC m=+70.128393177" observedRunningTime="2025-07-11 00:21:45.903757741 +0000 UTC m=+71.543968289" watchObservedRunningTime="2025-07-11 00:21:45.904335933 +0000 UTC m=+71.544546481" Jul 11 00:21:46.599879 kubelet[2568]: E0711 00:21:46.599822 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:47.501258 kubelet[2568]: E0711 00:21:47.500745 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-24rnq" podUID="ef0f4240-50c1-431a-b911-54802b65a3ca" Jul 11 00:21:47.605202 kubelet[2568]: E0711 00:21:47.605116 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:48.501618 kubelet[2568]: E0711 00:21:48.501552 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:49.500904 kubelet[2568]: E0711 00:21:49.500817 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-24rnq" podUID="ef0f4240-50c1-431a-b911-54802b65a3ca" Jul 11 00:21:51.025375 containerd[1471]: time="2025-07-11T00:21:51.025300282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:51.027662 containerd[1471]: time="2025-07-11T00:21:51.027566272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 11 00:21:51.035004 containerd[1471]: time="2025-07-11T00:21:51.034930092Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:51.042055 containerd[1471]: time="2025-07-11T00:21:51.041956835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:21:51.043038 containerd[1471]: time="2025-07-11T00:21:51.042954898Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 6.554453483s" Jul 11 00:21:51.043038 containerd[1471]: time="2025-07-11T00:21:51.043024937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 11 00:21:51.062204 containerd[1471]: time="2025-07-11T00:21:51.062067187Z" level=info msg="CreateContainer within sandbox \"c7f18d106c974de5a33e481be0fce18a32a4be1279dc514c808caaeb4e1a4f2e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 11 00:21:51.097043 containerd[1471]: time="2025-07-11T00:21:51.096887292Z" level=info msg="CreateContainer within sandbox \"c7f18d106c974de5a33e481be0fce18a32a4be1279dc514c808caaeb4e1a4f2e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5971961f910b5303f7c05b51d4d3eb1f84d26e7f89f4eb4a66508a5fd49555e0\"" Jul 11 00:21:51.097765 containerd[1471]: time="2025-07-11T00:21:51.097695851Z" level=info msg="StartContainer for \"5971961f910b5303f7c05b51d4d3eb1f84d26e7f89f4eb4a66508a5fd49555e0\"" Jul 11 00:21:51.136116 systemd[1]: run-containerd-runc-k8s.io-5971961f910b5303f7c05b51d4d3eb1f84d26e7f89f4eb4a66508a5fd49555e0-runc.45Dvns.mount: Deactivated successfully. Jul 11 00:21:51.148512 systemd[1]: Started cri-containerd-5971961f910b5303f7c05b51d4d3eb1f84d26e7f89f4eb4a66508a5fd49555e0.scope - libcontainer container 5971961f910b5303f7c05b51d4d3eb1f84d26e7f89f4eb4a66508a5fd49555e0. Jul 11 00:21:51.197544 containerd[1471]: time="2025-07-11T00:21:51.197456826Z" level=info msg="StartContainer for \"5971961f910b5303f7c05b51d4d3eb1f84d26e7f89f4eb4a66508a5fd49555e0\" returns successfully" Jul 11 00:21:51.501220 kubelet[2568]: E0711 00:21:51.501111 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-24rnq" podUID="ef0f4240-50c1-431a-b911-54802b65a3ca" Jul 11 00:21:53.501176 kubelet[2568]: E0711 00:21:53.500982 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-24rnq" podUID="ef0f4240-50c1-431a-b911-54802b65a3ca" Jul 11 00:21:54.928505 systemd[1]: cri-containerd-5971961f910b5303f7c05b51d4d3eb1f84d26e7f89f4eb4a66508a5fd49555e0.scope: Deactivated successfully. Jul 11 00:21:54.954567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5971961f910b5303f7c05b51d4d3eb1f84d26e7f89f4eb4a66508a5fd49555e0-rootfs.mount: Deactivated successfully. 
Jul 11 00:21:55.064112 kubelet[2568]: I0711 00:21:55.064045 2568 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 11 00:21:55.148927 containerd[1471]: time="2025-07-11T00:21:55.148823120Z" level=info msg="shim disconnected" id=5971961f910b5303f7c05b51d4d3eb1f84d26e7f89f4eb4a66508a5fd49555e0 namespace=k8s.io Jul 11 00:21:55.148927 containerd[1471]: time="2025-07-11T00:21:55.148919249Z" level=warning msg="cleaning up after shim disconnected" id=5971961f910b5303f7c05b51d4d3eb1f84d26e7f89f4eb4a66508a5fd49555e0 namespace=k8s.io Jul 11 00:21:55.148927 containerd[1471]: time="2025-07-11T00:21:55.148936754Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:21:55.204672 containerd[1471]: time="2025-07-11T00:21:55.204607106Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:21:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 11 00:21:55.288930 kubelet[2568]: I0711 00:21:55.288874 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t2ld\" (UniqueName: \"kubernetes.io/projected/d89cb119-c161-4cbc-8fe8-fe4dbab872bf-kube-api-access-9t2ld\") pod \"whisker-6994867c74-8bhtg\" (UID: \"d89cb119-c161-4cbc-8fe8-fe4dbab872bf\") " pod="calico-system/whisker-6994867c74-8bhtg" Jul 11 00:21:55.288930 kubelet[2568]: I0711 00:21:55.288928 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d89cb119-c161-4cbc-8fe8-fe4dbab872bf-whisker-backend-key-pair\") pod \"whisker-6994867c74-8bhtg\" (UID: \"d89cb119-c161-4cbc-8fe8-fe4dbab872bf\") " pod="calico-system/whisker-6994867c74-8bhtg" Jul 11 00:21:55.289138 kubelet[2568]: I0711 00:21:55.288949 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d89cb119-c161-4cbc-8fe8-fe4dbab872bf-whisker-ca-bundle\") pod \"whisker-6994867c74-8bhtg\" (UID: \"d89cb119-c161-4cbc-8fe8-fe4dbab872bf\") " pod="calico-system/whisker-6994867c74-8bhtg" Jul 11 00:21:55.293781 systemd[1]: Created slice kubepods-besteffort-podd89cb119_c161_4cbc_8fe8_fe4dbab872bf.slice - libcontainer container kubepods-besteffort-podd89cb119_c161_4cbc_8fe8_fe4dbab872bf.slice. Jul 11 00:21:55.351866 systemd[1]: Created slice kubepods-besteffort-pod86e94e6b_c0ad_463c_abf5_b6899adb9e4c.slice - libcontainer container kubepods-besteffort-pod86e94e6b_c0ad_463c_abf5_b6899adb9e4c.slice. Jul 11 00:21:55.374283 systemd[1]: Created slice kubepods-besteffort-pod11afdd52_9586_41ad_b277_069a0e6d90ba.slice - libcontainer container kubepods-besteffort-pod11afdd52_9586_41ad_b277_069a0e6d90ba.slice. Jul 11 00:21:55.380519 systemd[1]: Created slice kubepods-burstable-pod4fb87ad9_16fb_494a_87eb_605af4502d26.slice - libcontainer container kubepods-burstable-pod4fb87ad9_16fb_494a_87eb_605af4502d26.slice. 
Jul 11 00:21:55.389178 kubelet[2568]: I0711 00:21:55.389141 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/11afdd52-9586-41ad-b277-069a0e6d90ba-calico-apiserver-certs\") pod \"calico-apiserver-6fbf9d5d8f-zclpr\" (UID: \"11afdd52-9586-41ad-b277-069a0e6d90ba\") " pod="calico-apiserver/calico-apiserver-6fbf9d5d8f-zclpr" Jul 11 00:21:55.390393 kubelet[2568]: I0711 00:21:55.389568 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ctcs\" (UniqueName: \"kubernetes.io/projected/85a64a17-b3e6-422f-9756-cb2f80a1643b-kube-api-access-5ctcs\") pod \"coredns-674b8bbfcf-hsp4s\" (UID: \"85a64a17-b3e6-422f-9756-cb2f80a1643b\") " pod="kube-system/coredns-674b8bbfcf-hsp4s" Jul 11 00:21:55.390393 kubelet[2568]: I0711 00:21:55.389601 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jthwz\" (UniqueName: \"kubernetes.io/projected/11afdd52-9586-41ad-b277-069a0e6d90ba-kube-api-access-jthwz\") pod \"calico-apiserver-6fbf9d5d8f-zclpr\" (UID: \"11afdd52-9586-41ad-b277-069a0e6d90ba\") " pod="calico-apiserver/calico-apiserver-6fbf9d5d8f-zclpr" Jul 11 00:21:55.390393 kubelet[2568]: I0711 00:21:55.389634 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fb87ad9-16fb-494a-87eb-605af4502d26-config-volume\") pod \"coredns-674b8bbfcf-zhf8q\" (UID: \"4fb87ad9-16fb-494a-87eb-605af4502d26\") " pod="kube-system/coredns-674b8bbfcf-zhf8q" Jul 11 00:21:55.390393 kubelet[2568]: I0711 00:21:55.389656 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85a64a17-b3e6-422f-9756-cb2f80a1643b-config-volume\") pod \"coredns-674b8bbfcf-hsp4s\" (UID: \"85a64a17-b3e6-422f-9756-cb2f80a1643b\") " pod="kube-system/coredns-674b8bbfcf-hsp4s" Jul 11 00:21:55.390393 kubelet[2568]: I0711 00:21:55.389688 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86e94e6b-c0ad-463c-abf5-b6899adb9e4c-tigera-ca-bundle\") pod \"calico-kube-controllers-7ddcd977db-n58hn\" (UID: \"86e94e6b-c0ad-463c-abf5-b6899adb9e4c\") " pod="calico-system/calico-kube-controllers-7ddcd977db-n58hn" Jul 11 00:21:55.390556 kubelet[2568]: I0711 00:21:55.389708 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfhv8\" (UniqueName: \"kubernetes.io/projected/4fb87ad9-16fb-494a-87eb-605af4502d26-kube-api-access-cfhv8\") pod \"coredns-674b8bbfcf-zhf8q\" (UID: \"4fb87ad9-16fb-494a-87eb-605af4502d26\") " pod="kube-system/coredns-674b8bbfcf-zhf8q" Jul 11 00:21:55.390556 kubelet[2568]: I0711 00:21:55.389752 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jksfg\" (UniqueName: \"kubernetes.io/projected/86e94e6b-c0ad-463c-abf5-b6899adb9e4c-kube-api-access-jksfg\") pod \"calico-kube-controllers-7ddcd977db-n58hn\" (UID: \"86e94e6b-c0ad-463c-abf5-b6899adb9e4c\") " pod="calico-system/calico-kube-controllers-7ddcd977db-n58hn" Jul 11 00:21:55.400408 systemd[1]: Created slice kubepods-burstable-pod85a64a17_b3e6_422f_9756_cb2f80a1643b.slice - libcontainer container 
kubepods-burstable-pod85a64a17_b3e6_422f_9756_cb2f80a1643b.slice. Jul 11 00:21:55.432288 systemd[1]: Created slice kubepods-besteffort-pod1285ec7c_afc4_4f44_b914_280d299b3f6e.slice - libcontainer container kubepods-besteffort-pod1285ec7c_afc4_4f44_b914_280d299b3f6e.slice. Jul 11 00:21:55.443177 systemd[1]: Created slice kubepods-besteffort-podb897360e_69c8_4b60_abf3_671418db329a.slice - libcontainer container kubepods-besteffort-podb897360e_69c8_4b60_abf3_671418db329a.slice. Jul 11 00:21:55.491140 kubelet[2568]: I0711 00:21:55.490951 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b897360e-69c8-4b60-abf3-671418db329a-config\") pod \"goldmane-768f4c5c69-nxg64\" (UID: \"b897360e-69c8-4b60-abf3-671418db329a\") " pod="calico-system/goldmane-768f4c5c69-nxg64" Jul 11 00:21:55.491318 kubelet[2568]: I0711 00:21:55.491146 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b897360e-69c8-4b60-abf3-671418db329a-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-nxg64\" (UID: \"b897360e-69c8-4b60-abf3-671418db329a\") " pod="calico-system/goldmane-768f4c5c69-nxg64" Jul 11 00:21:55.491318 kubelet[2568]: I0711 00:21:55.491177 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdmxc\" (UniqueName: \"kubernetes.io/projected/b897360e-69c8-4b60-abf3-671418db329a-kube-api-access-wdmxc\") pod \"goldmane-768f4c5c69-nxg64\" (UID: \"b897360e-69c8-4b60-abf3-671418db329a\") " pod="calico-system/goldmane-768f4c5c69-nxg64" Jul 11 00:21:55.491318 kubelet[2568]: I0711 00:21:55.491223 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b897360e-69c8-4b60-abf3-671418db329a-goldmane-key-pair\") pod \"goldmane-768f4c5c69-nxg64\" (UID: \"b897360e-69c8-4b60-abf3-671418db329a\") " pod="calico-system/goldmane-768f4c5c69-nxg64" Jul 11 00:21:55.491318 kubelet[2568]: I0711 00:21:55.491243 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1285ec7c-afc4-4f44-b914-280d299b3f6e-calico-apiserver-certs\") pod \"calico-apiserver-6fbf9d5d8f-67gld\" (UID: \"1285ec7c-afc4-4f44-b914-280d299b3f6e\") " pod="calico-apiserver/calico-apiserver-6fbf9d5d8f-67gld" Jul 11 00:21:55.491318 kubelet[2568]: I0711 00:21:55.491264 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps5f9\" (UniqueName: \"kubernetes.io/projected/1285ec7c-afc4-4f44-b914-280d299b3f6e-kube-api-access-ps5f9\") pod \"calico-apiserver-6fbf9d5d8f-67gld\" (UID: \"1285ec7c-afc4-4f44-b914-280d299b3f6e\") " pod="calico-apiserver/calico-apiserver-6fbf9d5d8f-67gld" Jul 11 00:21:55.513199 systemd[1]: Created slice kubepods-besteffort-podef0f4240_50c1_431a_b911_54802b65a3ca.slice - libcontainer container kubepods-besteffort-podef0f4240_50c1_431a_b911_54802b65a3ca.slice. 
Jul 11 00:21:55.530573 containerd[1471]: time="2025-07-11T00:21:55.527655888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-24rnq,Uid:ef0f4240-50c1-431a-b911-54802b65a3ca,Namespace:calico-system,Attempt:0,}" Jul 11 00:21:55.598521 containerd[1471]: time="2025-07-11T00:21:55.598309496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6994867c74-8bhtg,Uid:d89cb119-c161-4cbc-8fe8-fe4dbab872bf,Namespace:calico-system,Attempt:0,}" Jul 11 00:21:55.630762 containerd[1471]: time="2025-07-11T00:21:55.629864976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 11 00:21:55.658264 containerd[1471]: time="2025-07-11T00:21:55.658215035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7ddcd977db-n58hn,Uid:86e94e6b-c0ad-463c-abf5-b6899adb9e4c,Namespace:calico-system,Attempt:0,}" Jul 11 00:21:55.679558 containerd[1471]: time="2025-07-11T00:21:55.679497281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fbf9d5d8f-zclpr,Uid:11afdd52-9586-41ad-b277-069a0e6d90ba,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:21:55.685114 kubelet[2568]: E0711 00:21:55.685041 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:55.686029 containerd[1471]: time="2025-07-11T00:21:55.685971617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zhf8q,Uid:4fb87ad9-16fb-494a-87eb-605af4502d26,Namespace:kube-system,Attempt:0,}" Jul 11 00:21:55.727634 kubelet[2568]: E0711 00:21:55.726392 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:55.727863 containerd[1471]: time="2025-07-11T00:21:55.727199849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hsp4s,Uid:85a64a17-b3e6-422f-9756-cb2f80a1643b,Namespace:kube-system,Attempt:0,}" Jul 11 00:21:55.738510 containerd[1471]: time="2025-07-11T00:21:55.738440026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fbf9d5d8f-67gld,Uid:1285ec7c-afc4-4f44-b914-280d299b3f6e,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:21:55.748456 containerd[1471]: time="2025-07-11T00:21:55.747963113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-nxg64,Uid:b897360e-69c8-4b60-abf3-671418db329a,Namespace:calico-system,Attempt:0,}" Jul 11 00:21:55.935405 containerd[1471]: time="2025-07-11T00:21:55.935335893Z" level=error msg="Failed to destroy network for sandbox \"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:55.936153 containerd[1471]: time="2025-07-11T00:21:55.936110178Z" level=error msg="Failed to destroy network for sandbox \"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:55.939788 containerd[1471]: time="2025-07-11T00:21:55.939708893Z" level=error msg="encountered an error cleaning up failed sandbox 
\"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:55.940004 containerd[1471]: time="2025-07-11T00:21:55.939818348Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-24rnq,Uid:ef0f4240-50c1-431a-b911-54802b65a3ca,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:55.940048 containerd[1471]: time="2025-07-11T00:21:55.940002250Z" level=error msg="encountered an error cleaning up failed sandbox \"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:55.940258 containerd[1471]: time="2025-07-11T00:21:55.940056066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7ddcd977db-n58hn,Uid:86e94e6b-c0ad-463c-abf5-b6899adb9e4c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:55.941365 kubelet[2568]: E0711 00:21:55.941280 2568 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:55.942101 kubelet[2568]: E0711 00:21:55.941408 2568 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7ddcd977db-n58hn" Jul 11 00:21:55.942101 kubelet[2568]: E0711 00:21:55.941442 2568 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7ddcd977db-n58hn" Jul 11 00:21:55.942101 kubelet[2568]: E0711 00:21:55.941509 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-7ddcd977db-n58hn_calico-system(86e94e6b-c0ad-463c-abf5-b6899adb9e4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7ddcd977db-n58hn_calico-system(86e94e6b-c0ad-463c-abf5-b6899adb9e4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7ddcd977db-n58hn" podUID="86e94e6b-c0ad-463c-abf5-b6899adb9e4c" Jul 11 00:21:55.942273 containerd[1471]: time="2025-07-11T00:21:55.941925917Z" level=error msg="Failed to destroy network for sandbox \"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:55.942309 kubelet[2568]: E0711 00:21:55.941412 2568 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:55.942309 kubelet[2568]: E0711 00:21:55.941604 2568 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-24rnq" Jul 11 00:21:55.942309 kubelet[2568]: E0711 00:21:55.941636 2568 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-24rnq" Jul 11 00:21:55.942428 kubelet[2568]: E0711 00:21:55.941714 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-24rnq_calico-system(ef0f4240-50c1-431a-b911-54802b65a3ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-24rnq_calico-system(ef0f4240-50c1-431a-b911-54802b65a3ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-24rnq" podUID="ef0f4240-50c1-431a-b911-54802b65a3ca" Jul 11 00:21:55.942920 containerd[1471]: time="2025-07-11T00:21:55.942786731Z" level=error msg="encountered an error cleaning up failed sandbox \"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:55.942920 containerd[1471]: time="2025-07-11T00:21:55.942868032Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6994867c74-8bhtg,Uid:d89cb119-c161-4cbc-8fe8-fe4dbab872bf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:55.943201 kubelet[2568]: E0711 00:21:55.943142 2568 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:55.943201 kubelet[2568]: E0711 00:21:55.943189 2568 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6994867c74-8bhtg" Jul 11 00:21:55.943379 kubelet[2568]: E0711 00:21:55.943214 2568 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6994867c74-8bhtg" Jul 11 00:21:55.943379 kubelet[2568]: E0711 00:21:55.943269 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6994867c74-8bhtg_calico-system(d89cb119-c161-4cbc-8fe8-fe4dbab872bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6994867c74-8bhtg_calico-system(d89cb119-c161-4cbc-8fe8-fe4dbab872bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6994867c74-8bhtg" podUID="d89cb119-c161-4cbc-8fe8-fe4dbab872bf" Jul 11 00:21:56.072406 containerd[1471]: time="2025-07-11T00:21:56.072185164Z" level=error msg="Failed to destroy network for sandbox \"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.075287 containerd[1471]: time="2025-07-11T00:21:56.073197714Z" level=error msg="encountered an error cleaning up failed sandbox 
\"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.075287 containerd[1471]: time="2025-07-11T00:21:56.073431856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fbf9d5d8f-zclpr,Uid:11afdd52-9586-41ad-b277-069a0e6d90ba,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.075497 kubelet[2568]: E0711 00:21:56.073848 2568 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.075497 kubelet[2568]: E0711 00:21:56.073942 2568 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fbf9d5d8f-zclpr" Jul 11 00:21:56.075497 kubelet[2568]: E0711 00:21:56.074004 2568 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fbf9d5d8f-zclpr" Jul 11 00:21:56.075994 kubelet[2568]: E0711 00:21:56.074179 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fbf9d5d8f-zclpr_calico-apiserver(11afdd52-9586-41ad-b277-069a0e6d90ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fbf9d5d8f-zclpr_calico-apiserver(11afdd52-9586-41ad-b277-069a0e6d90ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fbf9d5d8f-zclpr" podUID="11afdd52-9586-41ad-b277-069a0e6d90ba" Jul 11 00:21:56.078177 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df-shm.mount: Deactivated successfully. 
Jul 11 00:21:56.087212 containerd[1471]: time="2025-07-11T00:21:56.087160234Z" level=error msg="Failed to destroy network for sandbox \"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.087639 containerd[1471]: time="2025-07-11T00:21:56.087590269Z" level=error msg="encountered an error cleaning up failed sandbox \"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.087699 containerd[1471]: time="2025-07-11T00:21:56.087650468Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zhf8q,Uid:4fb87ad9-16fb-494a-87eb-605af4502d26,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.088249 kubelet[2568]: E0711 00:21:56.087936 2568 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.088249 kubelet[2568]: E0711 00:21:56.088027 2568 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zhf8q" Jul 11 00:21:56.088249 kubelet[2568]: E0711 00:21:56.088056 2568 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zhf8q" Jul 11 00:21:56.090125 kubelet[2568]: E0711 00:21:56.088444 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-zhf8q_kube-system(4fb87ad9-16fb-494a-87eb-605af4502d26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-zhf8q_kube-system(4fb87ad9-16fb-494a-87eb-605af4502d26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zhf8q" 
podUID="4fb87ad9-16fb-494a-87eb-605af4502d26" Jul 11 00:21:56.091596 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1-shm.mount: Deactivated successfully. Jul 11 00:21:56.095639 containerd[1471]: time="2025-07-11T00:21:56.095575222Z" level=error msg="Failed to destroy network for sandbox \"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.097247 containerd[1471]: time="2025-07-11T00:21:56.096185874Z" level=error msg="encountered an error cleaning up failed sandbox \"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.097247 containerd[1471]: time="2025-07-11T00:21:56.096600839Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-nxg64,Uid:b897360e-69c8-4b60-abf3-671418db329a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.097408 kubelet[2568]: E0711 00:21:56.097365 2568 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.097560 kubelet[2568]: E0711 00:21:56.097531 2568 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-nxg64" Jul 11 00:21:56.097699 kubelet[2568]: E0711 00:21:56.097563 2568 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-nxg64" Jul 11 00:21:56.097781 kubelet[2568]: E0711 00:21:56.097750 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-nxg64_calico-system(b897360e-69c8-4b60-abf3-671418db329a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-nxg64_calico-system(b897360e-69c8-4b60-abf3-671418db329a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-nxg64" podUID="b897360e-69c8-4b60-abf3-671418db329a" Jul 11 00:21:56.099108 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733-shm.mount: Deactivated successfully. Jul 11 00:21:56.114687 containerd[1471]: time="2025-07-11T00:21:56.114571872Z" level=error msg="Failed to destroy network for sandbox \"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.115390 containerd[1471]: time="2025-07-11T00:21:56.115345433Z" level=error msg="encountered an error cleaning up failed sandbox \"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.115467 containerd[1471]: time="2025-07-11T00:21:56.115434819Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hsp4s,Uid:85a64a17-b3e6-422f-9756-cb2f80a1643b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.115786 kubelet[2568]: E0711 00:21:56.115736 2568 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.115864 kubelet[2568]: E0711 00:21:56.115809 2568 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hsp4s" Jul 11 00:21:56.115864 kubelet[2568]: E0711 00:21:56.115834 2568 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-hsp4s" Jul 11 00:21:56.115931 kubelet[2568]: E0711 00:21:56.115900 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-674b8bbfcf-hsp4s_kube-system(85a64a17-b3e6-422f-9756-cb2f80a1643b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hsp4s_kube-system(85a64a17-b3e6-422f-9756-cb2f80a1643b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hsp4s" podUID="85a64a17-b3e6-422f-9756-cb2f80a1643b" Jul 11 00:21:56.121886 containerd[1471]: time="2025-07-11T00:21:56.121783585Z" level=error msg="Failed to destroy network for sandbox \"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.122441 containerd[1471]: time="2025-07-11T00:21:56.122387763Z" level=error msg="encountered an error cleaning up failed sandbox \"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.122522 containerd[1471]: time="2025-07-11T00:21:56.122461658Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fbf9d5d8f-67gld,Uid:1285ec7c-afc4-4f44-b914-280d299b3f6e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.122745 kubelet[2568]: E0711 00:21:56.122705 2568 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.122801 kubelet[2568]: E0711 00:21:56.122763 2568 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fbf9d5d8f-67gld" Jul 11 00:21:56.122801 kubelet[2568]: E0711 00:21:56.122789 2568 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6fbf9d5d8f-67gld" Jul 11 00:21:56.122882 kubelet[2568]: E0711 00:21:56.122844 2568 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6fbf9d5d8f-67gld_calico-apiserver(1285ec7c-afc4-4f44-b914-280d299b3f6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6fbf9d5d8f-67gld_calico-apiserver(1285ec7c-afc4-4f44-b914-280d299b3f6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fbf9d5d8f-67gld" podUID="1285ec7c-afc4-4f44-b914-280d299b3f6e" Jul 11 00:21:56.501562 kubelet[2568]: E0711 00:21:56.501508 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:21:56.631303 kubelet[2568]: I0711 00:21:56.631258 2568 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" Jul 11 00:21:56.632179 containerd[1471]: time="2025-07-11T00:21:56.632108576Z" level=info msg="StopPodSandbox for \"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\"" Jul 11 00:21:56.633806 kubelet[2568]: I0711 00:21:56.633774 2568 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" Jul 11 00:21:56.634037 containerd[1471]: time="2025-07-11T00:21:56.633996609Z" level=info msg="Ensure that sandbox 55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1 in task-service has been cleanup successfully" Jul 11 00:21:56.643680 containerd[1471]: time="2025-07-11T00:21:56.643618301Z" level=info msg="StopPodSandbox for \"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\"" Jul 11 00:21:56.643905 containerd[1471]: time="2025-07-11T00:21:56.643877621Z" level=info msg="Ensure that sandbox 0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9 in task-service has been cleanup successfully" Jul 11 00:21:56.643934 kubelet[2568]: I0711 00:21:56.643880 2568 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" Jul 11 00:21:56.644509 containerd[1471]: time="2025-07-11T00:21:56.644476549Z" level=info msg="StopPodSandbox for \"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\"" Jul 11 00:21:56.644732 containerd[1471]: time="2025-07-11T00:21:56.644682194Z" level=info msg="Ensure that sandbox 00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d in task-service has been cleanup successfully" Jul 11 00:21:56.645863 kubelet[2568]: I0711 00:21:56.645822 2568 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" Jul 11 00:21:56.647138 containerd[1471]: time="2025-07-11T00:21:56.647113084Z" level=info msg="StopPodSandbox for \"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\"" Jul 11 00:21:56.647265 containerd[1471]: time="2025-07-11T00:21:56.647245494Z" level=info msg="Ensure that sandbox 2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df in task-service has been cleanup successfully" Jul 11 00:21:56.655861 kubelet[2568]: 
I0711 00:21:56.655255 2568 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" Jul 11 00:21:56.656279 containerd[1471]: time="2025-07-11T00:21:56.656247328Z" level=info msg="StopPodSandbox for \"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\"" Jul 11 00:21:56.657533 containerd[1471]: time="2025-07-11T00:21:56.656620442Z" level=info msg="Ensure that sandbox 9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733 in task-service has been cleanup successfully" Jul 11 00:21:56.658056 kubelet[2568]: I0711 00:21:56.658017 2568 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" Jul 11 00:21:56.658648 containerd[1471]: time="2025-07-11T00:21:56.658621797Z" level=info msg="StopPodSandbox for \"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\"" Jul 11 00:21:56.659430 containerd[1471]: time="2025-07-11T00:21:56.659410539Z" level=info msg="Ensure that sandbox 8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba in task-service has been cleanup successfully" Jul 11 00:21:56.661650 kubelet[2568]: I0711 00:21:56.661620 2568 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" Jul 11 00:21:56.665192 containerd[1471]: time="2025-07-11T00:21:56.665145968Z" level=info msg="StopPodSandbox for \"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\"" Jul 11 00:21:56.665521 containerd[1471]: time="2025-07-11T00:21:56.665500555Z" level=info msg="Ensure that sandbox 93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c in task-service has been cleanup successfully" Jul 11 00:21:56.666646 kubelet[2568]: I0711 00:21:56.666615 2568 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" Jul 11 00:21:56.669322 containerd[1471]: time="2025-07-11T00:21:56.669286200Z" level=info msg="StopPodSandbox for \"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\"" Jul 11 00:21:56.669513 containerd[1471]: time="2025-07-11T00:21:56.669488338Z" level=info msg="Ensure that sandbox 6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680 in task-service has been cleanup successfully" Jul 11 00:21:56.695316 containerd[1471]: time="2025-07-11T00:21:56.695254920Z" level=error msg="StopPodSandbox for \"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\" failed" error="failed to destroy network for sandbox \"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.695877 kubelet[2568]: E0711 00:21:56.695811 2568 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" Jul 11 00:21:56.695975 kubelet[2568]: E0711 00:21:56.695904 2568 
kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1"} Jul 11 00:21:56.696056 kubelet[2568]: E0711 00:21:56.695983 2568 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4fb87ad9-16fb-494a-87eb-605af4502d26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:21:56.696056 kubelet[2568]: E0711 00:21:56.696027 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4fb87ad9-16fb-494a-87eb-605af4502d26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zhf8q" podUID="4fb87ad9-16fb-494a-87eb-605af4502d26" Jul 11 00:21:56.704138 containerd[1471]: time="2025-07-11T00:21:56.702898793Z" level=error msg="StopPodSandbox for \"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\" failed" error="failed to destroy network for sandbox \"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.704328 kubelet[2568]: E0711 00:21:56.703369 2568 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" Jul 11 00:21:56.704328 kubelet[2568]: E0711 00:21:56.703426 2568 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df"} Jul 11 00:21:56.704328 kubelet[2568]: E0711 00:21:56.703468 2568 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"11afdd52-9586-41ad-b277-069a0e6d90ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:21:56.704328 kubelet[2568]: E0711 00:21:56.703494 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"11afdd52-9586-41ad-b277-069a0e6d90ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fbf9d5d8f-zclpr" podUID="11afdd52-9586-41ad-b277-069a0e6d90ba" Jul 11 00:21:56.731355 containerd[1471]: time="2025-07-11T00:21:56.731273805Z" level=error msg="StopPodSandbox for \"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\" failed" error="failed to destroy network for sandbox \"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.732123 kubelet[2568]: E0711 00:21:56.731577 2568 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" Jul 11 00:21:56.732123 kubelet[2568]: E0711 00:21:56.731633 2568 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733"} Jul 11 00:21:56.732123 kubelet[2568]: E0711 00:21:56.731676 2568 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b897360e-69c8-4b60-abf3-671418db329a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:21:56.732123 kubelet[2568]: E0711 00:21:56.731710 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b897360e-69c8-4b60-abf3-671418db329a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-nxg64" podUID="b897360e-69c8-4b60-abf3-671418db329a" Jul 11 00:21:56.734017 containerd[1471]: time="2025-07-11T00:21:56.733934288Z" level=error msg="StopPodSandbox for \"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\" failed" error="failed to destroy network for sandbox \"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.734608 kubelet[2568]: E0711 00:21:56.734275 2568 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" Jul 11 00:21:56.734608 kubelet[2568]: E0711 00:21:56.734341 2568 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680"} Jul 11 00:21:56.734608 kubelet[2568]: E0711 00:21:56.734379 2568 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"86e94e6b-c0ad-463c-abf5-b6899adb9e4c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:21:56.734608 kubelet[2568]: E0711 00:21:56.734410 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"86e94e6b-c0ad-463c-abf5-b6899adb9e4c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7ddcd977db-n58hn" podUID="86e94e6b-c0ad-463c-abf5-b6899adb9e4c" Jul 11 00:21:56.737401 containerd[1471]: time="2025-07-11T00:21:56.737342811Z" level=error msg="StopPodSandbox for \"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\" failed" error="failed to destroy network for sandbox \"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.737601 containerd[1471]: time="2025-07-11T00:21:56.737508427Z" level=error msg="StopPodSandbox for \"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\" failed" error="failed to destroy network for sandbox \"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.737734 kubelet[2568]: E0711 00:21:56.737688 2568 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" Jul 11 00:21:56.737807 kubelet[2568]: E0711 00:21:56.737761 2568 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d"} Jul 11 00:21:56.737833 kubelet[2568]: E0711 00:21:56.737811 2568 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"ef0f4240-50c1-431a-b911-54802b65a3ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:21:56.737880 kubelet[2568]: E0711 00:21:56.737852 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ef0f4240-50c1-431a-b911-54802b65a3ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-24rnq" podUID="ef0f4240-50c1-431a-b911-54802b65a3ca" Jul 11 00:21:56.739018 kubelet[2568]: E0711 00:21:56.738816 2568 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" Jul 11 00:21:56.739018 kubelet[2568]: E0711 00:21:56.738849 2568 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9"} Jul 11 00:21:56.739018 kubelet[2568]: E0711 00:21:56.738877 2568 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d89cb119-c161-4cbc-8fe8-fe4dbab872bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:21:56.739018 kubelet[2568]: E0711 00:21:56.738921 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d89cb119-c161-4cbc-8fe8-fe4dbab872bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6994867c74-8bhtg" podUID="d89cb119-c161-4cbc-8fe8-fe4dbab872bf" Jul 11 00:21:56.745133 containerd[1471]: time="2025-07-11T00:21:56.745049517Z" level=error msg="StopPodSandbox for \"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\" failed" error="failed to destroy network for sandbox \"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.745413 kubelet[2568]: E0711 00:21:56.745373 2568 
log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" Jul 11 00:21:56.745473 kubelet[2568]: E0711 00:21:56.745438 2568 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba"} Jul 11 00:21:56.745501 kubelet[2568]: E0711 00:21:56.745473 2568 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1285ec7c-afc4-4f44-b914-280d299b3f6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:21:56.745565 kubelet[2568]: E0711 00:21:56.745496 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1285ec7c-afc4-4f44-b914-280d299b3f6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fbf9d5d8f-67gld" podUID="1285ec7c-afc4-4f44-b914-280d299b3f6e" Jul 11 00:21:56.795257 containerd[1471]: time="2025-07-11T00:21:56.795054768Z" level=error msg="StopPodSandbox for \"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\" failed" error="failed to destroy network for sandbox \"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:21:56.797378 kubelet[2568]: E0711 00:21:56.795345 2568 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" Jul 11 00:21:56.797472 kubelet[2568]: E0711 00:21:56.797399 2568 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c"} Jul 11 00:21:56.797472 kubelet[2568]: E0711 00:21:56.797446 2568 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"85a64a17-b3e6-422f-9756-cb2f80a1643b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:21:56.797600 kubelet[2568]: E0711 00:21:56.797480 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"85a64a17-b3e6-422f-9756-cb2f80a1643b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hsp4s" podUID="85a64a17-b3e6-422f-9756-cb2f80a1643b" Jul 11 00:21:56.955285 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba-shm.mount: Deactivated successfully. Jul 11 00:21:56.955448 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c-shm.mount: Deactivated successfully. Jul 11 00:22:05.107921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1064728572.mount: Deactivated successfully. Jul 11 00:22:07.502675 containerd[1471]: time="2025-07-11T00:22:07.502207097Z" level=info msg="StopPodSandbox for \"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\"" Jul 11 00:22:07.502675 containerd[1471]: time="2025-07-11T00:22:07.502209862Z" level=info msg="StopPodSandbox for \"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\"" Jul 11 00:22:07.582351 containerd[1471]: time="2025-07-11T00:22:07.582258216Z" level=error msg="StopPodSandbox for \"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\" failed" error="failed to destroy network for sandbox \"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:07.582679 kubelet[2568]: E0711 00:22:07.582592 2568 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" Jul 11 00:22:07.583190 kubelet[2568]: E0711 00:22:07.582689 2568 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9"} Jul 11 00:22:07.583190 kubelet[2568]: E0711 00:22:07.582743 2568 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d89cb119-c161-4cbc-8fe8-fe4dbab872bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:22:07.583190 kubelet[2568]: E0711 00:22:07.582778 2568 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d89cb119-c161-4cbc-8fe8-fe4dbab872bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6994867c74-8bhtg" podUID="d89cb119-c161-4cbc-8fe8-fe4dbab872bf" Jul 11 00:22:07.584621 containerd[1471]: time="2025-07-11T00:22:07.584517509Z" level=error msg="StopPodSandbox for \"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\" failed" error="failed to destroy network for sandbox \"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:07.584884 kubelet[2568]: E0711 00:22:07.584830 2568 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" Jul 11 00:22:07.584940 kubelet[2568]: E0711 00:22:07.584902 2568 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733"} Jul 11 00:22:07.584967 kubelet[2568]: E0711 00:22:07.584951 2568 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b897360e-69c8-4b60-abf3-671418db329a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:22:07.585023 kubelet[2568]: E0711 00:22:07.584984 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b897360e-69c8-4b60-abf3-671418db329a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-nxg64" podUID="b897360e-69c8-4b60-abf3-671418db329a" Jul 11 00:22:08.502499 containerd[1471]: time="2025-07-11T00:22:08.502444000Z" level=info msg="StopPodSandbox for \"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\"" Jul 11 00:22:08.503977 containerd[1471]: time="2025-07-11T00:22:08.503308784Z" level=info msg="StopPodSandbox for \"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\"" Jul 11 00:22:08.503977 containerd[1471]: time="2025-07-11T00:22:08.503659647Z" level=info msg="StopPodSandbox for 
\"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\"" Jul 11 00:22:08.545442 containerd[1471]: time="2025-07-11T00:22:08.545377769Z" level=error msg="StopPodSandbox for \"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\" failed" error="failed to destroy network for sandbox \"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.546245 kubelet[2568]: E0711 00:22:08.546168 2568 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" Jul 11 00:22:08.546336 kubelet[2568]: E0711 00:22:08.546263 2568 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba"} Jul 11 00:22:08.546336 kubelet[2568]: E0711 00:22:08.546309 2568 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1285ec7c-afc4-4f44-b914-280d299b3f6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:22:08.546472 kubelet[2568]: E0711 00:22:08.546345 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1285ec7c-afc4-4f44-b914-280d299b3f6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fbf9d5d8f-67gld" podUID="1285ec7c-afc4-4f44-b914-280d299b3f6e" Jul 11 00:22:08.548152 containerd[1471]: time="2025-07-11T00:22:08.548062226Z" level=error msg="StopPodSandbox for \"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\" failed" error="failed to destroy network for sandbox \"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.548558 kubelet[2568]: E0711 00:22:08.548522 2568 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" Jul 11 
00:22:08.548640 kubelet[2568]: E0711 00:22:08.548563 2568 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df"} Jul 11 00:22:08.548640 kubelet[2568]: E0711 00:22:08.548596 2568 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"11afdd52-9586-41ad-b277-069a0e6d90ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:22:08.548640 kubelet[2568]: E0711 00:22:08.548632 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"11afdd52-9586-41ad-b277-069a0e6d90ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6fbf9d5d8f-zclpr" podUID="11afdd52-9586-41ad-b277-069a0e6d90ba" Jul 11 00:22:08.552512 containerd[1471]: time="2025-07-11T00:22:08.552442695Z" level=error msg="StopPodSandbox for \"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\" failed" error="failed to destroy network for sandbox \"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:08.552705 kubelet[2568]: E0711 00:22:08.552648 2568 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" Jul 11 00:22:08.552758 kubelet[2568]: E0711 00:22:08.552718 2568 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680"} Jul 11 00:22:08.552787 kubelet[2568]: E0711 00:22:08.552756 2568 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"86e94e6b-c0ad-463c-abf5-b6899adb9e4c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:22:08.552853 kubelet[2568]: E0711 00:22:08.552782 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"86e94e6b-c0ad-463c-abf5-b6899adb9e4c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7ddcd977db-n58hn" podUID="86e94e6b-c0ad-463c-abf5-b6899adb9e4c" Jul 11 00:22:08.590496 systemd[1]: Started sshd@9-10.0.0.89:22-10.0.0.1:54174.service - OpenSSH per-connection server daemon (10.0.0.1:54174). Jul 11 00:22:08.792146 sshd[3858]: Accepted publickey for core from 10.0.0.1 port 54174 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:22:08.794362 sshd[3858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:22:08.805706 systemd-logind[1453]: New session 10 of user core. Jul 11 00:22:08.816384 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 11 00:22:09.501741 containerd[1471]: time="2025-07-11T00:22:09.501689070Z" level=info msg="StopPodSandbox for \"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\"" Jul 11 00:22:09.501924 containerd[1471]: time="2025-07-11T00:22:09.501689060Z" level=info msg="StopPodSandbox for \"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\"" Jul 11 00:22:09.530944 containerd[1471]: time="2025-07-11T00:22:09.530879690Z" level=error msg="StopPodSandbox for \"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\" failed" error="failed to destroy network for sandbox \"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:09.531591 kubelet[2568]: E0711 00:22:09.531168 2568 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" Jul 11 00:22:09.531591 kubelet[2568]: E0711 00:22:09.531244 2568 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d"} Jul 11 00:22:09.531591 kubelet[2568]: E0711 00:22:09.531290 2568 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ef0f4240-50c1-431a-b911-54802b65a3ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:22:09.531591 kubelet[2568]: E0711 00:22:09.531320 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ef0f4240-50c1-431a-b911-54802b65a3ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-24rnq" podUID="ef0f4240-50c1-431a-b911-54802b65a3ca" Jul 11 00:22:09.532965 containerd[1471]: time="2025-07-11T00:22:09.532886987Z" level=error msg="StopPodSandbox for \"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\" failed" error="failed to destroy network for sandbox \"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:09.533196 kubelet[2568]: E0711 00:22:09.533151 2568 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" Jul 11 00:22:09.533280 kubelet[2568]: E0711 00:22:09.533202 2568 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c"} Jul 11 00:22:09.533280 kubelet[2568]: E0711 00:22:09.533233 2568 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"85a64a17-b3e6-422f-9756-cb2f80a1643b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:22:09.533385 kubelet[2568]: E0711 00:22:09.533259 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"85a64a17-b3e6-422f-9756-cb2f80a1643b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-hsp4s" podUID="85a64a17-b3e6-422f-9756-cb2f80a1643b" Jul 11 00:22:09.609448 sshd[3858]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:09.614817 systemd[1]: sshd@9-10.0.0.89:22-10.0.0.1:54174.service: Deactivated successfully. Jul 11 00:22:09.617285 systemd[1]: session-10.scope: Deactivated successfully. Jul 11 00:22:09.618796 systemd-logind[1453]: Session 10 logged out. Waiting for processes to exit. Jul 11 00:22:09.620262 systemd-logind[1453]: Removed session 10. 
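Annotation: every StopPodSandbox failure in this stretch bottoms out in the same stat call. The Calico CNI plugin's DEL handler reads /var/lib/calico/nodename, a host file that the calico/node container writes on startup; until calico-node reaches Running (it only starts at 00:22:11 below), every sandbox teardown fails this way and kubelet keeps retrying. A minimal Go sketch of the check the error message itself describes; the path comes from the log, the rest is illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // /var/lib/calico/nodename is written by the calico/node container on
    // startup and read by the CNI plugin during ADD/DEL; its absence is what
    // produces the "stat /var/lib/calico/nodename" failures logged above.
    const nodenameFile = "/var/lib/calico/nodename"

    func main() {
    	data, err := os.ReadFile(nodenameFile)
    	if os.IsNotExist(err) {
    		fmt.Println("calico/node has not written its nodename yet; CNI DEL will fail")
    		return
    	}
    	if err != nil {
    		fmt.Println("unexpected error:", err)
    		return
    	}
    	fmt.Println("CNI will address this node as:", strings.TrimSpace(string(data)))
    }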
Jul 11 00:22:10.500999 kubelet[2568]: E0711 00:22:10.500921 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:10.501730 containerd[1471]: time="2025-07-11T00:22:10.501622822Z" level=info msg="StopPodSandbox for \"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\"" Jul 11 00:22:10.533142 containerd[1471]: time="2025-07-11T00:22:10.532898248Z" level=error msg="StopPodSandbox for \"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\" failed" error="failed to destroy network for sandbox \"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:22:10.533673 kubelet[2568]: E0711 00:22:10.533285 2568 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" Jul 11 00:22:10.533673 kubelet[2568]: E0711 00:22:10.533366 2568 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1"} Jul 11 00:22:10.533673 kubelet[2568]: E0711 00:22:10.533412 2568 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4fb87ad9-16fb-494a-87eb-605af4502d26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:22:10.533673 kubelet[2568]: E0711 00:22:10.533454 2568 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4fb87ad9-16fb-494a-87eb-605af4502d26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zhf8q" podUID="4fb87ad9-16fb-494a-87eb-605af4502d26" Jul 11 00:22:10.586812 containerd[1471]: time="2025-07-11T00:22:10.586539468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:10.731499 containerd[1471]: time="2025-07-11T00:22:10.731391043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 11 00:22:10.781295 containerd[1471]: time="2025-07-11T00:22:10.781106759Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:10.821993 
containerd[1471]: time="2025-07-11T00:22:10.821268897Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 15.191344204s" Jul 11 00:22:10.822250 containerd[1471]: time="2025-07-11T00:22:10.822030398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 11 00:22:10.842054 containerd[1471]: time="2025-07-11T00:22:10.841978639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:10.889554 containerd[1471]: time="2025-07-11T00:22:10.889477865Z" level=info msg="CreateContainer within sandbox \"c7f18d106c974de5a33e481be0fce18a32a4be1279dc514c808caaeb4e1a4f2e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 11 00:22:11.618770 containerd[1471]: time="2025-07-11T00:22:11.618667780Z" level=info msg="CreateContainer within sandbox \"c7f18d106c974de5a33e481be0fce18a32a4be1279dc514c808caaeb4e1a4f2e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5cb4a63d0d270453e39cbd533bc06d97fef432363429ce02095cfbca74fd9aea\"" Jul 11 00:22:11.620001 containerd[1471]: time="2025-07-11T00:22:11.619960502Z" level=info msg="StartContainer for \"5cb4a63d0d270453e39cbd533bc06d97fef432363429ce02095cfbca74fd9aea\"" Jul 11 00:22:11.728729 systemd[1]: Started cri-containerd-5cb4a63d0d270453e39cbd533bc06d97fef432363429ce02095cfbca74fd9aea.scope - libcontainer container 5cb4a63d0d270453e39cbd533bc06d97fef432363429ce02095cfbca74fd9aea. Jul 11 00:22:11.864919 containerd[1471]: time="2025-07-11T00:22:11.864848293Z" level=info msg="StartContainer for \"5cb4a63d0d270453e39cbd533bc06d97fef432363429ce02095cfbca74fd9aea\" returns successfully" Jul 11 00:22:11.968239 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 11 00:22:11.974823 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 11 00:22:12.500471 containerd[1471]: time="2025-07-11T00:22:12.499962224Z" level=info msg="StopPodSandbox for \"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\"" Jul 11 00:22:13.464136 kubelet[2568]: I0711 00:22:13.462429 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rjx2b" podStartSLOduration=4.897031608 podStartE2EDuration="47.462398289s" podCreationTimestamp="2025-07-11 00:21:26 +0000 UTC" firstStartedPulling="2025-07-11 00:21:28.258035454 +0000 UTC m=+53.898245992" lastFinishedPulling="2025-07-11 00:22:10.823402145 +0000 UTC m=+96.463612673" observedRunningTime="2025-07-11 00:22:12.981984636 +0000 UTC m=+98.622195184" watchObservedRunningTime="2025-07-11 00:22:13.462398289 +0000 UTC m=+99.102608818" Jul 11 00:22:14.621180 systemd[1]: Started sshd@10-10.0.0.89:22-10.0.0.1:54182.service - OpenSSH per-connection server daemon (10.0.0.1:54182). 
Jul 11 00:22:14.740150 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 54182 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:22:14.778478 sshd[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:22:14.784548 systemd-logind[1453]: New session 11 of user core. Jul 11 00:22:14.794351 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 11 00:22:15.140733 sshd[4070]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:15.145672 systemd-logind[1453]: Session 11 logged out. Waiting for processes to exit. Jul 11 00:22:15.151228 systemd[1]: sshd@10-10.0.0.89:22-10.0.0.1:54182.service: Deactivated successfully. Jul 11 00:22:15.154368 systemd[1]: session-11.scope: Deactivated successfully. Jul 11 00:22:15.156852 systemd-logind[1453]: Removed session 11. Jul 11 00:22:15.378177 kernel: bpftool[4217]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 11 00:22:15.740390 systemd-networkd[1394]: vxlan.calico: Link UP Jul 11 00:22:15.740401 systemd-networkd[1394]: vxlan.calico: Gained carrier Jul 11 00:22:15.836761 containerd[1471]: 2025-07-11 00:22:13.462 [INFO][3998] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" Jul 11 00:22:15.836761 containerd[1471]: 2025-07-11 00:22:13.463 [INFO][3998] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" iface="eth0" netns="/var/run/netns/cni-0105a7cc-cdab-951c-c369-def589d4d5e7" Jul 11 00:22:15.836761 containerd[1471]: 2025-07-11 00:22:13.463 [INFO][3998] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" iface="eth0" netns="/var/run/netns/cni-0105a7cc-cdab-951c-c369-def589d4d5e7" Jul 11 00:22:15.836761 containerd[1471]: 2025-07-11 00:22:13.464 [INFO][3998] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" iface="eth0" netns="/var/run/netns/cni-0105a7cc-cdab-951c-c369-def589d4d5e7" Jul 11 00:22:15.836761 containerd[1471]: 2025-07-11 00:22:13.464 [INFO][3998] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" Jul 11 00:22:15.836761 containerd[1471]: 2025-07-11 00:22:13.464 [INFO][3998] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" Jul 11 00:22:15.836761 containerd[1471]: 2025-07-11 00:22:15.671 [INFO][4030] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" HandleID="k8s-pod-network.0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" Workload="localhost-k8s-whisker--6994867c74--8bhtg-eth0" Jul 11 00:22:15.836761 containerd[1471]: 2025-07-11 00:22:15.712 [INFO][4030] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:15.836761 containerd[1471]: 2025-07-11 00:22:15.718 [INFO][4030] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:15.836761 containerd[1471]: 2025-07-11 00:22:15.811 [WARNING][4030] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" HandleID="k8s-pod-network.0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" Workload="localhost-k8s-whisker--6994867c74--8bhtg-eth0" Jul 11 00:22:15.836761 containerd[1471]: 2025-07-11 00:22:15.811 [INFO][4030] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" HandleID="k8s-pod-network.0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" Workload="localhost-k8s-whisker--6994867c74--8bhtg-eth0" Jul 11 00:22:15.836761 containerd[1471]: 2025-07-11 00:22:15.830 [INFO][4030] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:15.836761 containerd[1471]: 2025-07-11 00:22:15.833 [INFO][3998] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" Jul 11 00:22:15.838091 containerd[1471]: time="2025-07-11T00:22:15.836958196Z" level=info msg="TearDown network for sandbox \"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\" successfully" Jul 11 00:22:15.838091 containerd[1471]: time="2025-07-11T00:22:15.836992462Z" level=info msg="StopPodSandbox for \"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\" returns successfully" Jul 11 00:22:15.841751 systemd[1]: run-netns-cni\x2d0105a7cc\x2dcdab\x2d951c\x2dc369\x2ddef589d4d5e7.mount: Deactivated successfully. Jul 11 00:22:15.943842 kubelet[2568]: I0711 00:22:15.943736 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d89cb119-c161-4cbc-8fe8-fe4dbab872bf-whisker-ca-bundle\") pod \"d89cb119-c161-4cbc-8fe8-fe4dbab872bf\" (UID: \"d89cb119-c161-4cbc-8fe8-fe4dbab872bf\") " Jul 11 00:22:15.943842 kubelet[2568]: I0711 00:22:15.943803 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t2ld\" (UniqueName: \"kubernetes.io/projected/d89cb119-c161-4cbc-8fe8-fe4dbab872bf-kube-api-access-9t2ld\") pod \"d89cb119-c161-4cbc-8fe8-fe4dbab872bf\" (UID: \"d89cb119-c161-4cbc-8fe8-fe4dbab872bf\") " Jul 11 00:22:15.943842 kubelet[2568]: I0711 00:22:15.943854 2568 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d89cb119-c161-4cbc-8fe8-fe4dbab872bf-whisker-backend-key-pair\") pod \"d89cb119-c161-4cbc-8fe8-fe4dbab872bf\" (UID: \"d89cb119-c161-4cbc-8fe8-fe4dbab872bf\") " Jul 11 00:22:15.944949 kubelet[2568]: I0711 00:22:15.944461 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d89cb119-c161-4cbc-8fe8-fe4dbab872bf-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d89cb119-c161-4cbc-8fe8-fe4dbab872bf" (UID: "d89cb119-c161-4cbc-8fe8-fe4dbab872bf"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 11 00:22:15.950336 kubelet[2568]: I0711 00:22:15.950261 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d89cb119-c161-4cbc-8fe8-fe4dbab872bf-kube-api-access-9t2ld" (OuterVolumeSpecName: "kube-api-access-9t2ld") pod "d89cb119-c161-4cbc-8fe8-fe4dbab872bf" (UID: "d89cb119-c161-4cbc-8fe8-fe4dbab872bf"). InnerVolumeSpecName "kube-api-access-9t2ld". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 00:22:15.950629 kubelet[2568]: I0711 00:22:15.950562 2568 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d89cb119-c161-4cbc-8fe8-fe4dbab872bf-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d89cb119-c161-4cbc-8fe8-fe4dbab872bf" (UID: "d89cb119-c161-4cbc-8fe8-fe4dbab872bf"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 11 00:22:15.951705 systemd[1]: var-lib-kubelet-pods-d89cb119\x2dc161\x2d4cbc\x2d8fe8\x2dfe4dbab872bf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9t2ld.mount: Deactivated successfully. Jul 11 00:22:15.951853 systemd[1]: var-lib-kubelet-pods-d89cb119\x2dc161\x2d4cbc\x2d8fe8\x2dfe4dbab872bf-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 11 00:22:16.044581 kubelet[2568]: I0711 00:22:16.044450 2568 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d89cb119-c161-4cbc-8fe8-fe4dbab872bf-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 11 00:22:16.044581 kubelet[2568]: I0711 00:22:16.044492 2568 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d89cb119-c161-4cbc-8fe8-fe4dbab872bf-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 11 00:22:16.044581 kubelet[2568]: I0711 00:22:16.044506 2568 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9t2ld\" (UniqueName: \"kubernetes.io/projected/d89cb119-c161-4cbc-8fe8-fe4dbab872bf-kube-api-access-9t2ld\") on node \"localhost\" DevicePath \"\"" Jul 11 00:22:16.510753 systemd[1]: Removed slice kubepods-besteffort-podd89cb119_c161_4cbc_8fe8_fe4dbab872bf.slice - libcontainer container kubepods-besteffort-podd89cb119_c161_4cbc_8fe8_fe4dbab872bf.slice. Jul 11 00:22:17.617399 systemd-networkd[1394]: vxlan.calico: Gained IPv6LL Jul 11 00:22:18.503529 kubelet[2568]: I0711 00:22:18.503468 2568 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d89cb119-c161-4cbc-8fe8-fe4dbab872bf" path="/var/lib/kubelet/pods/d89cb119-c161-4cbc-8fe8-fe4dbab872bf/volumes" Jul 11 00:22:19.501639 containerd[1471]: time="2025-07-11T00:22:19.501250740Z" level=info msg="StopPodSandbox for \"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\"" Jul 11 00:22:19.777641 systemd[1]: Created slice kubepods-besteffort-poddd2a0a29_d3d9_441d_9a96_7763124a810f.slice - libcontainer container kubepods-besteffort-poddd2a0a29_d3d9_441d_9a96_7763124a810f.slice. 
Jul 11 00:22:19.868363 kubelet[2568]: I0711 00:22:19.868256 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd2a0a29-d3d9-441d-9a96-7763124a810f-whisker-ca-bundle\") pod \"whisker-6d7b457878-lc4l4\" (UID: \"dd2a0a29-d3d9-441d-9a96-7763124a810f\") " pod="calico-system/whisker-6d7b457878-lc4l4" Jul 11 00:22:19.868363 kubelet[2568]: I0711 00:22:19.868319 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znkcn\" (UniqueName: \"kubernetes.io/projected/dd2a0a29-d3d9-441d-9a96-7763124a810f-kube-api-access-znkcn\") pod \"whisker-6d7b457878-lc4l4\" (UID: \"dd2a0a29-d3d9-441d-9a96-7763124a810f\") " pod="calico-system/whisker-6d7b457878-lc4l4" Jul 11 00:22:19.868363 kubelet[2568]: I0711 00:22:19.868347 2568 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dd2a0a29-d3d9-441d-9a96-7763124a810f-whisker-backend-key-pair\") pod \"whisker-6d7b457878-lc4l4\" (UID: \"dd2a0a29-d3d9-441d-9a96-7763124a810f\") " pod="calico-system/whisker-6d7b457878-lc4l4" Jul 11 00:22:19.935952 containerd[1471]: 2025-07-11 00:22:19.755 [INFO][4307] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" Jul 11 00:22:19.935952 containerd[1471]: 2025-07-11 00:22:19.756 [INFO][4307] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" iface="eth0" netns="/var/run/netns/cni-bf63d112-3f28-f9dd-95ed-fe1f07381f08" Jul 11 00:22:19.935952 containerd[1471]: 2025-07-11 00:22:19.759 [INFO][4307] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" iface="eth0" netns="/var/run/netns/cni-bf63d112-3f28-f9dd-95ed-fe1f07381f08" Jul 11 00:22:19.935952 containerd[1471]: 2025-07-11 00:22:19.759 [INFO][4307] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" iface="eth0" netns="/var/run/netns/cni-bf63d112-3f28-f9dd-95ed-fe1f07381f08" Jul 11 00:22:19.935952 containerd[1471]: 2025-07-11 00:22:19.759 [INFO][4307] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" Jul 11 00:22:19.935952 containerd[1471]: 2025-07-11 00:22:19.759 [INFO][4307] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" Jul 11 00:22:19.935952 containerd[1471]: 2025-07-11 00:22:19.802 [INFO][4316] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" HandleID="k8s-pod-network.9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" Workload="localhost-k8s-goldmane--768f4c5c69--nxg64-eth0" Jul 11 00:22:19.935952 containerd[1471]: 2025-07-11 00:22:19.803 [INFO][4316] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:19.935952 containerd[1471]: 2025-07-11 00:22:19.803 [INFO][4316] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:22:19.935952 containerd[1471]: 2025-07-11 00:22:19.813 [WARNING][4316] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" HandleID="k8s-pod-network.9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" Workload="localhost-k8s-goldmane--768f4c5c69--nxg64-eth0" Jul 11 00:22:19.935952 containerd[1471]: 2025-07-11 00:22:19.813 [INFO][4316] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" HandleID="k8s-pod-network.9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" Workload="localhost-k8s-goldmane--768f4c5c69--nxg64-eth0" Jul 11 00:22:19.935952 containerd[1471]: 2025-07-11 00:22:19.926 [INFO][4316] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:19.935952 containerd[1471]: 2025-07-11 00:22:19.931 [INFO][4307] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" Jul 11 00:22:19.936920 containerd[1471]: time="2025-07-11T00:22:19.936386325Z" level=info msg="TearDown network for sandbox \"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\" successfully" Jul 11 00:22:19.936920 containerd[1471]: time="2025-07-11T00:22:19.936426704Z" level=info msg="StopPodSandbox for \"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\" returns successfully" Jul 11 00:22:19.937572 containerd[1471]: time="2025-07-11T00:22:19.937541380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-nxg64,Uid:b897360e-69c8-4b60-abf3-671418db329a,Namespace:calico-system,Attempt:1,}" Jul 11 00:22:19.940007 systemd[1]: run-netns-cni\x2dbf63d112\x2d3f28\x2df9dd\x2d95ed\x2dfe1f07381f08.mount: Deactivated successfully. Jul 11 00:22:20.093033 containerd[1471]: time="2025-07-11T00:22:20.092144501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d7b457878-lc4l4,Uid:dd2a0a29-d3d9-441d-9a96-7763124a810f,Namespace:calico-system,Attempt:0,}" Jul 11 00:22:20.181295 systemd[1]: Started sshd@11-10.0.0.89:22-10.0.0.1:38996.service - OpenSSH per-connection server daemon (10.0.0.1:38996). Jul 11 00:22:20.236459 sshd[4359]: Accepted publickey for core from 10.0.0.1 port 38996 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:22:20.237522 sshd[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:22:20.244996 systemd-logind[1453]: New session 12 of user core. Jul 11 00:22:20.254018 systemd[1]: Started session-12.scope - Session 12 of User core. 
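Annotation: note the pattern in the sandbox teardowns in this stretch of the journal: "Workload's veth was already gone. Nothing to do." and "Asked to release address but it doesn't exist. Ignoring" are warnings, not failures. The CNI spec expects DEL to be idempotent, since kubelet retries teardown after the earlier errors, so the plugin treats missing resources as already-cleaned state and the teardown still "returns successfully". A sketch of that pattern, with release() as a hypothetical stand-in for the IPAM or veth cleanup call:

    package main

    import (
    	"errors"
    	"fmt"
    )

    // errNotFound stands in for "address/veth does not exist".
    var errNotFound = errors.New("not found")

    // deleteIdempotent treats a missing resource as success, so repeated
    // CNI DEL attempts converge instead of failing forever.
    func deleteIdempotent(release func() error) error {
    	if err := release(); err != nil {
    		if errors.Is(err, errNotFound) {
    			fmt.Println("asked to release but it doesn't exist; ignoring")
    			return nil
    		}
    		return err
    	}
    	return nil
    }

    func main() {
    	_ = deleteIdempotent(func() error { return errNotFound }) // logs and succeeds
    }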
Jul 11 00:22:20.254762 systemd-networkd[1394]: cali0d7854ba5a1: Link UP Jul 11 00:22:20.257752 systemd-networkd[1394]: cali0d7854ba5a1: Gained carrier Jul 11 00:22:20.280906 containerd[1471]: 2025-07-11 00:22:20.082 [INFO][4325] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--nxg64-eth0 goldmane-768f4c5c69- calico-system b897360e-69c8-4b60-abf3-671418db329a 1103 0 2025-07-11 00:21:26 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-nxg64 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0d7854ba5a1 [] [] }} ContainerID="64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86" Namespace="calico-system" Pod="goldmane-768f4c5c69-nxg64" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nxg64-" Jul 11 00:22:20.280906 containerd[1471]: 2025-07-11 00:22:20.083 [INFO][4325] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86" Namespace="calico-system" Pod="goldmane-768f4c5c69-nxg64" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nxg64-eth0" Jul 11 00:22:20.280906 containerd[1471]: 2025-07-11 00:22:20.140 [INFO][4339] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86" HandleID="k8s-pod-network.64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86" Workload="localhost-k8s-goldmane--768f4c5c69--nxg64-eth0" Jul 11 00:22:20.280906 containerd[1471]: 2025-07-11 00:22:20.141 [INFO][4339] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86" HandleID="k8s-pod-network.64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86" Workload="localhost-k8s-goldmane--768f4c5c69--nxg64-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002defe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-nxg64", "timestamp":"2025-07-11 00:22:20.140959522 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:22:20.280906 containerd[1471]: 2025-07-11 00:22:20.141 [INFO][4339] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:20.280906 containerd[1471]: 2025-07-11 00:22:20.141 [INFO][4339] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:22:20.280906 containerd[1471]: 2025-07-11 00:22:20.141 [INFO][4339] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:22:20.280906 containerd[1471]: 2025-07-11 00:22:20.162 [INFO][4339] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86" host="localhost" Jul 11 00:22:20.280906 containerd[1471]: 2025-07-11 00:22:20.196 [INFO][4339] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:22:20.280906 containerd[1471]: 2025-07-11 00:22:20.207 [INFO][4339] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:22:20.280906 containerd[1471]: 2025-07-11 00:22:20.212 [INFO][4339] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:20.280906 containerd[1471]: 2025-07-11 00:22:20.217 [INFO][4339] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:20.280906 containerd[1471]: 2025-07-11 00:22:20.217 [INFO][4339] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86" host="localhost" Jul 11 00:22:20.280906 containerd[1471]: 2025-07-11 00:22:20.220 [INFO][4339] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86 Jul 11 00:22:20.280906 containerd[1471]: 2025-07-11 00:22:20.228 [INFO][4339] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86" host="localhost" Jul 11 00:22:20.280906 containerd[1471]: 2025-07-11 00:22:20.241 [INFO][4339] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86" host="localhost" Jul 11 00:22:20.280906 containerd[1471]: 2025-07-11 00:22:20.241 [INFO][4339] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86" host="localhost" Jul 11 00:22:20.280906 containerd[1471]: 2025-07-11 00:22:20.241 [INFO][4339] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
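Annotation: the ipam entries above are Calico's block-affinity allocation in order: take the host-wide IPAM lock, look up the block affine to this host (192.168.88.128/26), load it, claim one address, write the block back to the datastore, release the lock. The goldmane pod receives 192.168.88.129, and the same block hands out .130 and .131 to the next two pods later in the log. A quick net/netip check of what that block spans:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	// The block affine to host "localhost" in the IPAM entries above.
    	block := netip.MustParsePrefix("192.168.88.128/26")
    	fmt.Println("first address:", block.Addr())      // 192.168.88.128
    	fmt.Println("block size:", 1<<(32-block.Bits())) // 64 addresses
    	fmt.Println("contains .129:", block.Contains(netip.MustParseAddr("192.168.88.129")))
    }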
Jul 11 00:22:20.280906 containerd[1471]: 2025-07-11 00:22:20.241 [INFO][4339] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86" HandleID="k8s-pod-network.64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86" Workload="localhost-k8s-goldmane--768f4c5c69--nxg64-eth0" Jul 11 00:22:20.282060 containerd[1471]: 2025-07-11 00:22:20.245 [INFO][4325] cni-plugin/k8s.go 418: Populated endpoint ContainerID="64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86" Namespace="calico-system" Pod="goldmane-768f4c5c69-nxg64" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nxg64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--nxg64-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b897360e-69c8-4b60-abf3-671418db329a", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-nxg64", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0d7854ba5a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:20.282060 containerd[1471]: 2025-07-11 00:22:20.246 [INFO][4325] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86" Namespace="calico-system" Pod="goldmane-768f4c5c69-nxg64" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nxg64-eth0" Jul 11 00:22:20.282060 containerd[1471]: 2025-07-11 00:22:20.246 [INFO][4325] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0d7854ba5a1 ContainerID="64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86" Namespace="calico-system" Pod="goldmane-768f4c5c69-nxg64" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nxg64-eth0" Jul 11 00:22:20.282060 containerd[1471]: 2025-07-11 00:22:20.257 [INFO][4325] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86" Namespace="calico-system" Pod="goldmane-768f4c5c69-nxg64" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nxg64-eth0" Jul 11 00:22:20.282060 containerd[1471]: 2025-07-11 00:22:20.258 [INFO][4325] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86" Namespace="calico-system" Pod="goldmane-768f4c5c69-nxg64" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nxg64-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--nxg64-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b897360e-69c8-4b60-abf3-671418db329a", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86", Pod:"goldmane-768f4c5c69-nxg64", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0d7854ba5a1", MAC:"12:d1:52:36:3c:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:20.282060 containerd[1471]: 2025-07-11 00:22:20.277 [INFO][4325] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86" Namespace="calico-system" Pod="goldmane-768f4c5c69-nxg64" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--nxg64-eth0" Jul 11 00:22:20.566460 containerd[1471]: time="2025-07-11T00:22:20.566349989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:20.566460 containerd[1471]: time="2025-07-11T00:22:20.566428832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:20.569512 containerd[1471]: time="2025-07-11T00:22:20.566969568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:20.569512 containerd[1471]: time="2025-07-11T00:22:20.567991484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:20.586050 systemd-networkd[1394]: cali036527a4fb4: Link UP Jul 11 00:22:20.588163 systemd-networkd[1394]: cali036527a4fb4: Gained carrier Jul 11 00:22:20.601466 systemd[1]: Started cri-containerd-64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86.scope - libcontainer container 64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86. 
Jul 11 00:22:20.621635 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:22:20.658120 containerd[1471]: time="2025-07-11T00:22:20.657931155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-nxg64,Uid:b897360e-69c8-4b60-abf3-671418db329a,Namespace:calico-system,Attempt:1,} returns sandbox id \"64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86\"" Jul 11 00:22:20.659775 containerd[1471]: time="2025-07-11T00:22:20.659631673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 11 00:22:20.767089 containerd[1471]: 2025-07-11 00:22:20.209 [INFO][4347] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6d7b457878--lc4l4-eth0 whisker-6d7b457878- calico-system dd2a0a29-d3d9-441d-9a96-7763124a810f 1105 0 2025-07-11 00:22:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6d7b457878 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6d7b457878-lc4l4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali036527a4fb4 [] [] }} ContainerID="b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843" Namespace="calico-system" Pod="whisker-6d7b457878-lc4l4" WorkloadEndpoint="localhost-k8s-whisker--6d7b457878--lc4l4-" Jul 11 00:22:20.767089 containerd[1471]: 2025-07-11 00:22:20.209 [INFO][4347] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843" Namespace="calico-system" Pod="whisker-6d7b457878-lc4l4" WorkloadEndpoint="localhost-k8s-whisker--6d7b457878--lc4l4-eth0" Jul 11 00:22:20.767089 containerd[1471]: 2025-07-11 00:22:20.270 [INFO][4365] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843" HandleID="k8s-pod-network.b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843" Workload="localhost-k8s-whisker--6d7b457878--lc4l4-eth0" Jul 11 00:22:20.767089 containerd[1471]: 2025-07-11 00:22:20.270 [INFO][4365] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843" HandleID="k8s-pod-network.b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843" Workload="localhost-k8s-whisker--6d7b457878--lc4l4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d7ab0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6d7b457878-lc4l4", "timestamp":"2025-07-11 00:22:20.270343594 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:22:20.767089 containerd[1471]: 2025-07-11 00:22:20.271 [INFO][4365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:20.767089 containerd[1471]: 2025-07-11 00:22:20.271 [INFO][4365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:22:20.767089 containerd[1471]: 2025-07-11 00:22:20.271 [INFO][4365] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:22:20.767089 containerd[1471]: 2025-07-11 00:22:20.292 [INFO][4365] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843" host="localhost" Jul 11 00:22:20.767089 containerd[1471]: 2025-07-11 00:22:20.301 [INFO][4365] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:22:20.767089 containerd[1471]: 2025-07-11 00:22:20.308 [INFO][4365] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:22:20.767089 containerd[1471]: 2025-07-11 00:22:20.312 [INFO][4365] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:20.767089 containerd[1471]: 2025-07-11 00:22:20.316 [INFO][4365] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:20.767089 containerd[1471]: 2025-07-11 00:22:20.316 [INFO][4365] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843" host="localhost" Jul 11 00:22:20.767089 containerd[1471]: 2025-07-11 00:22:20.319 [INFO][4365] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843 Jul 11 00:22:20.767089 containerd[1471]: 2025-07-11 00:22:20.370 [INFO][4365] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843" host="localhost" Jul 11 00:22:20.767089 containerd[1471]: 2025-07-11 00:22:20.575 [INFO][4365] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843" host="localhost" Jul 11 00:22:20.767089 containerd[1471]: 2025-07-11 00:22:20.576 [INFO][4365] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843" host="localhost" Jul 11 00:22:20.767089 containerd[1471]: 2025-07-11 00:22:20.576 [INFO][4365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:22:20.767089 containerd[1471]: 2025-07-11 00:22:20.576 [INFO][4365] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843" HandleID="k8s-pod-network.b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843" Workload="localhost-k8s-whisker--6d7b457878--lc4l4-eth0" Jul 11 00:22:20.768173 containerd[1471]: 2025-07-11 00:22:20.581 [INFO][4347] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843" Namespace="calico-system" Pod="whisker-6d7b457878-lc4l4" WorkloadEndpoint="localhost-k8s-whisker--6d7b457878--lc4l4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6d7b457878--lc4l4-eth0", GenerateName:"whisker-6d7b457878-", Namespace:"calico-system", SelfLink:"", UID:"dd2a0a29-d3d9-441d-9a96-7763124a810f", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 22, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d7b457878", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6d7b457878-lc4l4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali036527a4fb4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:20.768173 containerd[1471]: 2025-07-11 00:22:20.581 [INFO][4347] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843" Namespace="calico-system" Pod="whisker-6d7b457878-lc4l4" WorkloadEndpoint="localhost-k8s-whisker--6d7b457878--lc4l4-eth0" Jul 11 00:22:20.768173 containerd[1471]: 2025-07-11 00:22:20.581 [INFO][4347] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali036527a4fb4 ContainerID="b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843" Namespace="calico-system" Pod="whisker-6d7b457878-lc4l4" WorkloadEndpoint="localhost-k8s-whisker--6d7b457878--lc4l4-eth0" Jul 11 00:22:20.768173 containerd[1471]: 2025-07-11 00:22:20.589 [INFO][4347] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843" Namespace="calico-system" Pod="whisker-6d7b457878-lc4l4" WorkloadEndpoint="localhost-k8s-whisker--6d7b457878--lc4l4-eth0" Jul 11 00:22:20.768173 containerd[1471]: 2025-07-11 00:22:20.590 [INFO][4347] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843" Namespace="calico-system" Pod="whisker-6d7b457878-lc4l4" WorkloadEndpoint="localhost-k8s-whisker--6d7b457878--lc4l4-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6d7b457878--lc4l4-eth0", GenerateName:"whisker-6d7b457878-", Namespace:"calico-system", SelfLink:"", UID:"dd2a0a29-d3d9-441d-9a96-7763124a810f", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 22, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d7b457878", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843", Pod:"whisker-6d7b457878-lc4l4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali036527a4fb4", MAC:"de:01:37:ea:05:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:20.768173 containerd[1471]: 2025-07-11 00:22:20.760 [INFO][4347] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843" Namespace="calico-system" Pod="whisker-6d7b457878-lc4l4" WorkloadEndpoint="localhost-k8s-whisker--6d7b457878--lc4l4-eth0" Jul 11 00:22:20.781405 sshd[4359]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:20.788532 systemd[1]: sshd@11-10.0.0.89:22-10.0.0.1:38996.service: Deactivated successfully. Jul 11 00:22:20.791821 systemd[1]: session-12.scope: Deactivated successfully. Jul 11 00:22:20.794767 systemd-logind[1453]: Session 12 logged out. Waiting for processes to exit. Jul 11 00:22:20.798489 systemd-logind[1453]: Removed session 12. Jul 11 00:22:20.820526 containerd[1471]: time="2025-07-11T00:22:20.820275798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:20.820526 containerd[1471]: time="2025-07-11T00:22:20.820357887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:20.820526 containerd[1471]: time="2025-07-11T00:22:20.820373938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:20.820526 containerd[1471]: time="2025-07-11T00:22:20.820495002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:20.843243 systemd[1]: Started cri-containerd-b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843.scope - libcontainer container b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843. 
Jul 11 00:22:20.857163 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:22:20.887638 containerd[1471]: time="2025-07-11T00:22:20.887570477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d7b457878-lc4l4,Uid:dd2a0a29-d3d9-441d-9a96-7763124a810f,Namespace:calico-system,Attempt:0,} returns sandbox id \"b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843\"" Jul 11 00:22:21.503543 containerd[1471]: time="2025-07-11T00:22:21.503478220Z" level=info msg="StopPodSandbox for \"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\"" Jul 11 00:22:21.503765 containerd[1471]: time="2025-07-11T00:22:21.503502917Z" level=info msg="StopPodSandbox for \"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\"" Jul 11 00:22:21.521496 systemd-networkd[1394]: cali0d7854ba5a1: Gained IPv6LL Jul 11 00:22:21.649358 systemd-networkd[1394]: cali036527a4fb4: Gained IPv6LL Jul 11 00:22:21.735833 containerd[1471]: 2025-07-11 00:22:21.675 [INFO][4535] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" Jul 11 00:22:21.735833 containerd[1471]: 2025-07-11 00:22:21.675 [INFO][4535] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" iface="eth0" netns="/var/run/netns/cni-5a822084-2f91-950e-3db0-1b6d1d3752f5" Jul 11 00:22:21.735833 containerd[1471]: 2025-07-11 00:22:21.676 [INFO][4535] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" iface="eth0" netns="/var/run/netns/cni-5a822084-2f91-950e-3db0-1b6d1d3752f5" Jul 11 00:22:21.735833 containerd[1471]: 2025-07-11 00:22:21.676 [INFO][4535] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" iface="eth0" netns="/var/run/netns/cni-5a822084-2f91-950e-3db0-1b6d1d3752f5" Jul 11 00:22:21.735833 containerd[1471]: 2025-07-11 00:22:21.676 [INFO][4535] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" Jul 11 00:22:21.735833 containerd[1471]: 2025-07-11 00:22:21.676 [INFO][4535] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" Jul 11 00:22:21.735833 containerd[1471]: 2025-07-11 00:22:21.706 [INFO][4551] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" HandleID="k8s-pod-network.00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" Workload="localhost-k8s-csi--node--driver--24rnq-eth0" Jul 11 00:22:21.735833 containerd[1471]: 2025-07-11 00:22:21.706 [INFO][4551] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:21.735833 containerd[1471]: 2025-07-11 00:22:21.706 [INFO][4551] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:21.735833 containerd[1471]: 2025-07-11 00:22:21.721 [WARNING][4551] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" HandleID="k8s-pod-network.00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" Workload="localhost-k8s-csi--node--driver--24rnq-eth0" Jul 11 00:22:21.735833 containerd[1471]: 2025-07-11 00:22:21.721 [INFO][4551] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" HandleID="k8s-pod-network.00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" Workload="localhost-k8s-csi--node--driver--24rnq-eth0" Jul 11 00:22:21.735833 containerd[1471]: 2025-07-11 00:22:21.725 [INFO][4551] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:21.735833 containerd[1471]: 2025-07-11 00:22:21.731 [INFO][4535] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" Jul 11 00:22:21.737216 containerd[1471]: time="2025-07-11T00:22:21.737165430Z" level=info msg="TearDown network for sandbox \"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\" successfully" Jul 11 00:22:21.737216 containerd[1471]: time="2025-07-11T00:22:21.737201339Z" level=info msg="StopPodSandbox for \"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\" returns successfully" Jul 11 00:22:21.739116 containerd[1471]: time="2025-07-11T00:22:21.738639078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-24rnq,Uid:ef0f4240-50c1-431a-b911-54802b65a3ca,Namespace:calico-system,Attempt:1,}" Jul 11 00:22:21.740380 systemd[1]: run-netns-cni\x2d5a822084\x2d2f91\x2d950e\x2d3db0\x2d1b6d1d3752f5.mount: Deactivated successfully. Jul 11 00:22:21.775342 containerd[1471]: 2025-07-11 00:22:21.722 [INFO][4534] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" Jul 11 00:22:21.775342 containerd[1471]: 2025-07-11 00:22:21.722 [INFO][4534] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" iface="eth0" netns="/var/run/netns/cni-b360661c-7cd7-34c7-8031-112fb026d029" Jul 11 00:22:21.775342 containerd[1471]: 2025-07-11 00:22:21.722 [INFO][4534] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" iface="eth0" netns="/var/run/netns/cni-b360661c-7cd7-34c7-8031-112fb026d029" Jul 11 00:22:21.775342 containerd[1471]: 2025-07-11 00:22:21.723 [INFO][4534] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" iface="eth0" netns="/var/run/netns/cni-b360661c-7cd7-34c7-8031-112fb026d029" Jul 11 00:22:21.775342 containerd[1471]: 2025-07-11 00:22:21.723 [INFO][4534] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" Jul 11 00:22:21.775342 containerd[1471]: 2025-07-11 00:22:21.723 [INFO][4534] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" Jul 11 00:22:21.775342 containerd[1471]: 2025-07-11 00:22:21.752 [INFO][4560] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" HandleID="k8s-pod-network.55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" Workload="localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0" Jul 11 00:22:21.775342 containerd[1471]: 2025-07-11 00:22:21.752 [INFO][4560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:21.775342 containerd[1471]: 2025-07-11 00:22:21.752 [INFO][4560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:21.775342 containerd[1471]: 2025-07-11 00:22:21.760 [WARNING][4560] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" HandleID="k8s-pod-network.55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" Workload="localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0" Jul 11 00:22:21.775342 containerd[1471]: 2025-07-11 00:22:21.761 [INFO][4560] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" HandleID="k8s-pod-network.55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" Workload="localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0" Jul 11 00:22:21.775342 containerd[1471]: 2025-07-11 00:22:21.764 [INFO][4560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:21.775342 containerd[1471]: 2025-07-11 00:22:21.768 [INFO][4534] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" Jul 11 00:22:21.775342 containerd[1471]: time="2025-07-11T00:22:21.773117714Z" level=info msg="TearDown network for sandbox \"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\" successfully" Jul 11 00:22:21.775342 containerd[1471]: time="2025-07-11T00:22:21.773165266Z" level=info msg="StopPodSandbox for \"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\" returns successfully" Jul 11 00:22:21.775342 containerd[1471]: time="2025-07-11T00:22:21.774710683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zhf8q,Uid:4fb87ad9-16fb-494a-87eb-605af4502d26,Namespace:kube-system,Attempt:1,}" Jul 11 00:22:21.776321 kubelet[2568]: E0711 00:22:21.773718 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:21.781122 systemd[1]: run-netns-cni\x2db360661c\x2d7cd7\x2d34c7\x2d8031\x2d112fb026d029.mount: Deactivated successfully. 
Jul 11 00:22:22.501579 containerd[1471]: time="2025-07-11T00:22:22.501520640Z" level=info msg="StopPodSandbox for \"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\"" Jul 11 00:22:22.503133 containerd[1471]: time="2025-07-11T00:22:22.501843704Z" level=info msg="StopPodSandbox for \"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\"" Jul 11 00:22:22.921807 systemd-networkd[1394]: cali5080cd91dc2: Link UP Jul 11 00:22:22.928798 systemd-networkd[1394]: cali5080cd91dc2: Gained carrier Jul 11 00:22:22.968335 containerd[1471]: 2025-07-11 00:22:22.371 [INFO][4574] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--24rnq-eth0 csi-node-driver- calico-system ef0f4240-50c1-431a-b911-54802b65a3ca 1122 0 2025-07-11 00:21:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-24rnq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5080cd91dc2 [] [] }} ContainerID="3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3" Namespace="calico-system" Pod="csi-node-driver-24rnq" WorkloadEndpoint="localhost-k8s-csi--node--driver--24rnq-" Jul 11 00:22:22.968335 containerd[1471]: 2025-07-11 00:22:22.372 [INFO][4574] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3" Namespace="calico-system" Pod="csi-node-driver-24rnq" WorkloadEndpoint="localhost-k8s-csi--node--driver--24rnq-eth0" Jul 11 00:22:22.968335 containerd[1471]: 2025-07-11 00:22:22.422 [INFO][4597] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3" HandleID="k8s-pod-network.3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3" Workload="localhost-k8s-csi--node--driver--24rnq-eth0" Jul 11 00:22:22.968335 containerd[1471]: 2025-07-11 00:22:22.422 [INFO][4597] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3" HandleID="k8s-pod-network.3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3" Workload="localhost-k8s-csi--node--driver--24rnq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139ad0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-24rnq", "timestamp":"2025-07-11 00:22:22.422318113 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:22:22.968335 containerd[1471]: 2025-07-11 00:22:22.423 [INFO][4597] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:22.968335 containerd[1471]: 2025-07-11 00:22:22.423 [INFO][4597] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
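
Each assignment and release above is bracketed by "About to acquire host-wide IPAM lock" / "Acquired host-wide IPAM lock" / "Released host-wide IPAM lock": Calico serializes IPAM operations per node so that concurrent CNI invocations (here [4551] and [4560]) cannot race on the same allocation block. The sketch below shows only that ordering discipline with an in-process mutex; each CNI invocation is really a separate process, so the actual lock has to hold across processes (a file lock, on my understanding), and this is illustrative rather than Calico's code:

package main

import (
    "fmt"
    "sync"
)

// hostIPAMLock stands in for Calico's host-wide IPAM lock; every
// assign or release on this node takes it first, which is why the
// acquire/release pairs in the log always nest cleanly.
var hostIPAMLock sync.Mutex

func releaseAddress(handleID string) {
    fmt.Println("About to acquire host-wide IPAM lock.")
    hostIPAMLock.Lock()
    fmt.Println("Acquired host-wide IPAM lock.")
    defer func() {
        hostIPAMLock.Unlock()
        fmt.Println("Released host-wide IPAM lock.")
    }()
    // ... look up and free the allocation recorded under handleID ...
    fmt.Println("releasing", handleID)
}

func main() {
    var wg sync.WaitGroup
    for _, h := range []string{
        "k8s-pod-network.00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d",
        "k8s-pod-network.55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1",
    } {
        wg.Add(1)
        go func(id string) { defer wg.Done(); releaseAddress(id) }(h)
    }
    wg.Wait()
}
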
Jul 11 00:22:22.968335 containerd[1471]: 2025-07-11 00:22:22.423 [INFO][4597] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:22:22.968335 containerd[1471]: 2025-07-11 00:22:22.435 [INFO][4597] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3" host="localhost" Jul 11 00:22:22.968335 containerd[1471]: 2025-07-11 00:22:22.452 [INFO][4597] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:22:22.968335 containerd[1471]: 2025-07-11 00:22:22.464 [INFO][4597] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:22:22.968335 containerd[1471]: 2025-07-11 00:22:22.468 [INFO][4597] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:22.968335 containerd[1471]: 2025-07-11 00:22:22.519 [INFO][4597] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:22.968335 containerd[1471]: 2025-07-11 00:22:22.519 [INFO][4597] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3" host="localhost" Jul 11 00:22:22.968335 containerd[1471]: 2025-07-11 00:22:22.629 [INFO][4597] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3 Jul 11 00:22:22.968335 containerd[1471]: 2025-07-11 00:22:22.891 [INFO][4597] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3" host="localhost" Jul 11 00:22:22.968335 containerd[1471]: 2025-07-11 00:22:22.905 [INFO][4597] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3" host="localhost" Jul 11 00:22:22.968335 containerd[1471]: 2025-07-11 00:22:22.905 [INFO][4597] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3" host="localhost" Jul 11 00:22:22.968335 containerd[1471]: 2025-07-11 00:22:22.905 [INFO][4597] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
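
The assignment trace above follows a fixed path: look up the host's block affinities, confirm affinity for 192.168.88.128/26, load the block, then claim the next free address in it (192.168.88.131 here, the lower addresses being held by earlier pods). A toy version of "first free address in the block" using net/netip; the used set is invented for the example:

package main

import (
    "fmt"
    "net/netip"
)

func main() {
    block := netip.MustParsePrefix("192.168.88.128/26")

    // Addresses already claimed in this block (made up for the example;
    // in the log, earlier workloads hold the low addresses).
    used := map[netip.Addr]bool{
        netip.MustParseAddr("192.168.88.128"): true, // network address
        netip.MustParseAddr("192.168.88.129"): true,
        netip.MustParseAddr("192.168.88.130"): true,
    }

    for a := block.Addr(); block.Contains(a); a = a.Next() {
        if !used[a] {
            fmt.Println("claimed:", a) // prints 192.168.88.131
            return
        }
    }
    fmt.Println("block exhausted")
}
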
Jul 11 00:22:22.968335 containerd[1471]: 2025-07-11 00:22:22.905 [INFO][4597] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3" HandleID="k8s-pod-network.3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3" Workload="localhost-k8s-csi--node--driver--24rnq-eth0" Jul 11 00:22:22.970118 containerd[1471]: 2025-07-11 00:22:22.913 [INFO][4574] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3" Namespace="calico-system" Pod="csi-node-driver-24rnq" WorkloadEndpoint="localhost-k8s-csi--node--driver--24rnq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--24rnq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ef0f4240-50c1-431a-b911-54802b65a3ca", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-24rnq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5080cd91dc2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:22.970118 containerd[1471]: 2025-07-11 00:22:22.914 [INFO][4574] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3" Namespace="calico-system" Pod="csi-node-driver-24rnq" WorkloadEndpoint="localhost-k8s-csi--node--driver--24rnq-eth0" Jul 11 00:22:22.970118 containerd[1471]: 2025-07-11 00:22:22.914 [INFO][4574] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5080cd91dc2 ContainerID="3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3" Namespace="calico-system" Pod="csi-node-driver-24rnq" WorkloadEndpoint="localhost-k8s-csi--node--driver--24rnq-eth0" Jul 11 00:22:22.970118 containerd[1471]: 2025-07-11 00:22:22.929 [INFO][4574] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3" Namespace="calico-system" Pod="csi-node-driver-24rnq" WorkloadEndpoint="localhost-k8s-csi--node--driver--24rnq-eth0" Jul 11 00:22:22.970118 containerd[1471]: 2025-07-11 00:22:22.930 [INFO][4574] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3" Namespace="calico-system" Pod="csi-node-driver-24rnq" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--24rnq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--24rnq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ef0f4240-50c1-431a-b911-54802b65a3ca", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3", Pod:"csi-node-driver-24rnq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5080cd91dc2", MAC:"de:da:10:e4:c0:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:22.970118 containerd[1471]: 2025-07-11 00:22:22.964 [INFO][4574] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3" Namespace="calico-system" Pod="csi-node-driver-24rnq" WorkloadEndpoint="localhost-k8s-csi--node--driver--24rnq-eth0" Jul 11 00:22:23.031416 systemd-networkd[1394]: calia53a1315ac0: Link UP Jul 11 00:22:23.031975 systemd-networkd[1394]: calia53a1315ac0: Gained carrier Jul 11 00:22:23.066258 containerd[1471]: time="2025-07-11T00:22:23.065825640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:23.066258 containerd[1471]: time="2025-07-11T00:22:23.065962725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:23.066258 containerd[1471]: time="2025-07-11T00:22:23.065982383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:23.066258 containerd[1471]: time="2025-07-11T00:22:23.066159165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:23.074701 containerd[1471]: 2025-07-11 00:22:22.892 [INFO][4634] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" Jul 11 00:22:23.074701 containerd[1471]: 2025-07-11 00:22:22.893 [INFO][4634] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" iface="eth0" netns="/var/run/netns/cni-427aa144-ca61-379c-ee85-6c4b8aa9c562" Jul 11 00:22:23.074701 containerd[1471]: 2025-07-11 00:22:22.893 [INFO][4634] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" iface="eth0" netns="/var/run/netns/cni-427aa144-ca61-379c-ee85-6c4b8aa9c562" Jul 11 00:22:23.074701 containerd[1471]: 2025-07-11 00:22:22.894 [INFO][4634] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" iface="eth0" netns="/var/run/netns/cni-427aa144-ca61-379c-ee85-6c4b8aa9c562" Jul 11 00:22:23.074701 containerd[1471]: 2025-07-11 00:22:22.894 [INFO][4634] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" Jul 11 00:22:23.074701 containerd[1471]: 2025-07-11 00:22:22.894 [INFO][4634] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" Jul 11 00:22:23.074701 containerd[1471]: 2025-07-11 00:22:22.949 [INFO][4653] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" HandleID="k8s-pod-network.93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" Workload="localhost-k8s-coredns--674b8bbfcf--hsp4s-eth0" Jul 11 00:22:23.074701 containerd[1471]: 2025-07-11 00:22:22.950 [INFO][4653] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:23.074701 containerd[1471]: 2025-07-11 00:22:23.022 [INFO][4653] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:23.074701 containerd[1471]: 2025-07-11 00:22:23.035 [WARNING][4653] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" HandleID="k8s-pod-network.93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" Workload="localhost-k8s-coredns--674b8bbfcf--hsp4s-eth0" Jul 11 00:22:23.074701 containerd[1471]: 2025-07-11 00:22:23.035 [INFO][4653] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" HandleID="k8s-pod-network.93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" Workload="localhost-k8s-coredns--674b8bbfcf--hsp4s-eth0" Jul 11 00:22:23.074701 containerd[1471]: 2025-07-11 00:22:23.037 [INFO][4653] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:23.074701 containerd[1471]: 2025-07-11 00:22:23.042 [INFO][4634] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" Jul 11 00:22:23.075559 containerd[1471]: time="2025-07-11T00:22:23.074989829Z" level=info msg="TearDown network for sandbox \"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\" successfully" Jul 11 00:22:23.075559 containerd[1471]: time="2025-07-11T00:22:23.075045005Z" level=info msg="StopPodSandbox for \"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\" returns successfully" Jul 11 00:22:23.089332 systemd[1]: run-netns-cni\x2d427aa144\x2dca61\x2d379c\x2dee85\x2d6c4b8aa9c562.mount: Deactivated successfully. 
Jul 11 00:22:23.090130 kubelet[2568]: E0711 00:22:23.089917 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:23.091889 containerd[1471]: time="2025-07-11T00:22:23.091237006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hsp4s,Uid:85a64a17-b3e6-422f-9756-cb2f80a1643b,Namespace:kube-system,Attempt:1,}" Jul 11 00:22:23.133397 containerd[1471]: 2025-07-11 00:22:22.892 [INFO][4635] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" Jul 11 00:22:23.133397 containerd[1471]: 2025-07-11 00:22:22.894 [INFO][4635] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" iface="eth0" netns="/var/run/netns/cni-b3b1720a-b4d4-7357-c582-4d21cad614d8" Jul 11 00:22:23.133397 containerd[1471]: 2025-07-11 00:22:22.895 [INFO][4635] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" iface="eth0" netns="/var/run/netns/cni-b3b1720a-b4d4-7357-c582-4d21cad614d8" Jul 11 00:22:23.133397 containerd[1471]: 2025-07-11 00:22:22.896 [INFO][4635] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" iface="eth0" netns="/var/run/netns/cni-b3b1720a-b4d4-7357-c582-4d21cad614d8" Jul 11 00:22:23.133397 containerd[1471]: 2025-07-11 00:22:22.896 [INFO][4635] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" Jul 11 00:22:23.133397 containerd[1471]: 2025-07-11 00:22:22.896 [INFO][4635] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" Jul 11 00:22:23.133397 containerd[1471]: 2025-07-11 00:22:22.952 [INFO][4655] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" HandleID="k8s-pod-network.2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" Workload="localhost-k8s-calico--apiserver--6fbf9d5d8f--zclpr-eth0" Jul 11 00:22:23.133397 containerd[1471]: 2025-07-11 00:22:22.953 [INFO][4655] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:23.133397 containerd[1471]: 2025-07-11 00:22:23.037 [INFO][4655] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:23.133397 containerd[1471]: 2025-07-11 00:22:23.071 [WARNING][4655] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" HandleID="k8s-pod-network.2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" Workload="localhost-k8s-calico--apiserver--6fbf9d5d8f--zclpr-eth0" Jul 11 00:22:23.133397 containerd[1471]: 2025-07-11 00:22:23.071 [INFO][4655] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" HandleID="k8s-pod-network.2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" Workload="localhost-k8s-calico--apiserver--6fbf9d5d8f--zclpr-eth0" Jul 11 00:22:23.133397 containerd[1471]: 2025-07-11 00:22:23.089 [INFO][4655] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:23.133397 containerd[1471]: 2025-07-11 00:22:23.108 [INFO][4635] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" Jul 11 00:22:23.134639 containerd[1471]: 2025-07-11 00:22:22.381 [INFO][4579] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0 coredns-674b8bbfcf- kube-system 4fb87ad9-16fb-494a-87eb-605af4502d26 1123 0 2025-07-11 00:20:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-zhf8q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia53a1315ac0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0" Namespace="kube-system" Pod="coredns-674b8bbfcf-zhf8q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zhf8q-" Jul 11 00:22:23.134639 containerd[1471]: 2025-07-11 00:22:22.381 [INFO][4579] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0" Namespace="kube-system" Pod="coredns-674b8bbfcf-zhf8q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0" Jul 11 00:22:23.134639 containerd[1471]: 2025-07-11 00:22:22.435 [INFO][4603] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0" HandleID="k8s-pod-network.badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0" Workload="localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0" Jul 11 00:22:23.134639 containerd[1471]: 2025-07-11 00:22:22.436 [INFO][4603] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0" HandleID="k8s-pod-network.badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0" Workload="localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df630), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-zhf8q", "timestamp":"2025-07-11 00:22:22.4358304 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:22:23.134639 containerd[1471]: 2025-07-11 00:22:22.436 [INFO][4603] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
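
The teardown above also shows the IPAM plugin's two-step release: first by handleID ("k8s-pod-network.<containerID>", ipam_plugin.go 412), and when that handle no longer exists (the WARNING at line 429) it falls back to releasing by workload ID (line 440), which keeps CNI DEL idempotent across repeated or partial teardowns. A schematic of that fallback, with in-memory maps standing in for the datastore:

package main

import "fmt"

// Toy datastore: allocations indexed two ways, as in the log's
// "Releasing address using handleID" / "using workloadID" pair.
// byHandle is already empty, as if a prior DEL removed the handle.
var byHandle = map[string]string{}
var byWorkload = map[string]string{
    "localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0": "192.168.88.132",
}

func release(handleID, workloadID string) {
    if _, ok := byHandle[handleID]; ok {
        delete(byHandle, handleID)
        fmt.Println("released by handleID")
        return
    }
    fmt.Println("WARNING: Asked to release address but it doesn't exist. Ignoring")
    // Fall back to the workload-scoped record so repeated DELs stay safe.
    delete(byWorkload, workloadID)
    fmt.Println("released by workloadID")
}

func main() {
    release("k8s-pod-network.55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1",
        "localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0")
}
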
Jul 11 00:22:23.134639 containerd[1471]: 2025-07-11 00:22:22.906 [INFO][4603] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:23.134639 containerd[1471]: 2025-07-11 00:22:22.906 [INFO][4603] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:22:23.134639 containerd[1471]: 2025-07-11 00:22:22.919 [INFO][4603] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0" host="localhost" Jul 11 00:22:23.134639 containerd[1471]: 2025-07-11 00:22:22.934 [INFO][4603] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:22:23.134639 containerd[1471]: 2025-07-11 00:22:22.971 [INFO][4603] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:22:23.134639 containerd[1471]: 2025-07-11 00:22:22.975 [INFO][4603] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:23.134639 containerd[1471]: 2025-07-11 00:22:22.980 [INFO][4603] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:23.134639 containerd[1471]: 2025-07-11 00:22:22.980 [INFO][4603] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0" host="localhost" Jul 11 00:22:23.134639 containerd[1471]: 2025-07-11 00:22:22.996 [INFO][4603] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0 Jul 11 00:22:23.134639 containerd[1471]: 2025-07-11 00:22:23.011 [INFO][4603] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0" host="localhost" Jul 11 00:22:23.134639 containerd[1471]: 2025-07-11 00:22:23.022 [INFO][4603] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0" host="localhost" Jul 11 00:22:23.134639 containerd[1471]: 2025-07-11 00:22:23.022 [INFO][4603] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0" host="localhost" Jul 11 00:22:23.134639 containerd[1471]: 2025-07-11 00:22:23.022 [INFO][4603] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:22:23.134639 containerd[1471]: 2025-07-11 00:22:23.022 [INFO][4603] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0" HandleID="k8s-pod-network.badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0" Workload="localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0" Jul 11 00:22:23.135346 containerd[1471]: 2025-07-11 00:22:23.026 [INFO][4579] cni-plugin/k8s.go 418: Populated endpoint ContainerID="badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0" Namespace="kube-system" Pod="coredns-674b8bbfcf-zhf8q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4fb87ad9-16fb-494a-87eb-605af4502d26", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 20, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-zhf8q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia53a1315ac0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:23.135346 containerd[1471]: 2025-07-11 00:22:23.026 [INFO][4579] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0" Namespace="kube-system" Pod="coredns-674b8bbfcf-zhf8q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0" Jul 11 00:22:23.135346 containerd[1471]: 2025-07-11 00:22:23.026 [INFO][4579] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia53a1315ac0 ContainerID="badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0" Namespace="kube-system" Pod="coredns-674b8bbfcf-zhf8q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0" Jul 11 00:22:23.135346 containerd[1471]: 2025-07-11 00:22:23.034 [INFO][4579] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0" Namespace="kube-system" Pod="coredns-674b8bbfcf-zhf8q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0" Jul 11 00:22:23.135346 
containerd[1471]: 2025-07-11 00:22:23.034 [INFO][4579] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0" Namespace="kube-system" Pod="coredns-674b8bbfcf-zhf8q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4fb87ad9-16fb-494a-87eb-605af4502d26", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 20, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0", Pod:"coredns-674b8bbfcf-zhf8q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia53a1315ac0", MAC:"fa:90:ea:75:ef:10", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:23.135346 containerd[1471]: 2025-07-11 00:22:23.097 [INFO][4579] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0" Namespace="kube-system" Pod="coredns-674b8bbfcf-zhf8q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0" Jul 11 00:22:23.138550 containerd[1471]: time="2025-07-11T00:22:23.134599290Z" level=info msg="TearDown network for sandbox \"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\" successfully" Jul 11 00:22:23.138550 containerd[1471]: time="2025-07-11T00:22:23.136995608Z" level=info msg="StopPodSandbox for \"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\" returns successfully" Jul 11 00:22:23.146665 containerd[1471]: time="2025-07-11T00:22:23.146602091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fbf9d5d8f-zclpr,Uid:11afdd52-9586-41ad-b277-069a0e6d90ba,Namespace:calico-apiserver,Attempt:1,}" Jul 11 00:22:23.149738 systemd[1]: Started cri-containerd-3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3.scope - libcontainer container 3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3. 
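
The WorkloadEndpoint dumps print port numbers the way Go formats integers in %#v, i.e. in hex: Port:0x35 is 3*16 + 5 = 53 (the dns and dns-tcp ports) and Port:0x23c1 is 2*4096 + 3*256 + 12*16 + 1 = 9153 (the CoreDNS metrics port). A one-liner to confirm:

package main

import "fmt"

func main() {
    // Ports from the endpoint dump above, which Go printed in hex.
    fmt.Println(0x35, 0x23c1) // 53 9153: DNS and CoreDNS metrics
}
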
Jul 11 00:22:23.171842 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:22:23.197749 containerd[1471]: time="2025-07-11T00:22:23.195681285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-24rnq,Uid:ef0f4240-50c1-431a-b911-54802b65a3ca,Namespace:calico-system,Attempt:1,} returns sandbox id \"3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3\"" Jul 11 00:22:23.203159 containerd[1471]: time="2025-07-11T00:22:23.202886942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:23.203159 containerd[1471]: time="2025-07-11T00:22:23.203058573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:23.204131 containerd[1471]: time="2025-07-11T00:22:23.203363592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:23.204291 containerd[1471]: time="2025-07-11T00:22:23.204238712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:23.231315 systemd[1]: Started cri-containerd-badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0.scope - libcontainer container badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0. Jul 11 00:22:23.249819 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:22:23.286020 containerd[1471]: time="2025-07-11T00:22:23.285918828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zhf8q,Uid:4fb87ad9-16fb-494a-87eb-605af4502d26,Namespace:kube-system,Attempt:1,} returns sandbox id \"badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0\"" Jul 11 00:22:23.287126 kubelet[2568]: E0711 00:22:23.286866 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:23.317605 systemd[1]: run-netns-cni\x2db3b1720a\x2db4d4\x2d7357\x2dc582\x2d4d21cad614d8.mount: Deactivated successfully. 
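
The MACs attached to these endpoints (de:da:10:e4:c0:b2 and fa:90:ea:75:ef:10 above, ee:d5:95:ea:fe:fe below) all have the locally-administered bit (0x02) set and the multicast bit (0x01) clear in the first octet, the signature of randomly generated unicast addresses rather than vendor-assigned ones. A sketch of generating a MAC with those properties, assuming random generation is all that is required:

package main

import (
    "crypto/rand"
    "fmt"
    "net"
)

func main() {
    mac := make(net.HardwareAddr, 6)
    if _, err := rand.Read(mac); err != nil {
        panic(err)
    }
    // Force locally-administered (set 0x02) and unicast (clear 0x01).
    mac[0] = (mac[0] | 0x02) &^ 0x01
    fmt.Println(mac) // e.g. an address in the same class as de:da:10:e4:c0:b2
}
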
Jul 11 00:22:23.501411 containerd[1471]: time="2025-07-11T00:22:23.501349711Z" level=info msg="StopPodSandbox for \"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\"" Jul 11 00:22:23.501653 containerd[1471]: time="2025-07-11T00:22:23.501626986Z" level=info msg="StopPodSandbox for \"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\"" Jul 11 00:22:23.839800 containerd[1471]: time="2025-07-11T00:22:23.839514095Z" level=info msg="CreateContainer within sandbox \"badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:22:24.209438 systemd-networkd[1394]: calia53a1315ac0: Gained IPv6LL Jul 11 00:22:24.465370 systemd-networkd[1394]: cali5080cd91dc2: Gained IPv6LL Jul 11 00:22:24.818558 systemd-networkd[1394]: cali0aae2711af3: Link UP Jul 11 00:22:24.820685 systemd-networkd[1394]: cali0aae2711af3: Gained carrier Jul 11 00:22:24.831960 containerd[1471]: 2025-07-11 00:22:24.535 [INFO][4799] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" Jul 11 00:22:24.831960 containerd[1471]: 2025-07-11 00:22:24.536 [INFO][4799] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" iface="eth0" netns="/var/run/netns/cni-b6a8a00e-9827-3d1b-c799-bda990efd194" Jul 11 00:22:24.831960 containerd[1471]: 2025-07-11 00:22:24.536 [INFO][4799] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" iface="eth0" netns="/var/run/netns/cni-b6a8a00e-9827-3d1b-c799-bda990efd194" Jul 11 00:22:24.831960 containerd[1471]: 2025-07-11 00:22:24.536 [INFO][4799] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" iface="eth0" netns="/var/run/netns/cni-b6a8a00e-9827-3d1b-c799-bda990efd194" Jul 11 00:22:24.831960 containerd[1471]: 2025-07-11 00:22:24.536 [INFO][4799] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" Jul 11 00:22:24.831960 containerd[1471]: 2025-07-11 00:22:24.536 [INFO][4799] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" Jul 11 00:22:24.831960 containerd[1471]: 2025-07-11 00:22:24.562 [INFO][4832] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" HandleID="k8s-pod-network.8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" Workload="localhost-k8s-calico--apiserver--6fbf9d5d8f--67gld-eth0" Jul 11 00:22:24.831960 containerd[1471]: 2025-07-11 00:22:24.563 [INFO][4832] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:24.831960 containerd[1471]: 2025-07-11 00:22:24.810 [INFO][4832] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:24.831960 containerd[1471]: 2025-07-11 00:22:24.817 [WARNING][4832] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" HandleID="k8s-pod-network.8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" Workload="localhost-k8s-calico--apiserver--6fbf9d5d8f--67gld-eth0" Jul 11 00:22:24.831960 containerd[1471]: 2025-07-11 00:22:24.818 [INFO][4832] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" HandleID="k8s-pod-network.8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" Workload="localhost-k8s-calico--apiserver--6fbf9d5d8f--67gld-eth0" Jul 11 00:22:24.831960 containerd[1471]: 2025-07-11 00:22:24.820 [INFO][4832] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:24.831960 containerd[1471]: 2025-07-11 00:22:24.824 [INFO][4799] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" Jul 11 00:22:24.831789 systemd[1]: run-netns-cni\x2db6a8a00e\x2d9827\x2d3d1b\x2dc799\x2dbda990efd194.mount: Deactivated successfully. Jul 11 00:22:24.833807 containerd[1471]: time="2025-07-11T00:22:24.833731339Z" level=info msg="TearDown network for sandbox \"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\" successfully" Jul 11 00:22:24.833807 containerd[1471]: time="2025-07-11T00:22:24.833763321Z" level=info msg="StopPodSandbox for \"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\" returns successfully" Jul 11 00:22:24.834791 containerd[1471]: time="2025-07-11T00:22:24.834746959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fbf9d5d8f-67gld,Uid:1285ec7c-afc4-4f44-b914-280d299b3f6e,Namespace:calico-apiserver,Attempt:1,}" Jul 11 00:22:25.103421 containerd[1471]: 2025-07-11 00:22:23.620 [INFO][4763] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--hsp4s-eth0 coredns-674b8bbfcf- kube-system 85a64a17-b3e6-422f-9756-cb2f80a1643b 1134 0 2025-07-11 00:20:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-hsp4s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0aae2711af3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1" Namespace="kube-system" Pod="coredns-674b8bbfcf-hsp4s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hsp4s-" Jul 11 00:22:25.103421 containerd[1471]: 2025-07-11 00:22:23.624 [INFO][4763] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1" Namespace="kube-system" Pod="coredns-674b8bbfcf-hsp4s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hsp4s-eth0" Jul 11 00:22:25.103421 containerd[1471]: 2025-07-11 00:22:23.904 [INFO][4822] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1" HandleID="k8s-pod-network.9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1" Workload="localhost-k8s-coredns--674b8bbfcf--hsp4s-eth0" Jul 11 00:22:25.103421 containerd[1471]: 2025-07-11 00:22:23.904 [INFO][4822] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1" HandleID="k8s-pod-network.9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1" Workload="localhost-k8s-coredns--674b8bbfcf--hsp4s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004941c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-hsp4s", "timestamp":"2025-07-11 00:22:23.904100708 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:22:25.103421 containerd[1471]: 2025-07-11 00:22:23.904 [INFO][4822] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:25.103421 containerd[1471]: 2025-07-11 00:22:23.904 [INFO][4822] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:25.103421 containerd[1471]: 2025-07-11 00:22:23.904 [INFO][4822] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:22:25.103421 containerd[1471]: 2025-07-11 00:22:24.631 [INFO][4822] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1" host="localhost" Jul 11 00:22:25.103421 containerd[1471]: 2025-07-11 00:22:24.728 [INFO][4822] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:22:25.103421 containerd[1471]: 2025-07-11 00:22:24.735 [INFO][4822] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:22:25.103421 containerd[1471]: 2025-07-11 00:22:24.737 [INFO][4822] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:25.103421 containerd[1471]: 2025-07-11 00:22:24.740 [INFO][4822] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:25.103421 containerd[1471]: 2025-07-11 00:22:24.740 [INFO][4822] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1" host="localhost" Jul 11 00:22:25.103421 containerd[1471]: 2025-07-11 00:22:24.741 [INFO][4822] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1 Jul 11 00:22:25.103421 containerd[1471]: 2025-07-11 00:22:24.775 [INFO][4822] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1" host="localhost" Jul 11 00:22:25.103421 containerd[1471]: 2025-07-11 00:22:24.810 [INFO][4822] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1" host="localhost" Jul 11 00:22:25.103421 containerd[1471]: 2025-07-11 00:22:24.810 [INFO][4822] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1" host="localhost" Jul 11 00:22:25.103421 containerd[1471]: 2025-07-11 00:22:24.810 [INFO][4822] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:22:25.103421 containerd[1471]: 2025-07-11 00:22:24.810 [INFO][4822] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1" HandleID="k8s-pod-network.9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1" Workload="localhost-k8s-coredns--674b8bbfcf--hsp4s-eth0" Jul 11 00:22:25.104544 containerd[1471]: 2025-07-11 00:22:24.814 [INFO][4763] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1" Namespace="kube-system" Pod="coredns-674b8bbfcf-hsp4s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hsp4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--hsp4s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"85a64a17-b3e6-422f-9756-cb2f80a1643b", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 20, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-hsp4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0aae2711af3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:25.104544 containerd[1471]: 2025-07-11 00:22:24.814 [INFO][4763] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1" Namespace="kube-system" Pod="coredns-674b8bbfcf-hsp4s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hsp4s-eth0" Jul 11 00:22:25.104544 containerd[1471]: 2025-07-11 00:22:24.814 [INFO][4763] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0aae2711af3 ContainerID="9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1" Namespace="kube-system" Pod="coredns-674b8bbfcf-hsp4s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hsp4s-eth0" Jul 11 00:22:25.104544 containerd[1471]: 2025-07-11 00:22:24.819 [INFO][4763] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1" Namespace="kube-system" Pod="coredns-674b8bbfcf-hsp4s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hsp4s-eth0" Jul 11 00:22:25.104544 
containerd[1471]: 2025-07-11 00:22:24.819 [INFO][4763] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1" Namespace="kube-system" Pod="coredns-674b8bbfcf-hsp4s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hsp4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--hsp4s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"85a64a17-b3e6-422f-9756-cb2f80a1643b", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 20, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1", Pod:"coredns-674b8bbfcf-hsp4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0aae2711af3", MAC:"ee:d5:95:ea:fe:fe", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:25.104544 containerd[1471]: 2025-07-11 00:22:24.996 [INFO][4763] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1" Namespace="kube-system" Pod="coredns-674b8bbfcf-hsp4s" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--hsp4s-eth0" Jul 11 00:22:25.156413 containerd[1471]: 2025-07-11 00:22:24.535 [INFO][4807] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" Jul 11 00:22:25.156413 containerd[1471]: 2025-07-11 00:22:24.536 [INFO][4807] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" iface="eth0" netns="/var/run/netns/cni-6c6d6c6e-ed75-70a3-114f-0b334a423445" Jul 11 00:22:25.156413 containerd[1471]: 2025-07-11 00:22:24.536 [INFO][4807] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" iface="eth0" netns="/var/run/netns/cni-6c6d6c6e-ed75-70a3-114f-0b334a423445" Jul 11 00:22:25.156413 containerd[1471]: 2025-07-11 00:22:24.536 [INFO][4807] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" iface="eth0" netns="/var/run/netns/cni-6c6d6c6e-ed75-70a3-114f-0b334a423445" Jul 11 00:22:25.156413 containerd[1471]: 2025-07-11 00:22:24.536 [INFO][4807] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" Jul 11 00:22:25.156413 containerd[1471]: 2025-07-11 00:22:24.536 [INFO][4807] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" Jul 11 00:22:25.156413 containerd[1471]: 2025-07-11 00:22:24.568 [INFO][4834] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" HandleID="k8s-pod-network.6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" Workload="localhost-k8s-calico--kube--controllers--7ddcd977db--n58hn-eth0" Jul 11 00:22:25.156413 containerd[1471]: 2025-07-11 00:22:24.568 [INFO][4834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:25.156413 containerd[1471]: 2025-07-11 00:22:24.820 [INFO][4834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:25.156413 containerd[1471]: 2025-07-11 00:22:24.996 [WARNING][4834] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" HandleID="k8s-pod-network.6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" Workload="localhost-k8s-calico--kube--controllers--7ddcd977db--n58hn-eth0" Jul 11 00:22:25.156413 containerd[1471]: 2025-07-11 00:22:25.099 [INFO][4834] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" HandleID="k8s-pod-network.6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" Workload="localhost-k8s-calico--kube--controllers--7ddcd977db--n58hn-eth0" Jul 11 00:22:25.156413 containerd[1471]: 2025-07-11 00:22:25.149 [INFO][4834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:25.156413 containerd[1471]: 2025-07-11 00:22:25.152 [INFO][4807] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" Jul 11 00:22:25.157671 containerd[1471]: time="2025-07-11T00:22:25.156912370Z" level=info msg="TearDown network for sandbox \"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\" successfully" Jul 11 00:22:25.157671 containerd[1471]: time="2025-07-11T00:22:25.156952658Z" level=info msg="StopPodSandbox for \"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\" returns successfully" Jul 11 00:22:25.158312 containerd[1471]: time="2025-07-11T00:22:25.157975121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7ddcd977db-n58hn,Uid:86e94e6b-c0ad-463c-abf5-b6899adb9e4c,Namespace:calico-system,Attempt:1,}" Jul 11 00:22:25.160729 systemd[1]: run-netns-cni\x2d6c6d6c6e\x2ded75\x2d70a3\x2d114f\x2d0b334a423445.mount: Deactivated successfully. Jul 11 00:22:25.796534 systemd[1]: Started sshd@12-10.0.0.89:22-10.0.0.1:39008.service - OpenSSH per-connection server daemon (10.0.0.1:39008). 
Jul 11 00:22:26.002256 systemd-networkd[1394]: cali0aae2711af3: Gained IPv6LL Jul 11 00:22:26.118179 sshd[4859]: Accepted publickey for core from 10.0.0.1 port 39008 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:22:26.120233 sshd[4859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:22:26.138828 systemd-logind[1453]: New session 13 of user core. Jul 11 00:22:26.142524 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 11 00:22:26.231260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1365905868.mount: Deactivated successfully. Jul 11 00:22:26.639961 containerd[1471]: time="2025-07-11T00:22:26.639788594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:26.639961 containerd[1471]: time="2025-07-11T00:22:26.639861966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:26.639961 containerd[1471]: time="2025-07-11T00:22:26.639882976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:26.640593 containerd[1471]: time="2025-07-11T00:22:26.639996215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:26.681308 systemd[1]: Started cri-containerd-9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1.scope - libcontainer container 9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1. Jul 11 00:22:26.773705 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:22:26.800619 containerd[1471]: time="2025-07-11T00:22:26.800567951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hsp4s,Uid:85a64a17-b3e6-422f-9756-cb2f80a1643b,Namespace:kube-system,Attempt:1,} returns sandbox id \"9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1\"" Jul 11 00:22:26.801586 kubelet[2568]: E0711 00:22:26.801561 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:26.807802 sshd[4859]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:26.812677 systemd[1]: sshd@12-10.0.0.89:22-10.0.0.1:39008.service: Deactivated successfully. Jul 11 00:22:26.815757 systemd[1]: session-13.scope: Deactivated successfully. Jul 11 00:22:26.816771 systemd-logind[1453]: Session 13 logged out. Waiting for processes to exit. Jul 11 00:22:26.818227 systemd-logind[1453]: Removed session 13. Jul 11 00:22:28.476420 containerd[1471]: time="2025-07-11T00:22:28.476354200Z" level=info msg="CreateContainer within sandbox \"9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:22:28.590681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3751626220.mount: Deactivated successfully. Jul 11 00:22:28.595444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2061969114.mount: Deactivated successfully. 
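
"Gained IPv6LL" above means the cali* host-side veths acquired fe80::/64 link-local addresses. The classic derivation is modified EUI-64 from the interface MAC (flip the universal/local bit, splice ff:fe into the middle), though a stack can be configured for stable-privacy addresses instead; a sketch of the EUI-64 form, using the MAC from the coredns endpoint above:

package main

import (
    "fmt"
    "net"
    "net/netip"
)

// linkLocalEUI64 derives fe80::/64 plus modified EUI-64 from a MAC:
// invert the U/L bit of the first octet and insert ff:fe in the middle.
func linkLocalEUI64(mac net.HardwareAddr) netip.Addr {
    var a [16]byte
    a[0], a[1] = 0xfe, 0x80
    a[8] = mac[0] ^ 0x02
    a[9], a[10], a[11] = mac[1], mac[2], 0xff
    a[12], a[13], a[14], a[15] = 0xfe, mac[3], mac[4], mac[5]
    return netip.AddrFrom16(a)
}

func main() {
    mac, _ := net.ParseMAC("ee:d5:95:ea:fe:fe") // cali0aae2711af3's endpoint MAC
    fmt.Println(linkLocalEUI64(mac))            // fe80::ecd5:95ff:feea:fefe
}
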
Jul 11 00:22:28.672150 containerd[1471]: time="2025-07-11T00:22:28.672045201Z" level=info msg="CreateContainer within sandbox \"badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1b6d6ca9f4b4b4135ae6c13538f4cf534572f55424f0eba3050662b7d27fe6fb\"" Jul 11 00:22:28.675480 containerd[1471]: time="2025-07-11T00:22:28.675365882Z" level=info msg="StartContainer for \"1b6d6ca9f4b4b4135ae6c13538f4cf534572f55424f0eba3050662b7d27fe6fb\"" Jul 11 00:22:28.693007 containerd[1471]: time="2025-07-11T00:22:28.692940165Z" level=info msg="CreateContainer within sandbox \"9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6d591af58b8c144a391d3e380c772b8ec85ac125a476fefd15d6a91c7663c6a6\"" Jul 11 00:22:28.694985 containerd[1471]: time="2025-07-11T00:22:28.694254578Z" level=info msg="StartContainer for \"6d591af58b8c144a391d3e380c772b8ec85ac125a476fefd15d6a91c7663c6a6\"" Jul 11 00:22:28.766665 systemd[1]: Started cri-containerd-1b6d6ca9f4b4b4135ae6c13538f4cf534572f55424f0eba3050662b7d27fe6fb.scope - libcontainer container 1b6d6ca9f4b4b4135ae6c13538f4cf534572f55424f0eba3050662b7d27fe6fb. Jul 11 00:22:28.816323 systemd[1]: Started cri-containerd-6d591af58b8c144a391d3e380c772b8ec85ac125a476fefd15d6a91c7663c6a6.scope - libcontainer container 6d591af58b8c144a391d3e380c772b8ec85ac125a476fefd15d6a91c7663c6a6. Jul 11 00:22:29.152401 containerd[1471]: time="2025-07-11T00:22:29.152234179Z" level=info msg="StartContainer for \"6d591af58b8c144a391d3e380c772b8ec85ac125a476fefd15d6a91c7663c6a6\" returns successfully" Jul 11 00:22:29.152821 containerd[1471]: time="2025-07-11T00:22:29.152784409Z" level=info msg="StartContainer for \"1b6d6ca9f4b4b4135ae6c13538f4cf534572f55424f0eba3050662b7d27fe6fb\" returns successfully" Jul 11 00:22:29.187928 systemd-networkd[1394]: cali81190d58625: Link UP Jul 11 00:22:29.190198 systemd-networkd[1394]: cali81190d58625: Gained carrier Jul 11 00:22:29.220716 containerd[1471]: 2025-07-11 00:22:28.685 [INFO][4923] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6fbf9d5d8f--zclpr-eth0 calico-apiserver-6fbf9d5d8f- calico-apiserver 11afdd52-9586-41ad-b277-069a0e6d90ba 1135 0 2025-07-11 00:21:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fbf9d5d8f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6fbf9d5d8f-zclpr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali81190d58625 [] [] }} ContainerID="e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086" Namespace="calico-apiserver" Pod="calico-apiserver-6fbf9d5d8f-zclpr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fbf9d5d8f--zclpr-" Jul 11 00:22:29.220716 containerd[1471]: 2025-07-11 00:22:28.685 [INFO][4923] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086" Namespace="calico-apiserver" Pod="calico-apiserver-6fbf9d5d8f-zclpr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fbf9d5d8f--zclpr-eth0" Jul 11 00:22:29.220716 containerd[1471]: 2025-07-11 00:22:28.831 [INFO][4980] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086" HandleID="k8s-pod-network.e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086" Workload="localhost-k8s-calico--apiserver--6fbf9d5d8f--zclpr-eth0" Jul 11 00:22:29.220716 containerd[1471]: 2025-07-11 00:22:28.833 [INFO][4980] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086" HandleID="k8s-pod-network.e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086" Workload="localhost-k8s-calico--apiserver--6fbf9d5d8f--zclpr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139a80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6fbf9d5d8f-zclpr", "timestamp":"2025-07-11 00:22:28.83160387 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:22:29.220716 containerd[1471]: 2025-07-11 00:22:28.835 [INFO][4980] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:29.220716 containerd[1471]: 2025-07-11 00:22:28.835 [INFO][4980] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:29.220716 containerd[1471]: 2025-07-11 00:22:28.835 [INFO][4980] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:22:29.220716 containerd[1471]: 2025-07-11 00:22:28.874 [INFO][4980] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086" host="localhost" Jul 11 00:22:29.220716 containerd[1471]: 2025-07-11 00:22:28.970 [INFO][4980] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:22:29.220716 containerd[1471]: 2025-07-11 00:22:29.105 [INFO][4980] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:22:29.220716 containerd[1471]: 2025-07-11 00:22:29.108 [INFO][4980] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:29.220716 containerd[1471]: 2025-07-11 00:22:29.111 [INFO][4980] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:29.220716 containerd[1471]: 2025-07-11 00:22:29.111 [INFO][4980] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086" host="localhost" Jul 11 00:22:29.220716 containerd[1471]: 2025-07-11 00:22:29.113 [INFO][4980] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086 Jul 11 00:22:29.220716 containerd[1471]: 2025-07-11 00:22:29.154 [INFO][4980] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086" host="localhost" Jul 11 00:22:29.220716 containerd[1471]: 2025-07-11 00:22:29.177 [INFO][4980] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086" host="localhost" Jul 11 00:22:29.220716 containerd[1471]: 2025-07-11 00:22:29.178 [INFO][4980] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] 
handle="k8s-pod-network.e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086" host="localhost" Jul 11 00:22:29.220716 containerd[1471]: 2025-07-11 00:22:29.178 [INFO][4980] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:29.220716 containerd[1471]: 2025-07-11 00:22:29.178 [INFO][4980] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086" HandleID="k8s-pod-network.e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086" Workload="localhost-k8s-calico--apiserver--6fbf9d5d8f--zclpr-eth0" Jul 11 00:22:29.221620 containerd[1471]: 2025-07-11 00:22:29.182 [INFO][4923] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086" Namespace="calico-apiserver" Pod="calico-apiserver-6fbf9d5d8f-zclpr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fbf9d5d8f--zclpr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fbf9d5d8f--zclpr-eth0", GenerateName:"calico-apiserver-6fbf9d5d8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"11afdd52-9586-41ad-b277-069a0e6d90ba", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fbf9d5d8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6fbf9d5d8f-zclpr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali81190d58625", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:29.221620 containerd[1471]: 2025-07-11 00:22:29.183 [INFO][4923] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086" Namespace="calico-apiserver" Pod="calico-apiserver-6fbf9d5d8f-zclpr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fbf9d5d8f--zclpr-eth0" Jul 11 00:22:29.221620 containerd[1471]: 2025-07-11 00:22:29.183 [INFO][4923] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81190d58625 ContainerID="e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086" Namespace="calico-apiserver" Pod="calico-apiserver-6fbf9d5d8f-zclpr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fbf9d5d8f--zclpr-eth0" Jul 11 00:22:29.221620 containerd[1471]: 2025-07-11 00:22:29.189 [INFO][4923] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086" Namespace="calico-apiserver" Pod="calico-apiserver-6fbf9d5d8f-zclpr" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6fbf9d5d8f--zclpr-eth0" Jul 11 00:22:29.221620 containerd[1471]: 2025-07-11 00:22:29.192 [INFO][4923] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086" Namespace="calico-apiserver" Pod="calico-apiserver-6fbf9d5d8f-zclpr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fbf9d5d8f--zclpr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fbf9d5d8f--zclpr-eth0", GenerateName:"calico-apiserver-6fbf9d5d8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"11afdd52-9586-41ad-b277-069a0e6d90ba", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fbf9d5d8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086", Pod:"calico-apiserver-6fbf9d5d8f-zclpr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali81190d58625", MAC:"0a:3e:71:c6:5d:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:29.221620 containerd[1471]: 2025-07-11 00:22:29.210 [INFO][4923] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086" Namespace="calico-apiserver" Pod="calico-apiserver-6fbf9d5d8f-zclpr" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fbf9d5d8f--zclpr-eth0" Jul 11 00:22:29.267420 containerd[1471]: time="2025-07-11T00:22:29.266348387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:29.267420 containerd[1471]: time="2025-07-11T00:22:29.266437500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:29.267420 containerd[1471]: time="2025-07-11T00:22:29.266453440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:29.267420 containerd[1471]: time="2025-07-11T00:22:29.266579693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:29.285495 systemd-networkd[1394]: cali2091a8857c9: Link UP Jul 11 00:22:29.286602 systemd-networkd[1394]: cali2091a8857c9: Gained carrier Jul 11 00:22:29.308617 systemd[1]: Started cri-containerd-e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086.scope - libcontainer container e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086. Jul 11 00:22:29.324124 containerd[1471]: 2025-07-11 00:22:28.813 [INFO][4964] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7ddcd977db--n58hn-eth0 calico-kube-controllers-7ddcd977db- calico-system 86e94e6b-c0ad-463c-abf5-b6899adb9e4c 1151 0 2025-07-11 00:21:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7ddcd977db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7ddcd977db-n58hn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2091a8857c9 [] [] }} ContainerID="7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce" Namespace="calico-system" Pod="calico-kube-controllers-7ddcd977db-n58hn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7ddcd977db--n58hn-" Jul 11 00:22:29.324124 containerd[1471]: 2025-07-11 00:22:28.817 [INFO][4964] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce" Namespace="calico-system" Pod="calico-kube-controllers-7ddcd977db-n58hn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7ddcd977db--n58hn-eth0" Jul 11 00:22:29.324124 containerd[1471]: 2025-07-11 00:22:28.910 [INFO][5035] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce" HandleID="k8s-pod-network.7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce" Workload="localhost-k8s-calico--kube--controllers--7ddcd977db--n58hn-eth0" Jul 11 00:22:29.324124 containerd[1471]: 2025-07-11 00:22:28.910 [INFO][5035] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce" HandleID="k8s-pod-network.7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce" Workload="localhost-k8s-calico--kube--controllers--7ddcd977db--n58hn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f880), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7ddcd977db-n58hn", "timestamp":"2025-07-11 00:22:28.91041336 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:22:29.324124 containerd[1471]: 2025-07-11 00:22:28.910 [INFO][5035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:29.324124 containerd[1471]: 2025-07-11 00:22:29.178 [INFO][5035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:22:29.324124 containerd[1471]: 2025-07-11 00:22:29.178 [INFO][5035] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:22:29.324124 containerd[1471]: 2025-07-11 00:22:29.190 [INFO][5035] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce" host="localhost" Jul 11 00:22:29.324124 containerd[1471]: 2025-07-11 00:22:29.214 [INFO][5035] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:22:29.324124 containerd[1471]: 2025-07-11 00:22:29.224 [INFO][5035] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:22:29.324124 containerd[1471]: 2025-07-11 00:22:29.229 [INFO][5035] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:29.324124 containerd[1471]: 2025-07-11 00:22:29.235 [INFO][5035] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:29.324124 containerd[1471]: 2025-07-11 00:22:29.235 [INFO][5035] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce" host="localhost" Jul 11 00:22:29.324124 containerd[1471]: 2025-07-11 00:22:29.238 [INFO][5035] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce Jul 11 00:22:29.324124 containerd[1471]: 2025-07-11 00:22:29.249 [INFO][5035] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce" host="localhost" Jul 11 00:22:29.324124 containerd[1471]: 2025-07-11 00:22:29.264 [INFO][5035] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce" host="localhost" Jul 11 00:22:29.324124 containerd[1471]: 2025-07-11 00:22:29.265 [INFO][5035] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce" host="localhost" Jul 11 00:22:29.324124 containerd[1471]: 2025-07-11 00:22:29.265 [INFO][5035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:22:29.324124 containerd[1471]: 2025-07-11 00:22:29.266 [INFO][5035] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce" HandleID="k8s-pod-network.7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce" Workload="localhost-k8s-calico--kube--controllers--7ddcd977db--n58hn-eth0" Jul 11 00:22:29.324943 containerd[1471]: 2025-07-11 00:22:29.274 [INFO][4964] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce" Namespace="calico-system" Pod="calico-kube-controllers-7ddcd977db-n58hn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7ddcd977db--n58hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7ddcd977db--n58hn-eth0", GenerateName:"calico-kube-controllers-7ddcd977db-", Namespace:"calico-system", SelfLink:"", UID:"86e94e6b-c0ad-463c-abf5-b6899adb9e4c", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7ddcd977db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7ddcd977db-n58hn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2091a8857c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:29.324943 containerd[1471]: 2025-07-11 00:22:29.274 [INFO][4964] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce" Namespace="calico-system" Pod="calico-kube-controllers-7ddcd977db-n58hn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7ddcd977db--n58hn-eth0" Jul 11 00:22:29.324943 containerd[1471]: 2025-07-11 00:22:29.274 [INFO][4964] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2091a8857c9 ContainerID="7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce" Namespace="calico-system" Pod="calico-kube-controllers-7ddcd977db-n58hn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7ddcd977db--n58hn-eth0" Jul 11 00:22:29.324943 containerd[1471]: 2025-07-11 00:22:29.288 [INFO][4964] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce" Namespace="calico-system" Pod="calico-kube-controllers-7ddcd977db-n58hn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7ddcd977db--n58hn-eth0" Jul 11 00:22:29.324943 containerd[1471]: 2025-07-11 00:22:29.289 [INFO][4964] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce" Namespace="calico-system" Pod="calico-kube-controllers-7ddcd977db-n58hn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7ddcd977db--n58hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7ddcd977db--n58hn-eth0", GenerateName:"calico-kube-controllers-7ddcd977db-", Namespace:"calico-system", SelfLink:"", UID:"86e94e6b-c0ad-463c-abf5-b6899adb9e4c", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7ddcd977db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce", Pod:"calico-kube-controllers-7ddcd977db-n58hn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2091a8857c9", MAC:"0e:d1:2d:28:20:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:29.324943 containerd[1471]: 2025-07-11 00:22:29.317 [INFO][4964] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce" Namespace="calico-system" Pod="calico-kube-controllers-7ddcd977db-n58hn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7ddcd977db--n58hn-eth0" Jul 11 00:22:29.353726 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:22:29.380543 containerd[1471]: time="2025-07-11T00:22:29.379941092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:29.380543 containerd[1471]: time="2025-07-11T00:22:29.380044921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:29.380543 containerd[1471]: time="2025-07-11T00:22:29.380060301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:29.380543 containerd[1471]: time="2025-07-11T00:22:29.380201403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:29.446237 systemd[1]: Started cri-containerd-7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce.scope - libcontainer container 7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce. 
Jul 11 00:22:29.449995 containerd[1471]: time="2025-07-11T00:22:29.449876178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fbf9d5d8f-zclpr,Uid:11afdd52-9586-41ad-b277-069a0e6d90ba,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086\"" Jul 11 00:22:29.481142 systemd-networkd[1394]: cali58b052721bb: Link UP Jul 11 00:22:29.482805 systemd-networkd[1394]: cali58b052721bb: Gained carrier Jul 11 00:22:29.485948 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:22:29.512104 containerd[1471]: 2025-07-11 00:22:28.822 [INFO][4938] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6fbf9d5d8f--67gld-eth0 calico-apiserver-6fbf9d5d8f- calico-apiserver 1285ec7c-afc4-4f44-b914-280d299b3f6e 1150 0 2025-07-11 00:21:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6fbf9d5d8f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6fbf9d5d8f-67gld eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali58b052721bb [] [] }} ContainerID="03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08" Namespace="calico-apiserver" Pod="calico-apiserver-6fbf9d5d8f-67gld" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fbf9d5d8f--67gld-" Jul 11 00:22:29.512104 containerd[1471]: 2025-07-11 00:22:28.822 [INFO][4938] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08" Namespace="calico-apiserver" Pod="calico-apiserver-6fbf9d5d8f-67gld" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fbf9d5d8f--67gld-eth0" Jul 11 00:22:29.512104 containerd[1471]: 2025-07-11 00:22:28.987 [INFO][5037] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08" HandleID="k8s-pod-network.03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08" Workload="localhost-k8s-calico--apiserver--6fbf9d5d8f--67gld-eth0" Jul 11 00:22:29.512104 containerd[1471]: 2025-07-11 00:22:29.010 [INFO][5037] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08" HandleID="k8s-pod-network.03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08" Workload="localhost-k8s-calico--apiserver--6fbf9d5d8f--67gld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001033f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6fbf9d5d8f-67gld", "timestamp":"2025-07-11 00:22:28.987882607 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:22:29.512104 containerd[1471]: 2025-07-11 00:22:29.011 [INFO][5037] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:29.512104 containerd[1471]: 2025-07-11 00:22:29.265 [INFO][5037] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:22:29.512104 containerd[1471]: 2025-07-11 00:22:29.265 [INFO][5037] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:22:29.512104 containerd[1471]: 2025-07-11 00:22:29.293 [INFO][5037] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08" host="localhost" Jul 11 00:22:29.512104 containerd[1471]: 2025-07-11 00:22:29.318 [INFO][5037] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:22:29.512104 containerd[1471]: 2025-07-11 00:22:29.347 [INFO][5037] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:22:29.512104 containerd[1471]: 2025-07-11 00:22:29.351 [INFO][5037] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:29.512104 containerd[1471]: 2025-07-11 00:22:29.356 [INFO][5037] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:22:29.512104 containerd[1471]: 2025-07-11 00:22:29.356 [INFO][5037] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08" host="localhost" Jul 11 00:22:29.512104 containerd[1471]: 2025-07-11 00:22:29.360 [INFO][5037] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08 Jul 11 00:22:29.512104 containerd[1471]: 2025-07-11 00:22:29.375 [INFO][5037] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08" host="localhost" Jul 11 00:22:29.512104 containerd[1471]: 2025-07-11 00:22:29.411 [INFO][5037] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08" host="localhost" Jul 11 00:22:29.512104 containerd[1471]: 2025-07-11 00:22:29.413 [INFO][5037] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08" host="localhost" Jul 11 00:22:29.512104 containerd[1471]: 2025-07-11 00:22:29.417 [INFO][5037] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:22:29.512104 containerd[1471]: 2025-07-11 00:22:29.417 [INFO][5037] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08" HandleID="k8s-pod-network.03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08" Workload="localhost-k8s-calico--apiserver--6fbf9d5d8f--67gld-eth0" Jul 11 00:22:29.515448 containerd[1471]: 2025-07-11 00:22:29.473 [INFO][4938] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08" Namespace="calico-apiserver" Pod="calico-apiserver-6fbf9d5d8f-67gld" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fbf9d5d8f--67gld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fbf9d5d8f--67gld-eth0", GenerateName:"calico-apiserver-6fbf9d5d8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1285ec7c-afc4-4f44-b914-280d299b3f6e", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fbf9d5d8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6fbf9d5d8f-67gld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58b052721bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:29.515448 containerd[1471]: 2025-07-11 00:22:29.474 [INFO][4938] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08" Namespace="calico-apiserver" Pod="calico-apiserver-6fbf9d5d8f-67gld" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fbf9d5d8f--67gld-eth0" Jul 11 00:22:29.515448 containerd[1471]: 2025-07-11 00:22:29.474 [INFO][4938] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58b052721bb ContainerID="03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08" Namespace="calico-apiserver" Pod="calico-apiserver-6fbf9d5d8f-67gld" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fbf9d5d8f--67gld-eth0" Jul 11 00:22:29.515448 containerd[1471]: 2025-07-11 00:22:29.483 [INFO][4938] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08" Namespace="calico-apiserver" Pod="calico-apiserver-6fbf9d5d8f-67gld" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fbf9d5d8f--67gld-eth0" Jul 11 00:22:29.515448 containerd[1471]: 2025-07-11 00:22:29.485 [INFO][4938] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08" Namespace="calico-apiserver" Pod="calico-apiserver-6fbf9d5d8f-67gld" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fbf9d5d8f--67gld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fbf9d5d8f--67gld-eth0", GenerateName:"calico-apiserver-6fbf9d5d8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1285ec7c-afc4-4f44-b914-280d299b3f6e", ResourceVersion:"1150", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fbf9d5d8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08", Pod:"calico-apiserver-6fbf9d5d8f-67gld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58b052721bb", MAC:"be:86:37:1c:5a:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:29.515448 containerd[1471]: 2025-07-11 00:22:29.502 [INFO][4938] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08" Namespace="calico-apiserver" Pod="calico-apiserver-6fbf9d5d8f-67gld" WorkloadEndpoint="localhost-k8s-calico--apiserver--6fbf9d5d8f--67gld-eth0" Jul 11 00:22:29.572110 containerd[1471]: time="2025-07-11T00:22:29.569101704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:22:29.572110 containerd[1471]: time="2025-07-11T00:22:29.569219622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:22:29.572110 containerd[1471]: time="2025-07-11T00:22:29.569239951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:29.573377 containerd[1471]: time="2025-07-11T00:22:29.572931244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:22:29.625786 containerd[1471]: time="2025-07-11T00:22:29.625141919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7ddcd977db-n58hn,Uid:86e94e6b-c0ad-463c-abf5-b6899adb9e4c,Namespace:calico-system,Attempt:1,} returns sandbox id \"7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce\"" Jul 11 00:22:29.653495 systemd[1]: Started cri-containerd-03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08.scope - libcontainer container 03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08. 
Jul 11 00:22:29.693222 systemd-resolved[1343]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:22:29.778329 containerd[1471]: time="2025-07-11T00:22:29.778198871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6fbf9d5d8f-67gld,Uid:1285ec7c-afc4-4f44-b914-280d299b3f6e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08\"" Jul 11 00:22:29.785903 kubelet[2568]: E0711 00:22:29.785848 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:29.790643 kubelet[2568]: E0711 00:22:29.790590 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:29.875690 containerd[1471]: time="2025-07-11T00:22:29.874751954Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:29.879982 kubelet[2568]: I0711 00:22:29.879781 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-hsp4s" podStartSLOduration=109.879719365 podStartE2EDuration="1m49.879719365s" podCreationTimestamp="2025-07-11 00:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:22:29.861735725 +0000 UTC m=+115.501946263" watchObservedRunningTime="2025-07-11 00:22:29.879719365 +0000 UTC m=+115.519929893" Jul 11 00:22:29.880482 containerd[1471]: time="2025-07-11T00:22:29.880400607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 11 00:22:29.883439 containerd[1471]: time="2025-07-11T00:22:29.882967673Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:29.889613 containerd[1471]: time="2025-07-11T00:22:29.888974136Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 9.229311313s" Jul 11 00:22:29.889613 containerd[1471]: time="2025-07-11T00:22:29.889024693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 11 00:22:29.892227 containerd[1471]: time="2025-07-11T00:22:29.892147651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:29.892438 containerd[1471]: time="2025-07-11T00:22:29.892398443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 11 00:22:29.905130 containerd[1471]: time="2025-07-11T00:22:29.905035693Z" level=info msg="CreateContainer within sandbox \"64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86\" for container 
&ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 11 00:22:29.913670 kubelet[2568]: I0711 00:22:29.913586 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zhf8q" podStartSLOduration=109.913547643 podStartE2EDuration="1m49.913547643s" podCreationTimestamp="2025-07-11 00:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:22:29.88040734 +0000 UTC m=+115.520618359" watchObservedRunningTime="2025-07-11 00:22:29.913547643 +0000 UTC m=+115.553758171" Jul 11 00:22:29.951800 containerd[1471]: time="2025-07-11T00:22:29.951470170Z" level=info msg="CreateContainer within sandbox \"64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"9335346605c182834d9978c74a9e5a4e171cb2bf5b3420448f1f30830ce225e8\"" Jul 11 00:22:29.952747 containerd[1471]: time="2025-07-11T00:22:29.952697193Z" level=info msg="StartContainer for \"9335346605c182834d9978c74a9e5a4e171cb2bf5b3420448f1f30830ce225e8\"" Jul 11 00:22:29.999414 systemd[1]: Started cri-containerd-9335346605c182834d9978c74a9e5a4e171cb2bf5b3420448f1f30830ce225e8.scope - libcontainer container 9335346605c182834d9978c74a9e5a4e171cb2bf5b3420448f1f30830ce225e8. Jul 11 00:22:30.066847 containerd[1471]: time="2025-07-11T00:22:30.066637292Z" level=info msg="StartContainer for \"9335346605c182834d9978c74a9e5a4e171cb2bf5b3420448f1f30830ce225e8\" returns successfully" Jul 11 00:22:30.796038 kubelet[2568]: E0711 00:22:30.795679 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:30.796038 kubelet[2568]: E0711 00:22:30.795972 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:30.865928 systemd-networkd[1394]: cali58b052721bb: Gained IPv6LL Jul 11 00:22:30.929377 systemd-networkd[1394]: cali81190d58625: Gained IPv6LL Jul 11 00:22:30.993520 systemd-networkd[1394]: cali2091a8857c9: Gained IPv6LL Jul 11 00:22:31.388258 containerd[1471]: time="2025-07-11T00:22:31.388185901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:31.390264 containerd[1471]: time="2025-07-11T00:22:31.390167117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 11 00:22:31.392609 containerd[1471]: time="2025-07-11T00:22:31.392549503Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:31.399035 containerd[1471]: time="2025-07-11T00:22:31.398975596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:31.400035 containerd[1471]: time="2025-07-11T00:22:31.399972656Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.507523656s" Jul 11 00:22:31.400035 containerd[1471]: time="2025-07-11T00:22:31.400031429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 11 00:22:31.401597 containerd[1471]: time="2025-07-11T00:22:31.401558370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 11 00:22:31.408375 containerd[1471]: time="2025-07-11T00:22:31.408316702Z" level=info msg="CreateContainer within sandbox \"b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 11 00:22:31.441482 containerd[1471]: time="2025-07-11T00:22:31.441395302Z" level=info msg="CreateContainer within sandbox \"b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"2b594f75eb3e52db852fb5e685b4ad9bcfa78dba5bc39946fe5e7f710bda6ff1\"" Jul 11 00:22:31.442257 containerd[1471]: time="2025-07-11T00:22:31.442201324Z" level=info msg="StartContainer for \"2b594f75eb3e52db852fb5e685b4ad9bcfa78dba5bc39946fe5e7f710bda6ff1\"" Jul 11 00:22:31.480430 systemd[1]: Started cri-containerd-2b594f75eb3e52db852fb5e685b4ad9bcfa78dba5bc39946fe5e7f710bda6ff1.scope - libcontainer container 2b594f75eb3e52db852fb5e685b4ad9bcfa78dba5bc39946fe5e7f710bda6ff1. Jul 11 00:22:31.529825 containerd[1471]: time="2025-07-11T00:22:31.529751454Z" level=info msg="StartContainer for \"2b594f75eb3e52db852fb5e685b4ad9bcfa78dba5bc39946fe5e7f710bda6ff1\" returns successfully" Jul 11 00:22:31.800600 kubelet[2568]: E0711 00:22:31.800558 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:31.801129 kubelet[2568]: E0711 00:22:31.800837 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:31.839467 systemd[1]: Started sshd@13-10.0.0.89:22-10.0.0.1:38824.service - OpenSSH per-connection server daemon (10.0.0.1:38824). Jul 11 00:22:31.913934 kubelet[2568]: I0711 00:22:31.913844 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-nxg64" podStartSLOduration=56.682573222 podStartE2EDuration="1m5.91382245s" podCreationTimestamp="2025-07-11 00:21:26 +0000 UTC" firstStartedPulling="2025-07-11 00:22:20.659420265 +0000 UTC m=+106.299630793" lastFinishedPulling="2025-07-11 00:22:29.890669493 +0000 UTC m=+115.530880021" observedRunningTime="2025-07-11 00:22:30.894229551 +0000 UTC m=+116.534440079" watchObservedRunningTime="2025-07-11 00:22:31.91382245 +0000 UTC m=+117.554032978" Jul 11 00:22:31.931022 sshd[5345]: Accepted publickey for core from 10.0.0.1 port 38824 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:22:31.934341 sshd[5345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:22:31.941814 systemd-logind[1453]: New session 14 of user core. Jul 11 00:22:31.951295 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 11 00:22:32.118216 sshd[5345]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:32.124738 systemd-logind[1453]: Session 14 logged out. 
Waiting for processes to exit. Jul 11 00:22:32.125159 systemd[1]: sshd@13-10.0.0.89:22-10.0.0.1:38824.service: Deactivated successfully. Jul 11 00:22:32.127999 systemd[1]: session-14.scope: Deactivated successfully. Jul 11 00:22:32.129203 systemd-logind[1453]: Removed session 14. Jul 11 00:22:34.196300 containerd[1471]: time="2025-07-11T00:22:34.196206152Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:34.197766 containerd[1471]: time="2025-07-11T00:22:34.197676451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 11 00:22:34.199659 containerd[1471]: time="2025-07-11T00:22:34.199601754Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:34.203325 containerd[1471]: time="2025-07-11T00:22:34.203259911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:34.205087 containerd[1471]: time="2025-07-11T00:22:34.205017843Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.803406201s" Jul 11 00:22:34.205186 containerd[1471]: time="2025-07-11T00:22:34.205100602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 11 00:22:34.206978 containerd[1471]: time="2025-07-11T00:22:34.206716761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 00:22:34.215368 containerd[1471]: time="2025-07-11T00:22:34.215276506Z" level=info msg="CreateContainer within sandbox \"3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 11 00:22:34.363648 containerd[1471]: time="2025-07-11T00:22:34.363413770Z" level=info msg="CreateContainer within sandbox \"3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5a63669fedb78ae6dfe5c1a5f248d63026ceb7c0a322a984a56ef0805fdb9e09\"" Jul 11 00:22:34.364809 containerd[1471]: time="2025-07-11T00:22:34.364635891Z" level=info msg="StartContainer for \"5a63669fedb78ae6dfe5c1a5f248d63026ceb7c0a322a984a56ef0805fdb9e09\"" Jul 11 00:22:34.442967 systemd[1]: run-containerd-runc-k8s.io-5a63669fedb78ae6dfe5c1a5f248d63026ceb7c0a322a984a56ef0805fdb9e09-runc.jjLpXd.mount: Deactivated successfully. Jul 11 00:22:34.450289 systemd[1]: Started cri-containerd-5a63669fedb78ae6dfe5c1a5f248d63026ceb7c0a322a984a56ef0805fdb9e09.scope - libcontainer container 5a63669fedb78ae6dfe5c1a5f248d63026ceb7c0a322a984a56ef0805fdb9e09. 
Jul 11 00:22:34.787108 containerd[1471]: time="2025-07-11T00:22:34.786955556Z" level=info msg="StartContainer for \"5a63669fedb78ae6dfe5c1a5f248d63026ceb7c0a322a984a56ef0805fdb9e09\" returns successfully" Jul 11 00:22:34.788881 containerd[1471]: time="2025-07-11T00:22:34.788814432Z" level=info msg="StopPodSandbox for \"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\"" Jul 11 00:22:35.158227 containerd[1471]: 2025-07-11 00:22:35.024 [WARNING][5421] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7ddcd977db--n58hn-eth0", GenerateName:"calico-kube-controllers-7ddcd977db-", Namespace:"calico-system", SelfLink:"", UID:"86e94e6b-c0ad-463c-abf5-b6899adb9e4c", ResourceVersion:"1186", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7ddcd977db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce", Pod:"calico-kube-controllers-7ddcd977db-n58hn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2091a8857c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:35.158227 containerd[1471]: 2025-07-11 00:22:35.025 [INFO][5421] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" Jul 11 00:22:35.158227 containerd[1471]: 2025-07-11 00:22:35.025 [INFO][5421] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" iface="eth0" netns="" Jul 11 00:22:35.158227 containerd[1471]: 2025-07-11 00:22:35.025 [INFO][5421] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" Jul 11 00:22:35.158227 containerd[1471]: 2025-07-11 00:22:35.025 [INFO][5421] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" Jul 11 00:22:35.158227 containerd[1471]: 2025-07-11 00:22:35.048 [INFO][5432] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" HandleID="k8s-pod-network.6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" Workload="localhost-k8s-calico--kube--controllers--7ddcd977db--n58hn-eth0" Jul 11 00:22:35.158227 containerd[1471]: 2025-07-11 00:22:35.048 [INFO][5432] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:35.158227 containerd[1471]: 2025-07-11 00:22:35.048 [INFO][5432] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:35.158227 containerd[1471]: 2025-07-11 00:22:35.147 [WARNING][5432] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" HandleID="k8s-pod-network.6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" Workload="localhost-k8s-calico--kube--controllers--7ddcd977db--n58hn-eth0" Jul 11 00:22:35.158227 containerd[1471]: 2025-07-11 00:22:35.147 [INFO][5432] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" HandleID="k8s-pod-network.6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" Workload="localhost-k8s-calico--kube--controllers--7ddcd977db--n58hn-eth0" Jul 11 00:22:35.158227 containerd[1471]: 2025-07-11 00:22:35.150 [INFO][5432] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:35.158227 containerd[1471]: 2025-07-11 00:22:35.154 [INFO][5421] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" Jul 11 00:22:35.158227 containerd[1471]: time="2025-07-11T00:22:35.158062940Z" level=info msg="TearDown network for sandbox \"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\" successfully" Jul 11 00:22:35.158227 containerd[1471]: time="2025-07-11T00:22:35.158135269Z" level=info msg="StopPodSandbox for \"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\" returns successfully" Jul 11 00:22:35.159590 containerd[1471]: time="2025-07-11T00:22:35.159560941Z" level=info msg="RemovePodSandbox for \"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\"" Jul 11 00:22:35.165404 containerd[1471]: time="2025-07-11T00:22:35.165318583Z" level=info msg="Forcibly stopping sandbox \"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\"" Jul 11 00:22:35.274174 containerd[1471]: 2025-07-11 00:22:35.218 [WARNING][5450] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7ddcd977db--n58hn-eth0", GenerateName:"calico-kube-controllers-7ddcd977db-", Namespace:"calico-system", SelfLink:"", UID:"86e94e6b-c0ad-463c-abf5-b6899adb9e4c", ResourceVersion:"1186", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7ddcd977db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce", Pod:"calico-kube-controllers-7ddcd977db-n58hn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2091a8857c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:35.274174 containerd[1471]: 2025-07-11 00:22:35.218 [INFO][5450] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" Jul 11 00:22:35.274174 containerd[1471]: 2025-07-11 00:22:35.218 [INFO][5450] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" iface="eth0" netns="" Jul 11 00:22:35.274174 containerd[1471]: 2025-07-11 00:22:35.218 [INFO][5450] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" Jul 11 00:22:35.274174 containerd[1471]: 2025-07-11 00:22:35.218 [INFO][5450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" Jul 11 00:22:35.274174 containerd[1471]: 2025-07-11 00:22:35.255 [INFO][5459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" HandleID="k8s-pod-network.6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" Workload="localhost-k8s-calico--kube--controllers--7ddcd977db--n58hn-eth0" Jul 11 00:22:35.274174 containerd[1471]: 2025-07-11 00:22:35.255 [INFO][5459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:35.274174 containerd[1471]: 2025-07-11 00:22:35.255 [INFO][5459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:35.274174 containerd[1471]: 2025-07-11 00:22:35.263 [WARNING][5459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" HandleID="k8s-pod-network.6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" Workload="localhost-k8s-calico--kube--controllers--7ddcd977db--n58hn-eth0" Jul 11 00:22:35.274174 containerd[1471]: 2025-07-11 00:22:35.263 [INFO][5459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" HandleID="k8s-pod-network.6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" Workload="localhost-k8s-calico--kube--controllers--7ddcd977db--n58hn-eth0" Jul 11 00:22:35.274174 containerd[1471]: 2025-07-11 00:22:35.266 [INFO][5459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:35.274174 containerd[1471]: 2025-07-11 00:22:35.270 [INFO][5450] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680" Jul 11 00:22:35.276112 containerd[1471]: time="2025-07-11T00:22:35.275354302Z" level=info msg="TearDown network for sandbox \"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\" successfully" Jul 11 00:22:35.431124 containerd[1471]: time="2025-07-11T00:22:35.430837197Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:22:35.431124 containerd[1471]: time="2025-07-11T00:22:35.430950946Z" level=info msg="RemovePodSandbox \"6da8a16f734a9bb2bb4d8630d2d5659d5fb9b4c223d9f501763c4cfa0d201680\" returns successfully" Jul 11 00:22:35.432067 containerd[1471]: time="2025-07-11T00:22:35.431983912Z" level=info msg="StopPodSandbox for \"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\"" Jul 11 00:22:35.533429 containerd[1471]: 2025-07-11 00:22:35.483 [WARNING][5475] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" WorkloadEndpoint="localhost-k8s-whisker--6994867c74--8bhtg-eth0" Jul 11 00:22:35.533429 containerd[1471]: 2025-07-11 00:22:35.483 [INFO][5475] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" Jul 11 00:22:35.533429 containerd[1471]: 2025-07-11 00:22:35.483 [INFO][5475] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" iface="eth0" netns="" Jul 11 00:22:35.533429 containerd[1471]: 2025-07-11 00:22:35.483 [INFO][5475] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" Jul 11 00:22:35.533429 containerd[1471]: 2025-07-11 00:22:35.483 [INFO][5475] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" Jul 11 00:22:35.533429 containerd[1471]: 2025-07-11 00:22:35.512 [INFO][5483] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" HandleID="k8s-pod-network.0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" Workload="localhost-k8s-whisker--6994867c74--8bhtg-eth0" Jul 11 00:22:35.533429 containerd[1471]: 2025-07-11 00:22:35.512 [INFO][5483] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:35.533429 containerd[1471]: 2025-07-11 00:22:35.512 [INFO][5483] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:35.533429 containerd[1471]: 2025-07-11 00:22:35.522 [WARNING][5483] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" HandleID="k8s-pod-network.0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" Workload="localhost-k8s-whisker--6994867c74--8bhtg-eth0" Jul 11 00:22:35.533429 containerd[1471]: 2025-07-11 00:22:35.522 [INFO][5483] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" HandleID="k8s-pod-network.0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" Workload="localhost-k8s-whisker--6994867c74--8bhtg-eth0" Jul 11 00:22:35.533429 containerd[1471]: 2025-07-11 00:22:35.526 [INFO][5483] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:35.533429 containerd[1471]: 2025-07-11 00:22:35.529 [INFO][5475] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" Jul 11 00:22:35.535533 containerd[1471]: time="2025-07-11T00:22:35.533887854Z" level=info msg="TearDown network for sandbox \"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\" successfully" Jul 11 00:22:35.535533 containerd[1471]: time="2025-07-11T00:22:35.533926248Z" level=info msg="StopPodSandbox for \"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\" returns successfully" Jul 11 00:22:35.535533 containerd[1471]: time="2025-07-11T00:22:35.534667103Z" level=info msg="RemovePodSandbox for \"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\"" Jul 11 00:22:35.535533 containerd[1471]: time="2025-07-11T00:22:35.534724863Z" level=info msg="Forcibly stopping sandbox \"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\"" Jul 11 00:22:35.775880 containerd[1471]: 2025-07-11 00:22:35.582 [WARNING][5501] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" WorkloadEndpoint="localhost-k8s-whisker--6994867c74--8bhtg-eth0" Jul 11 00:22:35.775880 containerd[1471]: 2025-07-11 00:22:35.583 [INFO][5501] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" Jul 11 00:22:35.775880 containerd[1471]: 2025-07-11 00:22:35.583 [INFO][5501] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" iface="eth0" netns="" Jul 11 00:22:35.775880 containerd[1471]: 2025-07-11 00:22:35.583 [INFO][5501] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" Jul 11 00:22:35.775880 containerd[1471]: 2025-07-11 00:22:35.583 [INFO][5501] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" Jul 11 00:22:35.775880 containerd[1471]: 2025-07-11 00:22:35.761 [INFO][5510] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" HandleID="k8s-pod-network.0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" Workload="localhost-k8s-whisker--6994867c74--8bhtg-eth0" Jul 11 00:22:35.775880 containerd[1471]: 2025-07-11 00:22:35.761 [INFO][5510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:35.775880 containerd[1471]: 2025-07-11 00:22:35.761 [INFO][5510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:35.775880 containerd[1471]: 2025-07-11 00:22:35.767 [WARNING][5510] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" HandleID="k8s-pod-network.0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" Workload="localhost-k8s-whisker--6994867c74--8bhtg-eth0" Jul 11 00:22:35.775880 containerd[1471]: 2025-07-11 00:22:35.767 [INFO][5510] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" HandleID="k8s-pod-network.0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" Workload="localhost-k8s-whisker--6994867c74--8bhtg-eth0" Jul 11 00:22:35.775880 containerd[1471]: 2025-07-11 00:22:35.769 [INFO][5510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:35.775880 containerd[1471]: 2025-07-11 00:22:35.772 [INFO][5501] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9" Jul 11 00:22:35.776338 containerd[1471]: time="2025-07-11T00:22:35.775923259Z" level=info msg="TearDown network for sandbox \"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\" successfully" Jul 11 00:22:35.873992 containerd[1471]: time="2025-07-11T00:22:35.873753176Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:22:35.873992 containerd[1471]: time="2025-07-11T00:22:35.873861555Z" level=info msg="RemovePodSandbox \"0718f01ad1d050bc3b2ac6c49c594e096e590943a1d445b498a45597415c25e9\" returns successfully" Jul 11 00:22:35.874690 containerd[1471]: time="2025-07-11T00:22:35.874646675Z" level=info msg="StopPodSandbox for \"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\"" Jul 11 00:22:35.963577 containerd[1471]: 2025-07-11 00:22:35.919 [WARNING][5527] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--hsp4s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"85a64a17-b3e6-422f-9756-cb2f80a1643b", ResourceVersion:"1206", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 20, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1", Pod:"coredns-674b8bbfcf-hsp4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0aae2711af3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:35.963577 containerd[1471]: 2025-07-11 00:22:35.919 [INFO][5527] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" Jul 11 00:22:35.963577 containerd[1471]: 2025-07-11 00:22:35.919 [INFO][5527] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" iface="eth0" netns="" Jul 11 00:22:35.963577 containerd[1471]: 2025-07-11 00:22:35.919 [INFO][5527] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" Jul 11 00:22:35.963577 containerd[1471]: 2025-07-11 00:22:35.919 [INFO][5527] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" Jul 11 00:22:35.963577 containerd[1471]: 2025-07-11 00:22:35.947 [INFO][5535] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" HandleID="k8s-pod-network.93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" Workload="localhost-k8s-coredns--674b8bbfcf--hsp4s-eth0" Jul 11 00:22:35.963577 containerd[1471]: 2025-07-11 00:22:35.947 [INFO][5535] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:35.963577 containerd[1471]: 2025-07-11 00:22:35.947 [INFO][5535] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:22:35.963577 containerd[1471]: 2025-07-11 00:22:35.955 [WARNING][5535] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" HandleID="k8s-pod-network.93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" Workload="localhost-k8s-coredns--674b8bbfcf--hsp4s-eth0" Jul 11 00:22:35.963577 containerd[1471]: 2025-07-11 00:22:35.955 [INFO][5535] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" HandleID="k8s-pod-network.93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" Workload="localhost-k8s-coredns--674b8bbfcf--hsp4s-eth0" Jul 11 00:22:35.963577 containerd[1471]: 2025-07-11 00:22:35.957 [INFO][5535] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:35.963577 containerd[1471]: 2025-07-11 00:22:35.960 [INFO][5527] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" Jul 11 00:22:35.964197 containerd[1471]: time="2025-07-11T00:22:35.963632996Z" level=info msg="TearDown network for sandbox \"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\" successfully" Jul 11 00:22:35.964197 containerd[1471]: time="2025-07-11T00:22:35.963668284Z" level=info msg="StopPodSandbox for \"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\" returns successfully" Jul 11 00:22:35.964612 containerd[1471]: time="2025-07-11T00:22:35.964561081Z" level=info msg="RemovePodSandbox for \"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\"" Jul 11 00:22:35.964687 containerd[1471]: time="2025-07-11T00:22:35.964619774Z" level=info msg="Forcibly stopping sandbox \"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\"" Jul 11 00:22:36.051552 containerd[1471]: 2025-07-11 00:22:36.008 [WARNING][5553] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--hsp4s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"85a64a17-b3e6-422f-9756-cb2f80a1643b", ResourceVersion:"1206", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 20, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9fee53cbceeb301b49f1f13f21ca1050ea6b7d94f0a3cf30edbe40c8389081c1", Pod:"coredns-674b8bbfcf-hsp4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0aae2711af3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:36.051552 containerd[1471]: 2025-07-11 00:22:36.009 [INFO][5553] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" Jul 11 00:22:36.051552 containerd[1471]: 2025-07-11 00:22:36.009 [INFO][5553] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" iface="eth0" netns="" Jul 11 00:22:36.051552 containerd[1471]: 2025-07-11 00:22:36.009 [INFO][5553] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" Jul 11 00:22:36.051552 containerd[1471]: 2025-07-11 00:22:36.009 [INFO][5553] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" Jul 11 00:22:36.051552 containerd[1471]: 2025-07-11 00:22:36.031 [INFO][5563] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" HandleID="k8s-pod-network.93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" Workload="localhost-k8s-coredns--674b8bbfcf--hsp4s-eth0" Jul 11 00:22:36.051552 containerd[1471]: 2025-07-11 00:22:36.032 [INFO][5563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:36.051552 containerd[1471]: 2025-07-11 00:22:36.032 [INFO][5563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:22:36.051552 containerd[1471]: 2025-07-11 00:22:36.042 [WARNING][5563] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" HandleID="k8s-pod-network.93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" Workload="localhost-k8s-coredns--674b8bbfcf--hsp4s-eth0" Jul 11 00:22:36.051552 containerd[1471]: 2025-07-11 00:22:36.042 [INFO][5563] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" HandleID="k8s-pod-network.93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" Workload="localhost-k8s-coredns--674b8bbfcf--hsp4s-eth0" Jul 11 00:22:36.051552 containerd[1471]: 2025-07-11 00:22:36.044 [INFO][5563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:36.051552 containerd[1471]: 2025-07-11 00:22:36.047 [INFO][5553] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c" Jul 11 00:22:36.052683 containerd[1471]: time="2025-07-11T00:22:36.052619920Z" level=info msg="TearDown network for sandbox \"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\" successfully" Jul 11 00:22:36.067711 containerd[1471]: time="2025-07-11T00:22:36.067459417Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:22:36.067711 containerd[1471]: time="2025-07-11T00:22:36.067552597Z" level=info msg="RemovePodSandbox \"93a8e879bd95fb82e1e00757e83f0464164be325429dca4108664c300c88c43c\" returns successfully" Jul 11 00:22:36.068844 containerd[1471]: time="2025-07-11T00:22:36.068809242Z" level=info msg="StopPodSandbox for \"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\"" Jul 11 00:22:36.206798 containerd[1471]: 2025-07-11 00:22:36.155 [WARNING][5580] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fbf9d5d8f--67gld-eth0", GenerateName:"calico-apiserver-6fbf9d5d8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1285ec7c-afc4-4f44-b914-280d299b3f6e", ResourceVersion:"1191", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fbf9d5d8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08", Pod:"calico-apiserver-6fbf9d5d8f-67gld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58b052721bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:36.206798 containerd[1471]: 2025-07-11 00:22:36.155 [INFO][5580] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" Jul 11 00:22:36.206798 containerd[1471]: 2025-07-11 00:22:36.155 [INFO][5580] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" iface="eth0" netns="" Jul 11 00:22:36.206798 containerd[1471]: 2025-07-11 00:22:36.155 [INFO][5580] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" Jul 11 00:22:36.206798 containerd[1471]: 2025-07-11 00:22:36.155 [INFO][5580] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" Jul 11 00:22:36.206798 containerd[1471]: 2025-07-11 00:22:36.189 [INFO][5590] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" HandleID="k8s-pod-network.8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" Workload="localhost-k8s-calico--apiserver--6fbf9d5d8f--67gld-eth0" Jul 11 00:22:36.206798 containerd[1471]: 2025-07-11 00:22:36.189 [INFO][5590] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:36.206798 containerd[1471]: 2025-07-11 00:22:36.189 [INFO][5590] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:36.206798 containerd[1471]: 2025-07-11 00:22:36.196 [WARNING][5590] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" HandleID="k8s-pod-network.8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" Workload="localhost-k8s-calico--apiserver--6fbf9d5d8f--67gld-eth0" Jul 11 00:22:36.206798 containerd[1471]: 2025-07-11 00:22:36.196 [INFO][5590] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" HandleID="k8s-pod-network.8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" Workload="localhost-k8s-calico--apiserver--6fbf9d5d8f--67gld-eth0" Jul 11 00:22:36.206798 containerd[1471]: 2025-07-11 00:22:36.199 [INFO][5590] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:36.206798 containerd[1471]: 2025-07-11 00:22:36.203 [INFO][5580] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" Jul 11 00:22:36.207419 containerd[1471]: time="2025-07-11T00:22:36.206856080Z" level=info msg="TearDown network for sandbox \"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\" successfully" Jul 11 00:22:36.207419 containerd[1471]: time="2025-07-11T00:22:36.206889204Z" level=info msg="StopPodSandbox for \"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\" returns successfully" Jul 11 00:22:36.207651 containerd[1471]: time="2025-07-11T00:22:36.207612244Z" level=info msg="RemovePodSandbox for \"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\"" Jul 11 00:22:36.207651 containerd[1471]: time="2025-07-11T00:22:36.207650568Z" level=info msg="Forcibly stopping sandbox \"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\"" Jul 11 00:22:36.678693 containerd[1471]: 2025-07-11 00:22:36.297 [WARNING][5607] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fbf9d5d8f--67gld-eth0", GenerateName:"calico-apiserver-6fbf9d5d8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"1285ec7c-afc4-4f44-b914-280d299b3f6e", ResourceVersion:"1191", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fbf9d5d8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08", Pod:"calico-apiserver-6fbf9d5d8f-67gld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58b052721bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:36.678693 containerd[1471]: 2025-07-11 00:22:36.297 [INFO][5607] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" Jul 11 00:22:36.678693 containerd[1471]: 2025-07-11 00:22:36.297 [INFO][5607] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" iface="eth0" netns="" Jul 11 00:22:36.678693 containerd[1471]: 2025-07-11 00:22:36.297 [INFO][5607] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" Jul 11 00:22:36.678693 containerd[1471]: 2025-07-11 00:22:36.297 [INFO][5607] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" Jul 11 00:22:36.678693 containerd[1471]: 2025-07-11 00:22:36.319 [INFO][5616] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" HandleID="k8s-pod-network.8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" Workload="localhost-k8s-calico--apiserver--6fbf9d5d8f--67gld-eth0" Jul 11 00:22:36.678693 containerd[1471]: 2025-07-11 00:22:36.320 [INFO][5616] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:36.678693 containerd[1471]: 2025-07-11 00:22:36.320 [INFO][5616] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:36.678693 containerd[1471]: 2025-07-11 00:22:36.669 [WARNING][5616] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" HandleID="k8s-pod-network.8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" Workload="localhost-k8s-calico--apiserver--6fbf9d5d8f--67gld-eth0" Jul 11 00:22:36.678693 containerd[1471]: 2025-07-11 00:22:36.669 [INFO][5616] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" HandleID="k8s-pod-network.8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" Workload="localhost-k8s-calico--apiserver--6fbf9d5d8f--67gld-eth0" Jul 11 00:22:36.678693 containerd[1471]: 2025-07-11 00:22:36.672 [INFO][5616] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:36.678693 containerd[1471]: 2025-07-11 00:22:36.675 [INFO][5607] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba" Jul 11 00:22:36.759340 containerd[1471]: time="2025-07-11T00:22:36.678756133Z" level=info msg="TearDown network for sandbox \"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\" successfully" Jul 11 00:22:37.112277 containerd[1471]: time="2025-07-11T00:22:37.112183335Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:22:37.112277 containerd[1471]: time="2025-07-11T00:22:37.112311762Z" level=info msg="RemovePodSandbox \"8899218296e64f3841a3381b5d1e228201a3b39cd1afde227cd7c891959700ba\" returns successfully" Jul 11 00:22:37.113322 containerd[1471]: time="2025-07-11T00:22:37.113062023Z" level=info msg="StopPodSandbox for \"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\"" Jul 11 00:22:37.139149 systemd[1]: Started sshd@14-10.0.0.89:22-10.0.0.1:56502.service - OpenSSH per-connection server daemon (10.0.0.1:56502). Jul 11 00:22:37.188863 sshd[5650]: Accepted publickey for core from 10.0.0.1 port 56502 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:22:37.191706 sshd[5650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:22:37.197318 systemd-logind[1453]: New session 15 of user core. Jul 11 00:22:37.202984 containerd[1471]: 2025-07-11 00:22:37.161 [WARNING][5645] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4fb87ad9-16fb-494a-87eb-605af4502d26", ResourceVersion:"1202", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 20, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0", Pod:"coredns-674b8bbfcf-zhf8q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia53a1315ac0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:37.202984 containerd[1471]: 2025-07-11 00:22:37.161 [INFO][5645] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" Jul 11 00:22:37.202984 containerd[1471]: 2025-07-11 00:22:37.161 [INFO][5645] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" iface="eth0" netns="" Jul 11 00:22:37.202984 containerd[1471]: 2025-07-11 00:22:37.161 [INFO][5645] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" Jul 11 00:22:37.202984 containerd[1471]: 2025-07-11 00:22:37.161 [INFO][5645] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" Jul 11 00:22:37.202984 containerd[1471]: 2025-07-11 00:22:37.183 [INFO][5655] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" HandleID="k8s-pod-network.55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" Workload="localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0" Jul 11 00:22:37.202984 containerd[1471]: 2025-07-11 00:22:37.183 [INFO][5655] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:37.202984 containerd[1471]: 2025-07-11 00:22:37.183 [INFO][5655] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:22:37.202984 containerd[1471]: 2025-07-11 00:22:37.191 [WARNING][5655] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" HandleID="k8s-pod-network.55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" Workload="localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0" Jul 11 00:22:37.202984 containerd[1471]: 2025-07-11 00:22:37.191 [INFO][5655] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" HandleID="k8s-pod-network.55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" Workload="localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0" Jul 11 00:22:37.202984 containerd[1471]: 2025-07-11 00:22:37.192 [INFO][5655] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:37.202984 containerd[1471]: 2025-07-11 00:22:37.200 [INFO][5645] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" Jul 11 00:22:37.203535 containerd[1471]: time="2025-07-11T00:22:37.203023452Z" level=info msg="TearDown network for sandbox \"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\" successfully" Jul 11 00:22:37.203535 containerd[1471]: time="2025-07-11T00:22:37.203049812Z" level=info msg="StopPodSandbox for \"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\" returns successfully" Jul 11 00:22:37.203659 containerd[1471]: time="2025-07-11T00:22:37.203628725Z" level=info msg="RemovePodSandbox for \"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\"" Jul 11 00:22:37.203659 containerd[1471]: time="2025-07-11T00:22:37.203656859Z" level=info msg="Forcibly stopping sandbox \"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\"" Jul 11 00:22:37.204322 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 11 00:22:37.298646 containerd[1471]: 2025-07-11 00:22:37.247 [WARNING][5673] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4fb87ad9-16fb-494a-87eb-605af4502d26", ResourceVersion:"1202", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 20, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"badc5fd1b2492085cf9cea6a2c1bfc70056f0c74a876f8a14a4cf04b9268ebe0", Pod:"coredns-674b8bbfcf-zhf8q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia53a1315ac0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:37.298646 containerd[1471]: 2025-07-11 00:22:37.248 [INFO][5673] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" Jul 11 00:22:37.298646 containerd[1471]: 2025-07-11 00:22:37.248 [INFO][5673] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" iface="eth0" netns="" Jul 11 00:22:37.298646 containerd[1471]: 2025-07-11 00:22:37.248 [INFO][5673] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" Jul 11 00:22:37.298646 containerd[1471]: 2025-07-11 00:22:37.248 [INFO][5673] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" Jul 11 00:22:37.298646 containerd[1471]: 2025-07-11 00:22:37.279 [INFO][5682] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" HandleID="k8s-pod-network.55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" Workload="localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0" Jul 11 00:22:37.298646 containerd[1471]: 2025-07-11 00:22:37.279 [INFO][5682] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:37.298646 containerd[1471]: 2025-07-11 00:22:37.279 [INFO][5682] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:22:37.298646 containerd[1471]: 2025-07-11 00:22:37.286 [WARNING][5682] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" HandleID="k8s-pod-network.55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" Workload="localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0" Jul 11 00:22:37.298646 containerd[1471]: 2025-07-11 00:22:37.286 [INFO][5682] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" HandleID="k8s-pod-network.55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" Workload="localhost-k8s-coredns--674b8bbfcf--zhf8q-eth0" Jul 11 00:22:37.298646 containerd[1471]: 2025-07-11 00:22:37.288 [INFO][5682] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:37.298646 containerd[1471]: 2025-07-11 00:22:37.293 [INFO][5673] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1" Jul 11 00:22:37.298646 containerd[1471]: time="2025-07-11T00:22:37.298479380Z" level=info msg="TearDown network for sandbox \"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\" successfully" Jul 11 00:22:37.486201 containerd[1471]: time="2025-07-11T00:22:37.486128844Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:22:37.486572 containerd[1471]: time="2025-07-11T00:22:37.486225280Z" level=info msg="RemovePodSandbox \"55a514fc6463adc269a5f43f395c95275a7fb63ca90693e7768fc4f5ff2b4de1\" returns successfully" Jul 11 00:22:37.487889 containerd[1471]: time="2025-07-11T00:22:37.487199031Z" level=info msg="StopPodSandbox for \"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\"" Jul 11 00:22:37.593267 sshd[5650]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:37.606205 systemd[1]: sshd@14-10.0.0.89:22-10.0.0.1:56502.service: Deactivated successfully. Jul 11 00:22:37.610019 systemd[1]: session-15.scope: Deactivated successfully. Jul 11 00:22:37.617603 systemd-logind[1453]: Session 15 logged out. Waiting for processes to exit. Jul 11 00:22:37.618928 containerd[1471]: 2025-07-11 00:22:37.546 [WARNING][5711] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fbf9d5d8f--zclpr-eth0", GenerateName:"calico-apiserver-6fbf9d5d8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"11afdd52-9586-41ad-b277-069a0e6d90ba", ResourceVersion:"1183", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fbf9d5d8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086", Pod:"calico-apiserver-6fbf9d5d8f-zclpr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali81190d58625", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:37.618928 containerd[1471]: 2025-07-11 00:22:37.548 [INFO][5711] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" Jul 11 00:22:37.618928 containerd[1471]: 2025-07-11 00:22:37.548 [INFO][5711] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" iface="eth0" netns="" Jul 11 00:22:37.618928 containerd[1471]: 2025-07-11 00:22:37.548 [INFO][5711] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" Jul 11 00:22:37.618928 containerd[1471]: 2025-07-11 00:22:37.548 [INFO][5711] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" Jul 11 00:22:37.618928 containerd[1471]: 2025-07-11 00:22:37.595 [INFO][5720] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" HandleID="k8s-pod-network.2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" Workload="localhost-k8s-calico--apiserver--6fbf9d5d8f--zclpr-eth0" Jul 11 00:22:37.618928 containerd[1471]: 2025-07-11 00:22:37.595 [INFO][5720] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:37.618928 containerd[1471]: 2025-07-11 00:22:37.596 [INFO][5720] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:37.618928 containerd[1471]: 2025-07-11 00:22:37.603 [WARNING][5720] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" HandleID="k8s-pod-network.2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" Workload="localhost-k8s-calico--apiserver--6fbf9d5d8f--zclpr-eth0" Jul 11 00:22:37.618928 containerd[1471]: 2025-07-11 00:22:37.603 [INFO][5720] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" HandleID="k8s-pod-network.2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" Workload="localhost-k8s-calico--apiserver--6fbf9d5d8f--zclpr-eth0" Jul 11 00:22:37.618928 containerd[1471]: 2025-07-11 00:22:37.607 [INFO][5720] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:37.618928 containerd[1471]: 2025-07-11 00:22:37.614 [INFO][5711] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" Jul 11 00:22:37.619580 containerd[1471]: time="2025-07-11T00:22:37.618976786Z" level=info msg="TearDown network for sandbox \"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\" successfully" Jul 11 00:22:37.619580 containerd[1471]: time="2025-07-11T00:22:37.619004539Z" level=info msg="StopPodSandbox for \"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\" returns successfully" Jul 11 00:22:37.620184 containerd[1471]: time="2025-07-11T00:22:37.620155722Z" level=info msg="RemovePodSandbox for \"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\"" Jul 11 00:22:37.620242 containerd[1471]: time="2025-07-11T00:22:37.620187764Z" level=info msg="Forcibly stopping sandbox \"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\"" Jul 11 00:22:37.621495 systemd[1]: Started sshd@15-10.0.0.89:22-10.0.0.1:56516.service - OpenSSH per-connection server daemon (10.0.0.1:56516). Jul 11 00:22:37.622814 systemd-logind[1453]: Removed session 15. Jul 11 00:22:37.672009 sshd[5731]: Accepted publickey for core from 10.0.0.1 port 56516 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:22:37.674806 sshd[5731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:22:37.679488 systemd-logind[1453]: New session 16 of user core. Jul 11 00:22:37.684225 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 11 00:22:37.962138 containerd[1471]: 2025-07-11 00:22:37.918 [WARNING][5742] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6fbf9d5d8f--zclpr-eth0", GenerateName:"calico-apiserver-6fbf9d5d8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"11afdd52-9586-41ad-b277-069a0e6d90ba", ResourceVersion:"1183", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6fbf9d5d8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086", Pod:"calico-apiserver-6fbf9d5d8f-zclpr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali81190d58625", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:37.962138 containerd[1471]: 2025-07-11 00:22:37.918 [INFO][5742] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" Jul 11 00:22:37.962138 containerd[1471]: 2025-07-11 00:22:37.918 [INFO][5742] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" iface="eth0" netns="" Jul 11 00:22:37.962138 containerd[1471]: 2025-07-11 00:22:37.918 [INFO][5742] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" Jul 11 00:22:37.962138 containerd[1471]: 2025-07-11 00:22:37.918 [INFO][5742] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" Jul 11 00:22:37.962138 containerd[1471]: 2025-07-11 00:22:37.947 [INFO][5759] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" HandleID="k8s-pod-network.2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" Workload="localhost-k8s-calico--apiserver--6fbf9d5d8f--zclpr-eth0" Jul 11 00:22:37.962138 containerd[1471]: 2025-07-11 00:22:37.947 [INFO][5759] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:37.962138 containerd[1471]: 2025-07-11 00:22:37.947 [INFO][5759] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:37.962138 containerd[1471]: 2025-07-11 00:22:37.954 [WARNING][5759] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" HandleID="k8s-pod-network.2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" Workload="localhost-k8s-calico--apiserver--6fbf9d5d8f--zclpr-eth0" Jul 11 00:22:37.962138 containerd[1471]: 2025-07-11 00:22:37.954 [INFO][5759] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" HandleID="k8s-pod-network.2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" Workload="localhost-k8s-calico--apiserver--6fbf9d5d8f--zclpr-eth0" Jul 11 00:22:37.962138 containerd[1471]: 2025-07-11 00:22:37.955 [INFO][5759] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:37.962138 containerd[1471]: 2025-07-11 00:22:37.959 [INFO][5742] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df" Jul 11 00:22:37.962942 containerd[1471]: time="2025-07-11T00:22:37.962194920Z" level=info msg="TearDown network for sandbox \"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\" successfully" Jul 11 00:22:38.128704 containerd[1471]: time="2025-07-11T00:22:38.128565022Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:22:38.128704 containerd[1471]: time="2025-07-11T00:22:38.128679512Z" level=info msg="RemovePodSandbox \"2982cf476edc513d8828a73d494e48c72eac21723b026bcb5975f29b5f7c92df\" returns successfully" Jul 11 00:22:38.129365 containerd[1471]: time="2025-07-11T00:22:38.129337396Z" level=info msg="StopPodSandbox for \"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\"" Jul 11 00:22:38.178807 containerd[1471]: time="2025-07-11T00:22:38.178718215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:38.182999 containerd[1471]: time="2025-07-11T00:22:38.182908617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 11 00:22:38.185177 containerd[1471]: time="2025-07-11T00:22:38.185140204Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:38.192346 containerd[1471]: time="2025-07-11T00:22:38.192133131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:38.193906 containerd[1471]: time="2025-07-11T00:22:38.193787379Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.98702003s" Jul 11 00:22:38.193906 containerd[1471]: time="2025-07-11T00:22:38.193849748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference 
\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 11 00:22:38.195652 sshd[5731]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:38.203115 containerd[1471]: time="2025-07-11T00:22:38.201808681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 11 00:22:38.219204 systemd[1]: Started sshd@16-10.0.0.89:22-10.0.0.1:56518.service - OpenSSH per-connection server daemon (10.0.0.1:56518). Jul 11 00:22:38.220849 systemd[1]: sshd@15-10.0.0.89:22-10.0.0.1:56516.service: Deactivated successfully. Jul 11 00:22:38.226053 containerd[1471]: time="2025-07-11T00:22:38.225990885Z" level=info msg="CreateContainer within sandbox \"e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:22:38.227850 systemd[1]: session-16.scope: Deactivated successfully. Jul 11 00:22:38.235749 systemd-logind[1453]: Session 16 logged out. Waiting for processes to exit. Jul 11 00:22:38.239786 systemd-logind[1453]: Removed session 16. Jul 11 00:22:38.259871 containerd[1471]: time="2025-07-11T00:22:38.259781209Z" level=info msg="CreateContainer within sandbox \"e7d3d6d17ffd614204197e440af77409b5cc7b3422eb6a77cc9148e1261c7086\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c153fcd8ea3b323c936e9159f3bae1e5c710c8289edbb831b09eb44890fdfa10\"" Jul 11 00:22:38.262804 containerd[1471]: time="2025-07-11T00:22:38.261321819Z" level=info msg="StartContainer for \"c153fcd8ea3b323c936e9159f3bae1e5c710c8289edbb831b09eb44890fdfa10\"" Jul 11 00:22:38.281387 sshd[5796]: Accepted publickey for core from 10.0.0.1 port 56518 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:22:38.288348 sshd[5796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:22:38.293154 containerd[1471]: 2025-07-11 00:22:38.190 [WARNING][5777] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--nxg64-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b897360e-69c8-4b60-abf3-671418db329a", ResourceVersion:"1232", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86", Pod:"goldmane-768f4c5c69-nxg64", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0d7854ba5a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:38.293154 containerd[1471]: 2025-07-11 00:22:38.191 [INFO][5777] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" Jul 11 00:22:38.293154 containerd[1471]: 2025-07-11 00:22:38.191 [INFO][5777] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" iface="eth0" netns="" Jul 11 00:22:38.293154 containerd[1471]: 2025-07-11 00:22:38.191 [INFO][5777] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" Jul 11 00:22:38.293154 containerd[1471]: 2025-07-11 00:22:38.191 [INFO][5777] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" Jul 11 00:22:38.293154 containerd[1471]: 2025-07-11 00:22:38.257 [INFO][5791] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" HandleID="k8s-pod-network.9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" Workload="localhost-k8s-goldmane--768f4c5c69--nxg64-eth0" Jul 11 00:22:38.293154 containerd[1471]: 2025-07-11 00:22:38.257 [INFO][5791] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:38.293154 containerd[1471]: 2025-07-11 00:22:38.258 [INFO][5791] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:38.293154 containerd[1471]: 2025-07-11 00:22:38.268 [WARNING][5791] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" HandleID="k8s-pod-network.9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" Workload="localhost-k8s-goldmane--768f4c5c69--nxg64-eth0" Jul 11 00:22:38.293154 containerd[1471]: 2025-07-11 00:22:38.268 [INFO][5791] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" HandleID="k8s-pod-network.9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" Workload="localhost-k8s-goldmane--768f4c5c69--nxg64-eth0" Jul 11 00:22:38.293154 containerd[1471]: 2025-07-11 00:22:38.271 [INFO][5791] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:38.293154 containerd[1471]: 2025-07-11 00:22:38.278 [INFO][5777] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" Jul 11 00:22:38.293699 containerd[1471]: time="2025-07-11T00:22:38.293225428Z" level=info msg="TearDown network for sandbox \"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\" successfully" Jul 11 00:22:38.293699 containerd[1471]: time="2025-07-11T00:22:38.293264543Z" level=info msg="StopPodSandbox for \"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\" returns successfully" Jul 11 00:22:38.294462 containerd[1471]: time="2025-07-11T00:22:38.294420995Z" level=info msg="RemovePodSandbox for \"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\"" Jul 11 00:22:38.294517 containerd[1471]: time="2025-07-11T00:22:38.294466964Z" level=info msg="Forcibly stopping sandbox \"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\"" Jul 11 00:22:38.305348 systemd[1]: Started cri-containerd-c153fcd8ea3b323c936e9159f3bae1e5c710c8289edbb831b09eb44890fdfa10.scope - libcontainer container c153fcd8ea3b323c936e9159f3bae1e5c710c8289edbb831b09eb44890fdfa10. Jul 11 00:22:38.308023 systemd-logind[1453]: New session 17 of user core. Jul 11 00:22:38.310616 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 11 00:22:38.367804 containerd[1471]: time="2025-07-11T00:22:38.367742063Z" level=info msg="StartContainer for \"c153fcd8ea3b323c936e9159f3bae1e5c710c8289edbb831b09eb44890fdfa10\" returns successfully" Jul 11 00:22:38.405766 containerd[1471]: 2025-07-11 00:22:38.344 [WARNING][5829] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--nxg64-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b897360e-69c8-4b60-abf3-671418db329a", ResourceVersion:"1232", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"64f78c83efd0607cb560f095ecf7fa864335551168e016909291b691efb78a86", Pod:"goldmane-768f4c5c69-nxg64", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0d7854ba5a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:38.405766 containerd[1471]: 2025-07-11 00:22:38.344 [INFO][5829] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" Jul 11 00:22:38.405766 containerd[1471]: 2025-07-11 00:22:38.344 [INFO][5829] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" iface="eth0" netns="" Jul 11 00:22:38.405766 containerd[1471]: 2025-07-11 00:22:38.344 [INFO][5829] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" Jul 11 00:22:38.405766 containerd[1471]: 2025-07-11 00:22:38.344 [INFO][5829] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" Jul 11 00:22:38.405766 containerd[1471]: 2025-07-11 00:22:38.377 [INFO][5845] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" HandleID="k8s-pod-network.9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" Workload="localhost-k8s-goldmane--768f4c5c69--nxg64-eth0" Jul 11 00:22:38.405766 containerd[1471]: 2025-07-11 00:22:38.378 [INFO][5845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:38.405766 containerd[1471]: 2025-07-11 00:22:38.378 [INFO][5845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:38.405766 containerd[1471]: 2025-07-11 00:22:38.393 [WARNING][5845] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" HandleID="k8s-pod-network.9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" Workload="localhost-k8s-goldmane--768f4c5c69--nxg64-eth0" Jul 11 00:22:38.405766 containerd[1471]: 2025-07-11 00:22:38.393 [INFO][5845] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" HandleID="k8s-pod-network.9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" Workload="localhost-k8s-goldmane--768f4c5c69--nxg64-eth0" Jul 11 00:22:38.405766 containerd[1471]: 2025-07-11 00:22:38.396 [INFO][5845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:38.405766 containerd[1471]: 2025-07-11 00:22:38.400 [INFO][5829] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733" Jul 11 00:22:38.405766 containerd[1471]: time="2025-07-11T00:22:38.404348478Z" level=info msg="TearDown network for sandbox \"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\" successfully" Jul 11 00:22:38.412070 containerd[1471]: time="2025-07-11T00:22:38.412027222Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:22:38.412311 containerd[1471]: time="2025-07-11T00:22:38.412254979Z" level=info msg="RemovePodSandbox \"9b899e2d7466194a7df19cdb2f969282ac0436e1b8a5127afd729de5c035e733\" returns successfully" Jul 11 00:22:38.413029 containerd[1471]: time="2025-07-11T00:22:38.412975093Z" level=info msg="StopPodSandbox for \"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\"" Jul 11 00:22:38.513677 sshd[5796]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:38.518999 systemd[1]: sshd@16-10.0.0.89:22-10.0.0.1:56518.service: Deactivated successfully. Jul 11 00:22:38.521956 systemd[1]: session-17.scope: Deactivated successfully. Jul 11 00:22:38.525066 systemd-logind[1453]: Session 17 logged out. Waiting for processes to exit. Jul 11 00:22:38.526594 containerd[1471]: 2025-07-11 00:22:38.469 [WARNING][5884] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--24rnq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ef0f4240-50c1-431a-b911-54802b65a3ca", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3", Pod:"csi-node-driver-24rnq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5080cd91dc2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:38.526594 containerd[1471]: 2025-07-11 00:22:38.469 [INFO][5884] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" Jul 11 00:22:38.526594 containerd[1471]: 2025-07-11 00:22:38.469 [INFO][5884] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" iface="eth0" netns="" Jul 11 00:22:38.526594 containerd[1471]: 2025-07-11 00:22:38.470 [INFO][5884] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" Jul 11 00:22:38.526594 containerd[1471]: 2025-07-11 00:22:38.470 [INFO][5884] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" Jul 11 00:22:38.526594 containerd[1471]: 2025-07-11 00:22:38.507 [INFO][5895] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" HandleID="k8s-pod-network.00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" Workload="localhost-k8s-csi--node--driver--24rnq-eth0" Jul 11 00:22:38.526594 containerd[1471]: 2025-07-11 00:22:38.508 [INFO][5895] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:38.526594 containerd[1471]: 2025-07-11 00:22:38.508 [INFO][5895] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:38.526594 containerd[1471]: 2025-07-11 00:22:38.515 [WARNING][5895] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" HandleID="k8s-pod-network.00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" Workload="localhost-k8s-csi--node--driver--24rnq-eth0" Jul 11 00:22:38.526594 containerd[1471]: 2025-07-11 00:22:38.515 [INFO][5895] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" HandleID="k8s-pod-network.00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" Workload="localhost-k8s-csi--node--driver--24rnq-eth0" Jul 11 00:22:38.526594 containerd[1471]: 2025-07-11 00:22:38.518 [INFO][5895] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:38.526594 containerd[1471]: 2025-07-11 00:22:38.522 [INFO][5884] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" Jul 11 00:22:38.526989 containerd[1471]: time="2025-07-11T00:22:38.526663973Z" level=info msg="TearDown network for sandbox \"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\" successfully" Jul 11 00:22:38.526989 containerd[1471]: time="2025-07-11T00:22:38.526716946Z" level=info msg="StopPodSandbox for \"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\" returns successfully" Jul 11 00:22:38.527393 containerd[1471]: time="2025-07-11T00:22:38.527363918Z" level=info msg="RemovePodSandbox for \"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\"" Jul 11 00:22:38.527431 containerd[1471]: time="2025-07-11T00:22:38.527405418Z" level=info msg="Forcibly stopping sandbox \"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\"" Jul 11 00:22:38.527829 systemd-logind[1453]: Removed session 17. Jul 11 00:22:38.611630 containerd[1471]: 2025-07-11 00:22:38.574 [WARNING][5915] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--24rnq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ef0f4240-50c1-431a-b911-54802b65a3ca", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 21, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3", Pod:"csi-node-driver-24rnq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5080cd91dc2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:22:38.611630 containerd[1471]: 2025-07-11 00:22:38.574 [INFO][5915] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" Jul 11 00:22:38.611630 containerd[1471]: 2025-07-11 00:22:38.574 [INFO][5915] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" iface="eth0" netns="" Jul 11 00:22:38.611630 containerd[1471]: 2025-07-11 00:22:38.574 [INFO][5915] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" Jul 11 00:22:38.611630 containerd[1471]: 2025-07-11 00:22:38.574 [INFO][5915] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" Jul 11 00:22:38.611630 containerd[1471]: 2025-07-11 00:22:38.596 [INFO][5924] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" HandleID="k8s-pod-network.00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" Workload="localhost-k8s-csi--node--driver--24rnq-eth0" Jul 11 00:22:38.611630 containerd[1471]: 2025-07-11 00:22:38.596 [INFO][5924] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:22:38.611630 containerd[1471]: 2025-07-11 00:22:38.596 [INFO][5924] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:22:38.611630 containerd[1471]: 2025-07-11 00:22:38.603 [WARNING][5924] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" HandleID="k8s-pod-network.00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" Workload="localhost-k8s-csi--node--driver--24rnq-eth0" Jul 11 00:22:38.611630 containerd[1471]: 2025-07-11 00:22:38.603 [INFO][5924] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" HandleID="k8s-pod-network.00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" Workload="localhost-k8s-csi--node--driver--24rnq-eth0" Jul 11 00:22:38.611630 containerd[1471]: 2025-07-11 00:22:38.605 [INFO][5924] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:22:38.611630 containerd[1471]: 2025-07-11 00:22:38.608 [INFO][5915] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d" Jul 11 00:22:38.612588 containerd[1471]: time="2025-07-11T00:22:38.611685042Z" level=info msg="TearDown network for sandbox \"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\" successfully" Jul 11 00:22:38.616153 containerd[1471]: time="2025-07-11T00:22:38.616120234Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:22:38.616234 containerd[1471]: time="2025-07-11T00:22:38.616194757Z" level=info msg="RemovePodSandbox \"00b144570b51542ae737c49832add910c6709e8d6bec220042ca8907a3f2204d\" returns successfully" Jul 11 00:22:39.599949 kubelet[2568]: I0711 00:22:39.599251 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6fbf9d5d8f-zclpr" podStartSLOduration=77.853053207 podStartE2EDuration="1m26.59922218s" podCreationTimestamp="2025-07-11 00:21:13 +0000 UTC" firstStartedPulling="2025-07-11 00:22:29.454515547 +0000 UTC m=+115.094726086" lastFinishedPulling="2025-07-11 00:22:38.200684531 +0000 UTC m=+123.840895059" observedRunningTime="2025-07-11 00:22:38.920084547 +0000 UTC m=+124.560295075" watchObservedRunningTime="2025-07-11 00:22:39.59922218 +0000 UTC m=+125.239432708" Jul 11 00:22:42.551265 containerd[1471]: time="2025-07-11T00:22:42.550541686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:42.552679 containerd[1471]: time="2025-07-11T00:22:42.552633241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 11 00:22:42.561408 containerd[1471]: time="2025-07-11T00:22:42.561308031Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:42.562062 containerd[1471]: time="2025-07-11T00:22:42.561995762Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 4.360142776s" Jul 11 00:22:42.562062 containerd[1471]: 
time="2025-07-11T00:22:42.562045226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 11 00:22:42.562973 containerd[1471]: time="2025-07-11T00:22:42.562927680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:42.563982 containerd[1471]: time="2025-07-11T00:22:42.563882323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 00:22:42.596821 containerd[1471]: time="2025-07-11T00:22:42.596769878Z" level=info msg="CreateContainer within sandbox \"7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 11 00:22:43.254131 containerd[1471]: time="2025-07-11T00:22:43.254027183Z" level=info msg="CreateContainer within sandbox \"7d55a4f5e7c91ca2ee324b5fc6805dca2b92d93d3a738a904f5cac44b77444ce\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5df444587282bc10c6f9b4ee6b19c3c2d2b70d57e1155332c80ee0be457c34c9\"" Jul 11 00:22:43.255381 containerd[1471]: time="2025-07-11T00:22:43.254944873Z" level=info msg="StartContainer for \"5df444587282bc10c6f9b4ee6b19c3c2d2b70d57e1155332c80ee0be457c34c9\"" Jul 11 00:22:43.292372 containerd[1471]: time="2025-07-11T00:22:43.292291262Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:43.293749 containerd[1471]: time="2025-07-11T00:22:43.293650380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 11 00:22:43.313696 containerd[1471]: time="2025-07-11T00:22:43.313615914Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 749.693875ms" Jul 11 00:22:43.313696 containerd[1471]: time="2025-07-11T00:22:43.313675358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 11 00:22:43.317420 containerd[1471]: time="2025-07-11T00:22:43.317353967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 11 00:22:43.326784 containerd[1471]: time="2025-07-11T00:22:43.325973709Z" level=info msg="CreateContainer within sandbox \"03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:22:43.354262 containerd[1471]: time="2025-07-11T00:22:43.354206127Z" level=info msg="CreateContainer within sandbox \"03e828f5965b42ffd215aea310a49e6471bb391612302c468723212979446e08\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"49e81048aaa997ea2fb6ca9f1c43b5a5096be2b4d68823b6767b3848e3671e50\"" Jul 11 00:22:43.356003 containerd[1471]: time="2025-07-11T00:22:43.354955856Z" level=info msg="StartContainer for \"49e81048aaa997ea2fb6ca9f1c43b5a5096be2b4d68823b6767b3848e3671e50\""
Jul 11 00:22:43.359623 systemd[1]: Started cri-containerd-5df444587282bc10c6f9b4ee6b19c3c2d2b70d57e1155332c80ee0be457c34c9.scope - libcontainer container 5df444587282bc10c6f9b4ee6b19c3c2d2b70d57e1155332c80ee0be457c34c9. Jul 11 00:22:43.429530 systemd[1]: Started cri-containerd-49e81048aaa997ea2fb6ca9f1c43b5a5096be2b4d68823b6767b3848e3671e50.scope - libcontainer container 49e81048aaa997ea2fb6ca9f1c43b5a5096be2b4d68823b6767b3848e3671e50. Jul 11 00:22:43.532496 systemd[1]: Started sshd@17-10.0.0.89:22-10.0.0.1:56532.service - OpenSSH per-connection server daemon (10.0.0.1:56532). Jul 11 00:22:44.257100 sshd[6029]: Accepted publickey for core from 10.0.0.1 port 56532 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:22:44.258230 sshd[6029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:22:44.265242 systemd-logind[1453]: New session 18 of user core. Jul 11 00:22:44.273292 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 11 00:22:44.281353 containerd[1471]: time="2025-07-11T00:22:44.280896986Z" level=info msg="StartContainer for \"5df444587282bc10c6f9b4ee6b19c3c2d2b70d57e1155332c80ee0be457c34c9\" returns successfully" Jul 11 00:22:44.281353 containerd[1471]: time="2025-07-11T00:22:44.280954547Z" level=info msg="StartContainer for \"49e81048aaa997ea2fb6ca9f1c43b5a5096be2b4d68823b6767b3848e3671e50\" returns successfully" Jul 11 00:22:44.361154 systemd[1]: run-containerd-runc-k8s.io-5df444587282bc10c6f9b4ee6b19c3c2d2b70d57e1155332c80ee0be457c34c9-runc.H55FSD.mount: Deactivated successfully. Jul 11 00:22:44.503993 kubelet[2568]: I0711 00:22:44.503785 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7ddcd977db-n58hn" podStartSLOduration=64.546146863 podStartE2EDuration="1m17.482168662s" podCreationTimestamp="2025-07-11 00:21:27 +0000 UTC" firstStartedPulling="2025-07-11 00:22:29.627298926 +0000 UTC m=+115.267509454" lastFinishedPulling="2025-07-11 00:22:42.563320725 +0000 UTC m=+128.203531253" observedRunningTime="2025-07-11 00:22:44.475177248 +0000 UTC m=+130.115387796" watchObservedRunningTime="2025-07-11 00:22:44.482168662 +0000 UTC m=+130.122379190" Jul 11 00:22:44.920492 sshd[6029]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:44.927226 systemd[1]: sshd@17-10.0.0.89:22-10.0.0.1:56532.service: Deactivated successfully. Jul 11 00:22:44.930655 systemd[1]: session-18.scope: Deactivated successfully. Jul 11 00:22:44.931933 systemd-logind[1453]: Session 18 logged out. Waiting for processes to exit. Jul 11 00:22:44.933366 systemd-logind[1453]: Removed session 18. Jul 11 00:22:45.553752 kubelet[2568]: I0711 00:22:45.553666 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6fbf9d5d8f-67gld" podStartSLOduration=79.017377975 podStartE2EDuration="1m32.55363253s" podCreationTimestamp="2025-07-11 00:21:13 +0000 UTC" firstStartedPulling="2025-07-11 00:22:29.780244142 +0000 UTC m=+115.420454670" lastFinishedPulling="2025-07-11 00:22:43.316498697 +0000 UTC m=+128.956709225" observedRunningTime="2025-07-11 00:22:45.55211099 +0000 UTC m=+131.192321548" watchObservedRunningTime="2025-07-11 00:22:45.55363253 +0000 UTC m=+131.193843058" Jul 11 00:22:47.634667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2749761932.mount: Deactivated successfully.
Jul 11 00:22:49.112993 containerd[1471]: time="2025-07-11T00:22:49.112899064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:49.296741 containerd[1471]: time="2025-07-11T00:22:49.296653450Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 11 00:22:49.393221 containerd[1471]: time="2025-07-11T00:22:49.392873798Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:49.492954 containerd[1471]: time="2025-07-11T00:22:49.492872798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:49.493847 containerd[1471]: time="2025-07-11T00:22:49.493795736Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 6.176385061s" Jul 11 00:22:49.493847 containerd[1471]: time="2025-07-11T00:22:49.493838709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 11 00:22:49.495193 containerd[1471]: time="2025-07-11T00:22:49.495159891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 11 00:22:49.537108 containerd[1471]: time="2025-07-11T00:22:49.537020231Z" level=info msg="CreateContainer within sandbox \"b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 11 00:22:49.784110 containerd[1471]: time="2025-07-11T00:22:49.784001861Z" level=info msg="CreateContainer within sandbox \"b8b970ff8d3f18832a9336ae78b613d1e076f7e29ff755c60c7c9f35c6b26843\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"df615990c12ecd28bb705df9e077eca27c87e332dfcbd28e83095cfe552dbb03\"" Jul 11 00:22:49.784909 containerd[1471]: time="2025-07-11T00:22:49.784863041Z" level=info msg="StartContainer for \"df615990c12ecd28bb705df9e077eca27c87e332dfcbd28e83095cfe552dbb03\"" Jul 11 00:22:49.821707 systemd[1]: run-containerd-runc-k8s.io-df615990c12ecd28bb705df9e077eca27c87e332dfcbd28e83095cfe552dbb03-runc.qak7R5.mount: Deactivated successfully. Jul 11 00:22:49.833396 systemd[1]: Started cri-containerd-df615990c12ecd28bb705df9e077eca27c87e332dfcbd28e83095cfe552dbb03.scope - libcontainer container df615990c12ecd28bb705df9e077eca27c87e332dfcbd28e83095cfe552dbb03. Jul 11 00:22:49.919536 containerd[1471]: time="2025-07-11T00:22:49.919054935Z" level=info msg="StartContainer for \"df615990c12ecd28bb705df9e077eca27c87e332dfcbd28e83095cfe552dbb03\" returns successfully" Jul 11 00:22:49.944000 systemd[1]: Started sshd@18-10.0.0.89:22-10.0.0.1:40228.service - OpenSSH per-connection server daemon (10.0.0.1:40228). 
Jul 11 00:22:50.043444 sshd[6176]: Accepted publickey for core from 10.0.0.1 port 40228 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:22:50.045895 sshd[6176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:22:50.052031 systemd-logind[1453]: New session 19 of user core. Jul 11 00:22:50.058355 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 11 00:22:50.523850 kubelet[2568]: I0711 00:22:50.523748 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6d7b457878-lc4l4" podStartSLOduration=3.918158591 podStartE2EDuration="32.523726801s" podCreationTimestamp="2025-07-11 00:22:18 +0000 UTC" firstStartedPulling="2025-07-11 00:22:20.889310742 +0000 UTC m=+106.529521270" lastFinishedPulling="2025-07-11 00:22:49.494878922 +0000 UTC m=+135.135089480" observedRunningTime="2025-07-11 00:22:50.518546245 +0000 UTC m=+136.158756773" watchObservedRunningTime="2025-07-11 00:22:50.523726801 +0000 UTC m=+136.163937329" Jul 11 00:22:51.093592 sshd[6176]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:51.102424 systemd[1]: sshd@18-10.0.0.89:22-10.0.0.1:40228.service: Deactivated successfully. Jul 11 00:22:51.107394 systemd[1]: session-19.scope: Deactivated successfully. Jul 11 00:22:51.108723 systemd-logind[1453]: Session 19 logged out. Waiting for processes to exit. Jul 11 00:22:51.110422 systemd-logind[1453]: Removed session 19. Jul 11 00:22:51.569200 containerd[1471]: time="2025-07-11T00:22:51.568132107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:51.571143 containerd[1471]: time="2025-07-11T00:22:51.571012375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 11 00:22:51.573764 containerd[1471]: time="2025-07-11T00:22:51.573707408Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:51.577983 containerd[1471]: time="2025-07-11T00:22:51.577921322Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:22:51.579199 containerd[1471]: time="2025-07-11T00:22:51.579127964Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.083820039s" Jul 11 00:22:51.579199 containerd[1471]: time="2025-07-11T00:22:51.579195704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 11 00:22:51.593941 containerd[1471]: time="2025-07-11T00:22:51.593886931Z" level=info msg="CreateContainer within sandbox \"3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 11 00:22:51.643657 containerd[1471]: time="2025-07-11T00:22:51.643407879Z" level=info msg="CreateContainer within sandbox \"3c470b7db4be7b812073cd1bc6c483b8602af9a07bba1aae2a5b9399a82bc1a3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1a1ef8660b55e29be95b49159fda2a520d26abd312a8174cecf731ef620abada\"" Jul 11 00:22:51.644737 containerd[1471]: time="2025-07-11T00:22:51.644662983Z" level=info msg="StartContainer for \"1a1ef8660b55e29be95b49159fda2a520d26abd312a8174cecf731ef620abada\"" Jul 11 00:22:51.697356 systemd[1]: Started cri-containerd-1a1ef8660b55e29be95b49159fda2a520d26abd312a8174cecf731ef620abada.scope - libcontainer container 1a1ef8660b55e29be95b49159fda2a520d26abd312a8174cecf731ef620abada. Jul 11 00:22:51.743241 containerd[1471]: time="2025-07-11T00:22:51.743166768Z" level=info msg="StartContainer for \"1a1ef8660b55e29be95b49159fda2a520d26abd312a8174cecf731ef620abada\" returns successfully" Jul 11 00:22:57.165153 kernel: hrtimer: interrupt took 3893063 ns Jul 11 00:22:57.192961 systemd[1]: Started sshd@19-10.0.0.89:22-10.0.0.1:39978.service - OpenSSH per-connection server daemon (10.0.0.1:39978). Jul 11 00:22:57.355470 kubelet[2568]: E0711 00:22:57.353287 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:22:57.409762 kubelet[2568]: I0711 00:22:57.409617 2568 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-24rnq" podStartSLOduration=62.027106345 podStartE2EDuration="1m30.409589443s" podCreationTimestamp="2025-07-11 00:21:27 +0000 UTC" firstStartedPulling="2025-07-11 00:22:23.19790545 +0000 UTC m=+108.838115979" lastFinishedPulling="2025-07-11 00:22:51.580388549 +0000 UTC m=+137.220599077" observedRunningTime="2025-07-11 00:22:57.409297414 +0000 UTC m=+143.049507942" watchObservedRunningTime="2025-07-11 00:22:57.409589443 +0000 UTC m=+143.049799971" Jul 11 00:22:57.762498 sshd[6240]: Accepted publickey for core from 10.0.0.1 port 39978 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:22:57.765247 sshd[6240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:22:57.775007 systemd-logind[1453]: New session 20 of user core. Jul 11 00:22:57.781375 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 11 00:22:57.888903 kubelet[2568]: I0711 00:22:57.888827 2568 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 11 00:22:57.910031 kubelet[2568]: I0711 00:22:57.909972 2568 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 11 00:22:58.596273 sshd[6240]: pam_unix(sshd:session): session closed for user core Jul 11 00:22:58.602688 systemd[1]: sshd@19-10.0.0.89:22-10.0.0.1:39978.service: Deactivated successfully. Jul 11 00:22:58.607013 systemd[1]: session-20.scope: Deactivated successfully. Jul 11 00:22:58.609101 systemd-logind[1453]: Session 20 logged out. Waiting for processes to exit. Jul 11 00:22:58.610565 systemd-logind[1453]: Removed session 20.
Jul 11 00:23:01.501233 kubelet[2568]: E0711 00:23:01.501182 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:03.614353 systemd[1]: Started sshd@20-10.0.0.89:22-10.0.0.1:39980.service - OpenSSH per-connection server daemon (10.0.0.1:39980). Jul 11 00:23:03.751174 sshd[6320]: Accepted publickey for core from 10.0.0.1 port 39980 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:23:03.753271 sshd[6320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:03.760833 systemd-logind[1453]: New session 21 of user core. Jul 11 00:23:03.770348 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 11 00:23:04.051001 sshd[6320]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:04.070393 systemd[1]: sshd@20-10.0.0.89:22-10.0.0.1:39980.service: Deactivated successfully. Jul 11 00:23:04.072880 systemd[1]: session-21.scope: Deactivated successfully. Jul 11 00:23:04.073649 systemd-logind[1453]: Session 21 logged out. Waiting for processes to exit. Jul 11 00:23:04.074868 systemd-logind[1453]: Removed session 21. Jul 11 00:23:05.501585 kubelet[2568]: E0711 00:23:05.501525 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:08.501634 kubelet[2568]: E0711 00:23:08.501548 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:09.063657 systemd[1]: Started sshd@21-10.0.0.89:22-10.0.0.1:60748.service - OpenSSH per-connection server daemon (10.0.0.1:60748). Jul 11 00:23:09.134032 sshd[6334]: Accepted publickey for core from 10.0.0.1 port 60748 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:23:09.135945 sshd[6334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:09.140279 systemd-logind[1453]: New session 22 of user core. Jul 11 00:23:09.147333 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 11 00:23:09.293032 sshd[6334]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:09.301049 systemd[1]: sshd@21-10.0.0.89:22-10.0.0.1:60748.service: Deactivated successfully. Jul 11 00:23:09.304925 systemd[1]: session-22.scope: Deactivated successfully. Jul 11 00:23:09.305892 systemd-logind[1453]: Session 22 logged out. Waiting for processes to exit. Jul 11 00:23:09.307060 systemd-logind[1453]: Removed session 22. Jul 11 00:23:11.501565 kubelet[2568]: E0711 00:23:11.501499 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:14.313988 systemd[1]: Started sshd@22-10.0.0.89:22-10.0.0.1:60752.service - OpenSSH per-connection server daemon (10.0.0.1:60752). Jul 11 00:23:14.583450 sshd[6375]: Accepted publickey for core from 10.0.0.1 port 60752 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:23:14.586136 sshd[6375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:14.610645 systemd-logind[1453]: New session 23 of user core. Jul 11 00:23:14.624279 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 11 00:23:14.970345 sshd[6375]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:14.983215 systemd[1]: sshd@22-10.0.0.89:22-10.0.0.1:60752.service: Deactivated successfully. Jul 11 00:23:14.985700 systemd[1]: session-23.scope: Deactivated successfully. Jul 11 00:23:14.987620 systemd-logind[1453]: Session 23 logged out. Waiting for processes to exit. Jul 11 00:23:14.999477 systemd[1]: Started sshd@23-10.0.0.89:22-10.0.0.1:60762.service - OpenSSH per-connection server daemon (10.0.0.1:60762). Jul 11 00:23:15.000799 systemd-logind[1453]: Removed session 23. Jul 11 00:23:15.033496 sshd[6390]: Accepted publickey for core from 10.0.0.1 port 60762 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:23:15.036124 sshd[6390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:15.041876 systemd-logind[1453]: New session 24 of user core. Jul 11 00:23:15.051435 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 11 00:23:15.813324 sshd[6390]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:15.827181 systemd[1]: sshd@23-10.0.0.89:22-10.0.0.1:60762.service: Deactivated successfully. Jul 11 00:23:15.829736 systemd[1]: session-24.scope: Deactivated successfully. Jul 11 00:23:15.832509 systemd-logind[1453]: Session 24 logged out. Waiting for processes to exit. Jul 11 00:23:15.834317 systemd[1]: Started sshd@24-10.0.0.89:22-10.0.0.1:60776.service - OpenSSH per-connection server daemon (10.0.0.1:60776). Jul 11 00:23:15.835406 systemd-logind[1453]: Removed session 24. Jul 11 00:23:15.884347 sshd[6423]: Accepted publickey for core from 10.0.0.1 port 60776 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:23:15.886304 sshd[6423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:15.894366 systemd-logind[1453]: New session 25 of user core. Jul 11 00:23:15.903379 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 11 00:23:17.060554 sshd[6423]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:17.075571 systemd[1]: sshd@24-10.0.0.89:22-10.0.0.1:60776.service: Deactivated successfully. Jul 11 00:23:17.081501 systemd[1]: session-25.scope: Deactivated successfully. Jul 11 00:23:17.083196 systemd-logind[1453]: Session 25 logged out. Waiting for processes to exit. Jul 11 00:23:17.100619 systemd[1]: Started sshd@25-10.0.0.89:22-10.0.0.1:43910.service - OpenSSH per-connection server daemon (10.0.0.1:43910). Jul 11 00:23:17.106430 systemd-logind[1453]: Removed session 25. Jul 11 00:23:17.168187 sshd[6446]: Accepted publickey for core from 10.0.0.1 port 43910 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:23:17.170327 sshd[6446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:17.177802 systemd-logind[1453]: New session 26 of user core. Jul 11 00:23:17.185579 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 11 00:23:17.903520 sshd[6446]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:17.913693 systemd[1]: sshd@25-10.0.0.89:22-10.0.0.1:43910.service: Deactivated successfully. Jul 11 00:23:17.916394 systemd[1]: session-26.scope: Deactivated successfully. Jul 11 00:23:17.919154 systemd-logind[1453]: Session 26 logged out. Waiting for processes to exit. Jul 11 00:23:17.926581 systemd[1]: Started sshd@26-10.0.0.89:22-10.0.0.1:43922.service - OpenSSH per-connection server daemon (10.0.0.1:43922). 
Jul 11 00:23:17.928403 systemd-logind[1453]: Removed session 26. Jul 11 00:23:18.002595 sshd[6460]: Accepted publickey for core from 10.0.0.1 port 43922 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:23:18.005015 sshd[6460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:18.012551 systemd-logind[1453]: New session 27 of user core. Jul 11 00:23:18.020320 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 11 00:23:18.169210 sshd[6460]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:18.175562 systemd[1]: sshd@26-10.0.0.89:22-10.0.0.1:43922.service: Deactivated successfully. Jul 11 00:23:18.178448 systemd[1]: session-27.scope: Deactivated successfully. Jul 11 00:23:18.179860 systemd-logind[1453]: Session 27 logged out. Waiting for processes to exit. Jul 11 00:23:18.182476 systemd-logind[1453]: Removed session 27. Jul 11 00:23:23.185032 systemd[1]: Started sshd@27-10.0.0.89:22-10.0.0.1:43932.service - OpenSSH per-connection server daemon (10.0.0.1:43932). Jul 11 00:23:23.258746 sshd[6476]: Accepted publickey for core from 10.0.0.1 port 43932 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:23:23.261646 sshd[6476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:23.268620 systemd-logind[1453]: New session 28 of user core. Jul 11 00:23:23.277414 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 11 00:23:23.981178 sshd[6476]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:23.986158 systemd[1]: sshd@27-10.0.0.89:22-10.0.0.1:43932.service: Deactivated successfully. Jul 11 00:23:23.988835 systemd[1]: session-28.scope: Deactivated successfully. Jul 11 00:23:23.990636 systemd-logind[1453]: Session 28 logged out. Waiting for processes to exit. Jul 11 00:23:23.992104 systemd-logind[1453]: Removed session 28. Jul 11 00:23:28.995731 systemd[1]: Started sshd@28-10.0.0.89:22-10.0.0.1:59740.service - OpenSSH per-connection server daemon (10.0.0.1:59740). Jul 11 00:23:29.075122 sshd[6491]: Accepted publickey for core from 10.0.0.1 port 59740 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:23:29.076175 sshd[6491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:29.086677 systemd-logind[1453]: New session 29 of user core. Jul 11 00:23:29.093317 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 11 00:23:30.147894 sshd[6491]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:30.154360 systemd[1]: sshd@28-10.0.0.89:22-10.0.0.1:59740.service: Deactivated successfully. Jul 11 00:23:30.158054 systemd[1]: session-29.scope: Deactivated successfully. Jul 11 00:23:30.159139 systemd-logind[1453]: Session 29 logged out. Waiting for processes to exit. Jul 11 00:23:30.161232 systemd-logind[1453]: Removed session 29. Jul 11 00:23:34.953327 systemd[1]: Started sshd@29-10.0.0.89:22-10.0.0.1:59742.service - OpenSSH per-connection server daemon (10.0.0.1:59742). Jul 11 00:23:35.007551 sshd[6534]: Accepted publickey for core from 10.0.0.1 port 59742 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:23:35.009804 sshd[6534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:35.014748 systemd-logind[1453]: New session 30 of user core. Jul 11 00:23:35.022513 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jul 11 00:23:35.329217 sshd[6534]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:35.334878 systemd[1]: sshd@29-10.0.0.89:22-10.0.0.1:59742.service: Deactivated successfully. Jul 11 00:23:35.339644 systemd[1]: session-30.scope: Deactivated successfully. Jul 11 00:23:35.340856 systemd-logind[1453]: Session 30 logged out. Waiting for processes to exit. Jul 11 00:23:35.342110 systemd-logind[1453]: Removed session 30. Jul 11 00:23:36.506385 kubelet[2568]: E0711 00:23:36.506313 2568 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:23:43.037789 systemd[1]: Started sshd@30-10.0.0.89:22-10.0.0.1:50516.service - OpenSSH per-connection server daemon (10.0.0.1:50516). Jul 11 00:23:43.267193 sshd[6557]: Accepted publickey for core from 10.0.0.1 port 50516 ssh2: RSA SHA256:FCL55Ve/xvuldIyR2b7eMaaOT6EnxAosit+pdkZfXh0 Jul 11 00:23:43.269400 sshd[6557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:23:43.274227 systemd-logind[1453]: New session 31 of user core. Jul 11 00:23:43.284267 systemd[1]: Started session-31.scope - Session 31 of User core. Jul 11 00:23:44.610635 sshd[6557]: pam_unix(sshd:session): session closed for user core Jul 11 00:23:44.615530 systemd[1]: sshd@30-10.0.0.89:22-10.0.0.1:50516.service: Deactivated successfully. Jul 11 00:23:44.618481 systemd[1]: session-31.scope: Deactivated successfully. Jul 11 00:23:44.619756 systemd-logind[1453]: Session 31 logged out. Waiting for processes to exit. Jul 11 00:23:44.621875 systemd-logind[1453]: Removed session 31.