Feb 13 19:48:38.888238 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025 Feb 13 19:48:38.888260 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 19:48:38.888271 kernel: BIOS-provided physical RAM map: Feb 13 19:48:38.888277 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 13 19:48:38.888283 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Feb 13 19:48:38.888289 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Feb 13 19:48:38.888297 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Feb 13 19:48:38.888303 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Feb 13 19:48:38.888309 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Feb 13 19:48:38.888315 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Feb 13 19:48:38.888324 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Feb 13 19:48:38.888330 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Feb 13 19:48:38.888336 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Feb 13 19:48:38.888343 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Feb 13 19:48:38.888350 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Feb 13 19:48:38.888357 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Feb 13 19:48:38.888367 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Feb 13 19:48:38.888373 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Feb 13 19:48:38.888380 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Feb 13 19:48:38.888386 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Feb 13 19:48:38.888393 kernel: NX (Execute Disable) protection: active Feb 13 19:48:38.888400 kernel: APIC: Static calls initialized Feb 13 19:48:38.888406 kernel: efi: EFI v2.7 by EDK II Feb 13 19:48:38.888413 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Feb 13 19:48:38.888420 kernel: SMBIOS 2.8 present. 
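The BIOS-e820 map above is the firmware's view of physical memory; everything the kernel can actually use comes from the ranges marked "usable". As a quick cross-check of those figures, here is a minimal Python sketch with the ranges copied straight from the log above:

    # Sum the "usable" BIOS-e820 ranges from the log; each range is inclusive,
    # so a [start, end] entry contributes end - start + 1 bytes.
    usable = [
        (0x0000000000000000, 0x000000000009ffff),
        (0x0000000000100000, 0x00000000007fffff),
        (0x0000000000808000, 0x000000000080afff),
        (0x000000000080c000, 0x000000000080ffff),
        (0x0000000000900000, 0x000000009c8eefff),
        (0x000000009cbff000, 0x000000009cf3ffff),
    ]
    total = sum(end - start + 1 for start, end in usable)
    print(f"{total} bytes = {total // 1024} KiB")   # 2628612096 bytes = 2567004 KiB

That 2567004 KiB is one 4 KiB page more than the "2567000K" total the kernel reports further down, the difference being the first page taken back by the "e820: update [mem 0x00000000-0x00000fff] usable ==> reserved" adjustment.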
Feb 13 19:48:38.888427 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Feb 13 19:48:38.888433 kernel: Hypervisor detected: KVM Feb 13 19:48:38.888442 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 19:48:38.888449 kernel: kvm-clock: using sched offset of 3908922247 cycles Feb 13 19:48:38.888456 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 19:48:38.888463 kernel: tsc: Detected 2794.748 MHz processor Feb 13 19:48:38.888470 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 19:48:38.888477 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 19:48:38.888484 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Feb 13 19:48:38.888491 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Feb 13 19:48:38.888498 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 19:48:38.888508 kernel: Using GB pages for direct mapping Feb 13 19:48:38.888514 kernel: Secure boot disabled Feb 13 19:48:38.888521 kernel: ACPI: Early table checksum verification disabled Feb 13 19:48:38.888528 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Feb 13 19:48:38.888539 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Feb 13 19:48:38.888546 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:48:38.888553 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:48:38.888563 kernel: ACPI: FACS 0x000000009CBDD000 000040 Feb 13 19:48:38.888570 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:48:38.888577 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:48:38.888584 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:48:38.888591 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:48:38.888598 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Feb 13 19:48:38.888605 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Feb 13 19:48:38.888615 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Feb 13 19:48:38.888622 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Feb 13 19:48:38.888629 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Feb 13 19:48:38.888636 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Feb 13 19:48:38.888643 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Feb 13 19:48:38.888650 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Feb 13 19:48:38.888657 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Feb 13 19:48:38.888664 kernel: No NUMA configuration found Feb 13 19:48:38.888671 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Feb 13 19:48:38.888678 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Feb 13 19:48:38.888688 kernel: Zone ranges: Feb 13 19:48:38.888695 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 19:48:38.888702 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Feb 13 19:48:38.888709 kernel: Normal empty Feb 13 19:48:38.888717 kernel: Movable zone start for each node Feb 13 19:48:38.888724 kernel: Early memory node ranges Feb 13 19:48:38.888731 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Feb 13 19:48:38.888738 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Feb 13 19:48:38.888745 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Feb 13 19:48:38.888754 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Feb 13 19:48:38.888761 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Feb 13 19:48:38.888768 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Feb 13 19:48:38.888776 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Feb 13 19:48:38.888783 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 19:48:38.888790 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 13 19:48:38.888797 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Feb 13 19:48:38.888804 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 19:48:38.888811 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Feb 13 19:48:38.888821 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Feb 13 19:48:38.888828 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Feb 13 19:48:38.888835 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 13 19:48:38.888842 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 19:48:38.888849 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 13 19:48:38.888856 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 19:48:38.888863 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 19:48:38.888870 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 19:48:38.888878 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 19:48:38.888885 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 19:48:38.888894 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 19:48:38.888901 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 19:48:38.888908 kernel: TSC deadline timer available Feb 13 19:48:38.888915 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Feb 13 19:48:38.888922 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Feb 13 19:48:38.888929 kernel: kvm-guest: KVM setup pv remote TLB flush Feb 13 19:48:38.888936 kernel: kvm-guest: setup PV sched yield Feb 13 19:48:38.888944 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Feb 13 19:48:38.888951 kernel: Booting paravirtualized kernel on KVM Feb 13 19:48:38.888960 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 19:48:38.888967 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Feb 13 19:48:38.888975 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Feb 13 19:48:38.888982 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Feb 13 19:48:38.888988 kernel: pcpu-alloc: [0] 0 1 2 3 Feb 13 19:48:38.888995 kernel: kvm-guest: PV spinlocks enabled Feb 13 19:48:38.889003 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 19:48:38.889011 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 
19:48:38.889021 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 19:48:38.889028 kernel: random: crng init done Feb 13 19:48:38.889041 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 19:48:38.889063 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 19:48:38.889071 kernel: Fallback order for Node 0: 0 Feb 13 19:48:38.889078 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Feb 13 19:48:38.889085 kernel: Policy zone: DMA32 Feb 13 19:48:38.889092 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 19:48:38.889100 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 171124K reserved, 0K cma-reserved) Feb 13 19:48:38.889110 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 19:48:38.889117 kernel: ftrace: allocating 37921 entries in 149 pages Feb 13 19:48:38.889124 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 19:48:38.889131 kernel: Dynamic Preempt: voluntary Feb 13 19:48:38.889147 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 19:48:38.889157 kernel: rcu: RCU event tracing is enabled. Feb 13 19:48:38.889165 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 19:48:38.889172 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 19:48:38.889180 kernel: Rude variant of Tasks RCU enabled. Feb 13 19:48:38.889187 kernel: Tracing variant of Tasks RCU enabled. Feb 13 19:48:38.889195 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 19:48:38.889202 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 19:48:38.889213 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Feb 13 19:48:38.889220 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 19:48:38.889227 kernel: Console: colour dummy device 80x25 Feb 13 19:48:38.889235 kernel: printk: console [ttyS0] enabled Feb 13 19:48:38.889242 kernel: ACPI: Core revision 20230628 Feb 13 19:48:38.889252 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 13 19:48:38.889259 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 19:48:38.889267 kernel: x2apic enabled Feb 13 19:48:38.889274 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 19:48:38.889282 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Feb 13 19:48:38.889289 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Feb 13 19:48:38.889296 kernel: kvm-guest: setup PV IPIs Feb 13 19:48:38.889304 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 13 19:48:38.889311 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 13 19:48:38.889321 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Feb 13 19:48:38.889329 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 13 19:48:38.889336 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Feb 13 19:48:38.889343 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Feb 13 19:48:38.889351 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 19:48:38.889358 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 19:48:38.889366 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 19:48:38.889373 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 19:48:38.889380 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Feb 13 19:48:38.889390 kernel: RETBleed: Mitigation: untrained return thunk Feb 13 19:48:38.889397 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 19:48:38.889405 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 19:48:38.889412 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Feb 13 19:48:38.889420 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Feb 13 19:48:38.889428 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Feb 13 19:48:38.889435 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 19:48:38.889443 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 19:48:38.889453 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 19:48:38.889460 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 19:48:38.889467 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Feb 13 19:48:38.889475 kernel: Freeing SMP alternatives memory: 32K Feb 13 19:48:38.889482 kernel: pid_max: default: 32768 minimum: 301 Feb 13 19:48:38.889490 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 19:48:38.889497 kernel: landlock: Up and running. Feb 13 19:48:38.889504 kernel: SELinux: Initializing. Feb 13 19:48:38.889512 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:48:38.889521 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:48:38.889529 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Feb 13 19:48:38.889536 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:48:38.889544 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:48:38.889552 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:48:38.889559 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Feb 13 19:48:38.889566 kernel: ... version: 0 Feb 13 19:48:38.889574 kernel: ... bit width: 48 Feb 13 19:48:38.889581 kernel: ... generic registers: 6 Feb 13 19:48:38.889591 kernel: ... value mask: 0000ffffffffffff Feb 13 19:48:38.889598 kernel: ... max period: 00007fffffffffff Feb 13 19:48:38.889606 kernel: ... fixed-purpose events: 0 Feb 13 19:48:38.889613 kernel: ... 
event mask: 000000000000003f Feb 13 19:48:38.889620 kernel: signal: max sigframe size: 1776 Feb 13 19:48:38.889628 kernel: rcu: Hierarchical SRCU implementation. Feb 13 19:48:38.889635 kernel: rcu: Max phase no-delay instances is 400. Feb 13 19:48:38.889642 kernel: smp: Bringing up secondary CPUs ... Feb 13 19:48:38.889650 kernel: smpboot: x86: Booting SMP configuration: Feb 13 19:48:38.889660 kernel: .... node #0, CPUs: #1 #2 #3 Feb 13 19:48:38.889667 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 19:48:38.889674 kernel: smpboot: Max logical packages: 1 Feb 13 19:48:38.889682 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Feb 13 19:48:38.889689 kernel: devtmpfs: initialized Feb 13 19:48:38.889696 kernel: x86/mm: Memory block size: 128MB Feb 13 19:48:38.889704 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Feb 13 19:48:38.889711 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Feb 13 19:48:38.889719 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Feb 13 19:48:38.889728 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Feb 13 19:48:38.889736 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Feb 13 19:48:38.889743 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 19:48:38.889751 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 19:48:38.889758 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 19:48:38.889766 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 19:48:38.889773 kernel: audit: initializing netlink subsys (disabled) Feb 13 19:48:38.889781 kernel: audit: type=2000 audit(1739476118.939:1): state=initialized audit_enabled=0 res=1 Feb 13 19:48:38.889788 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 19:48:38.889797 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 19:48:38.889805 kernel: cpuidle: using governor menu Feb 13 19:48:38.889812 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 19:48:38.889819 kernel: dca service started, version 1.12.1 Feb 13 19:48:38.889827 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Feb 13 19:48:38.889834 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Feb 13 19:48:38.889842 kernel: PCI: Using configuration type 1 for base access Feb 13 19:48:38.889849 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
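The BogoMIPS figures above are derived rather than measured: calibration was skipped and the loops-per-jiffy value was preset from the 2794.748 MHz TSC (lpj=2794748). A small sketch reproducing the printed numbers, assuming the usual HZ=1000 tick rate (which is what makes lpj equal the TSC frequency in kHz):

    # BogoMIPS as printed by the kernel: lpj / (500000 / HZ), truncated (not
    # rounded) to two decimal places when logged.
    HZ = 1000                      # assumed CONFIG_HZ value
    lpj = 2794748                  # "Calibrating delay loop (skipped) preset value"
    per_cpu = lpj * HZ / 500000    # 5589.496 -> logged as 5589.49
    total = 4 * per_cpu            # 22357.984 -> logged as 22357.98 for 4 CPUs
    print(int(per_cpu * 100) / 100, int(total * 100) / 100)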
Feb 13 19:48:38.889857 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 19:48:38.889866 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 19:48:38.889874 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 19:48:38.889881 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 19:48:38.889888 kernel: ACPI: Added _OSI(Module Device) Feb 13 19:48:38.889896 kernel: ACPI: Added _OSI(Processor Device) Feb 13 19:48:38.889903 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 19:48:38.889911 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 19:48:38.889918 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 19:48:38.889925 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 19:48:38.889935 kernel: ACPI: Interpreter enabled Feb 13 19:48:38.889942 kernel: ACPI: PM: (supports S0 S3 S5) Feb 13 19:48:38.889950 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 19:48:38.889957 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 19:48:38.889964 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 19:48:38.889972 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Feb 13 19:48:38.889980 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 19:48:38.890173 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 19:48:38.890308 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Feb 13 19:48:38.890431 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Feb 13 19:48:38.890441 kernel: PCI host bridge to bus 0000:00 Feb 13 19:48:38.890565 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 19:48:38.890676 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 19:48:38.890786 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 19:48:38.890895 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Feb 13 19:48:38.891009 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Feb 13 19:48:38.891159 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Feb 13 19:48:38.891271 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 19:48:38.891407 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Feb 13 19:48:38.891544 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Feb 13 19:48:38.891665 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Feb 13 19:48:38.891789 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Feb 13 19:48:38.891908 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Feb 13 19:48:38.892027 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Feb 13 19:48:38.892169 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 19:48:38.892299 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 19:48:38.892421 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Feb 13 19:48:38.892540 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Feb 13 19:48:38.892664 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Feb 13 19:48:38.892792 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Feb 13 19:48:38.892914 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Feb 13 
19:48:38.893033 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Feb 13 19:48:38.893182 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Feb 13 19:48:38.893311 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 13 19:48:38.893432 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Feb 13 19:48:38.893556 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Feb 13 19:48:38.893675 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Feb 13 19:48:38.893794 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Feb 13 19:48:38.893920 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Feb 13 19:48:38.894069 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Feb 13 19:48:38.894204 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Feb 13 19:48:38.894324 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Feb 13 19:48:38.894446 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Feb 13 19:48:38.894573 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Feb 13 19:48:38.894693 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Feb 13 19:48:38.894703 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 19:48:38.894711 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 19:48:38.894718 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 19:48:38.894726 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 19:48:38.894737 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Feb 13 19:48:38.894744 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Feb 13 19:48:38.894752 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Feb 13 19:48:38.894759 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Feb 13 19:48:38.894767 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Feb 13 19:48:38.894774 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Feb 13 19:48:38.894781 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Feb 13 19:48:38.894789 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Feb 13 19:48:38.894796 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Feb 13 19:48:38.894806 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Feb 13 19:48:38.894813 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Feb 13 19:48:38.894821 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Feb 13 19:48:38.894828 kernel: iommu: Default domain type: Translated Feb 13 19:48:38.894836 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 19:48:38.894844 kernel: efivars: Registered efivars operations Feb 13 19:48:38.894851 kernel: PCI: Using ACPI for IRQ routing Feb 13 19:48:38.894859 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 19:48:38.894866 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Feb 13 19:48:38.894876 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Feb 13 19:48:38.894883 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Feb 13 19:48:38.894890 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Feb 13 19:48:38.895011 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Feb 13 19:48:38.895152 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Feb 13 19:48:38.895271 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 
19:48:38.895281 kernel: vgaarb: loaded Feb 13 19:48:38.895289 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 13 19:48:38.895297 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 13 19:48:38.895309 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 19:48:38.895316 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 19:48:38.895324 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 19:48:38.895331 kernel: pnp: PnP ACPI init Feb 13 19:48:38.895470 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Feb 13 19:48:38.895481 kernel: pnp: PnP ACPI: found 6 devices Feb 13 19:48:38.895489 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 19:48:38.895496 kernel: NET: Registered PF_INET protocol family Feb 13 19:48:38.895507 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 19:48:38.895514 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 19:48:38.895522 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 19:48:38.895530 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 19:48:38.895537 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 19:48:38.895545 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 19:48:38.895552 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:48:38.895559 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:48:38.895567 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 19:48:38.895577 kernel: NET: Registered PF_XDP protocol family Feb 13 19:48:38.895699 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Feb 13 19:48:38.895818 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Feb 13 19:48:38.895928 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 19:48:38.896046 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 19:48:38.896224 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 19:48:38.896333 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Feb 13 19:48:38.896441 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Feb 13 19:48:38.896554 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Feb 13 19:48:38.896564 kernel: PCI: CLS 0 bytes, default 64 Feb 13 19:48:38.896571 kernel: Initialise system trusted keyrings Feb 13 19:48:38.896581 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 19:48:38.896589 kernel: Key type asymmetric registered Feb 13 19:48:38.896598 kernel: Asymmetric key parser 'x509' registered Feb 13 19:48:38.896606 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 19:48:38.896614 kernel: io scheduler mq-deadline registered Feb 13 19:48:38.896621 kernel: io scheduler kyber registered Feb 13 19:48:38.896632 kernel: io scheduler bfq registered Feb 13 19:48:38.896639 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 19:48:38.896647 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Feb 13 19:48:38.896655 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Feb 13 19:48:38.896663 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Feb 13 19:48:38.896670 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled 
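The "(order: N, M bytes)" annotations on the hash-table lines above simply state the allocation size in 4 KiB pages, with order = log2(bytes / 4096). A quick check against three of the values in the log:

    import math

    # size in bytes as reported in the log -> expected order
    tables = {
        "TCP established": 262144,   # order 6
        "TCP bind":        1048576,  # order 8
        "UDP":             65536,    # order 4
    }
    for name, size in tables.items():
        print(name, int(math.log2(size // 4096)))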
Feb 13 19:48:38.896678 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 19:48:38.896685 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 19:48:38.896693 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 19:48:38.896703 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 19:48:38.896824 kernel: rtc_cmos 00:04: RTC can wake from S4 Feb 13 19:48:38.896836 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 19:48:38.896947 kernel: rtc_cmos 00:04: registered as rtc0 Feb 13 19:48:38.897078 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T19:48:38 UTC (1739476118) Feb 13 19:48:38.897191 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Feb 13 19:48:38.897201 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Feb 13 19:48:38.897208 kernel: efifb: probing for efifb Feb 13 19:48:38.897219 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Feb 13 19:48:38.897227 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Feb 13 19:48:38.897234 kernel: efifb: scrolling: redraw Feb 13 19:48:38.897242 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Feb 13 19:48:38.897249 kernel: Console: switching to colour frame buffer device 100x37 Feb 13 19:48:38.897275 kernel: fb0: EFI VGA frame buffer device Feb 13 19:48:38.897285 kernel: pstore: Using crash dump compression: deflate Feb 13 19:48:38.897293 kernel: pstore: Registered efi_pstore as persistent store backend Feb 13 19:48:38.897300 kernel: NET: Registered PF_INET6 protocol family Feb 13 19:48:38.897310 kernel: Segment Routing with IPv6 Feb 13 19:48:38.897318 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 19:48:38.897325 kernel: NET: Registered PF_PACKET protocol family Feb 13 19:48:38.897333 kernel: Key type dns_resolver registered Feb 13 19:48:38.897341 kernel: IPI shorthand broadcast: enabled Feb 13 19:48:38.897349 kernel: sched_clock: Marking stable (545003469, 112154973)->(699086907, -41928465) Feb 13 19:48:38.897356 kernel: registered taskstats version 1 Feb 13 19:48:38.897364 kernel: Loading compiled-in X.509 certificates Feb 13 19:48:38.897372 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93' Feb 13 19:48:38.897382 kernel: Key type .fscrypt registered Feb 13 19:48:38.897390 kernel: Key type fscrypt-provisioning registered Feb 13 19:48:38.897402 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 19:48:38.897410 kernel: ima: Allocated hash algorithm: sha1 Feb 13 19:48:38.897418 kernel: ima: No architecture policies found Feb 13 19:48:38.897425 kernel: clk: Disabling unused clocks Feb 13 19:48:38.897433 kernel: Freeing unused kernel image (initmem) memory: 42840K Feb 13 19:48:38.897441 kernel: Write protecting the kernel read-only data: 36864k Feb 13 19:48:38.897451 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Feb 13 19:48:38.897459 kernel: Run /init as init process Feb 13 19:48:38.897467 kernel: with arguments: Feb 13 19:48:38.897474 kernel: /init Feb 13 19:48:38.897482 kernel: with environment: Feb 13 19:48:38.897490 kernel: HOME=/ Feb 13 19:48:38.897497 kernel: TERM=linux Feb 13 19:48:38.897506 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:48:38.897516 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:48:38.897528 systemd[1]: Detected virtualization kvm. Feb 13 19:48:38.897536 systemd[1]: Detected architecture x86-64. Feb 13 19:48:38.897544 systemd[1]: Running in initrd. Feb 13 19:48:38.897555 systemd[1]: No hostname configured, using default hostname. Feb 13 19:48:38.897565 systemd[1]: Hostname set to . Feb 13 19:48:38.897574 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:48:38.897584 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:48:38.897593 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:48:38.897603 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:48:38.897612 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:48:38.897630 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:48:38.897645 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:48:38.897669 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:48:38.897686 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:48:38.897695 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:48:38.897704 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:48:38.897712 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:48:38.897723 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:48:38.897731 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:48:38.897742 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:48:38.897750 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:48:38.897758 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:48:38.897767 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:48:38.897775 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:48:38.897783 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Feb 13 19:48:38.897792 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:48:38.897800 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:48:38.897808 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:48:38.897819 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:48:38.897827 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:48:38.897835 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:48:38.897844 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:48:38.897852 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:48:38.897860 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:48:38.897875 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:48:38.897884 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:48:38.897895 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:48:38.897903 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:48:38.897911 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:48:38.897920 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:48:38.897929 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:48:38.897940 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:48:38.897949 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:48:38.897975 systemd-journald[191]: Collecting audit messages is disabled. Feb 13 19:48:38.897994 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:48:38.898005 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:48:38.898013 systemd-journald[191]: Journal started Feb 13 19:48:38.898031 systemd-journald[191]: Runtime Journal (/run/log/journal/1b1c6089dee1457b9c8cd52f6dd0c30f) is 6.0M, max 48.3M, 42.2M free. Feb 13 19:48:38.878042 systemd-modules-load[194]: Inserted module 'overlay' Feb 13 19:48:38.901463 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:48:38.905099 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:48:38.907541 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 19:48:38.909077 kernel: Bridge firewalling registered Feb 13 19:48:38.909001 systemd-modules-load[194]: Inserted module 'br_netfilter' Feb 13 19:48:38.910471 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:48:38.911687 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:48:38.913381 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:48:38.916563 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:48:38.918639 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:48:38.925297 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 19:48:38.928806 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:48:38.933929 dracut-cmdline[224]: dracut-dracut-053 Feb 13 19:48:38.936603 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 19:48:38.966472 systemd-resolved[233]: Positive Trust Anchors: Feb 13 19:48:38.966488 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:48:38.966517 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:48:38.976952 systemd-resolved[233]: Defaulting to hostname 'linux'. Feb 13 19:48:38.978773 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:48:38.979914 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:48:39.019080 kernel: SCSI subsystem initialized Feb 13 19:48:39.029075 kernel: Loading iSCSI transport class v2.0-870. Feb 13 19:48:39.039080 kernel: iscsi: registered transport (tcp) Feb 13 19:48:39.059421 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:48:39.059442 kernel: QLogic iSCSI HBA Driver Feb 13 19:48:39.102162 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:48:39.115169 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:48:39.138098 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 19:48:39.138127 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:48:39.139071 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:48:39.179076 kernel: raid6: avx2x4 gen() 30616 MB/s Feb 13 19:48:39.196075 kernel: raid6: avx2x2 gen() 31041 MB/s Feb 13 19:48:39.213129 kernel: raid6: avx2x1 gen() 26137 MB/s Feb 13 19:48:39.213146 kernel: raid6: using algorithm avx2x2 gen() 31041 MB/s Feb 13 19:48:39.231133 kernel: raid6: .... xor() 20027 MB/s, rmw enabled Feb 13 19:48:39.231152 kernel: raid6: using avx2x2 recovery algorithm Feb 13 19:48:39.251073 kernel: xor: automatically using best checksumming function avx Feb 13 19:48:39.398080 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:48:39.410588 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:48:39.426206 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:48:39.439695 systemd-udevd[412]: Using default interface naming scheme 'v255'. Feb 13 19:48:39.445170 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 19:48:39.452180 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:48:39.464842 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation Feb 13 19:48:39.495505 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:48:39.505397 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:48:39.566647 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:48:39.573175 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:48:39.586036 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:48:39.589261 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:48:39.592009 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:48:39.594416 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:48:39.599075 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Feb 13 19:48:39.640647 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 19:48:39.640842 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 19:48:39.640859 kernel: libata version 3.00 loaded. Feb 13 19:48:39.640874 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 19:48:39.649423 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 19:48:39.649442 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 19:48:39.649629 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 19:48:39.649644 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 19:48:39.649814 kernel: AES CTR mode by8 optimization enabled Feb 13 19:48:39.649827 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:48:39.649847 kernel: GPT:9289727 != 19775487 Feb 13 19:48:39.649860 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:48:39.649874 kernel: GPT:9289727 != 19775487 Feb 13 19:48:39.649886 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:48:39.649900 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:48:39.649915 kernel: scsi host0: ahci Feb 13 19:48:39.650136 kernel: scsi host1: ahci Feb 13 19:48:39.650309 kernel: scsi host2: ahci Feb 13 19:48:39.650501 kernel: scsi host3: ahci Feb 13 19:48:39.650685 kernel: scsi host4: ahci Feb 13 19:48:39.650865 kernel: scsi host5: ahci Feb 13 19:48:39.651068 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Feb 13 19:48:39.651085 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Feb 13 19:48:39.651099 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Feb 13 19:48:39.651126 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Feb 13 19:48:39.651139 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Feb 13 19:48:39.651151 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Feb 13 19:48:39.603230 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:48:39.615659 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:48:39.623396 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:48:39.623457 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
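The "Alternate GPT header not at the end of the disk" / "GPT:9289727 != 19775487" messages above are the usual sign of a smaller disk image having been written to a larger virtual disk: the image's primary GPT header still points at a backup header placed for the original size, while virtio_blk reports 19775488 sectors. The headers are rewritten shortly afterwards by disk-uuid.service ("Primary Header is updated ... Secondary Header is updated"). A rough cross-check of the sizes involved, using only numbers from the log:

    SECTOR = 512
    image_sectors = 9_289_727 + 1   # last LBA the original image's GPT expected
    disk_sectors  = 19_775_488      # blocks reported by virtio_blk above
    print(f"original image ~ {image_sectors * SECTOR / 2**30:.2f} GiB")  # ~4.43 GiB
    print(f"virtual disk   ~ {disk_sectors * SECTOR / 2**30:.2f} GiB")   # ~9.43 GiB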
Feb 13 19:48:39.624859 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:48:39.666768 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (461) Feb 13 19:48:39.666785 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457) Feb 13 19:48:39.626033 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:48:39.627904 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:48:39.629090 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:48:39.642298 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:48:39.656065 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:48:39.656223 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:48:39.678918 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 19:48:39.692252 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 19:48:39.693483 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 19:48:39.700242 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 19:48:39.705748 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:48:39.718179 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:48:39.719911 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:48:39.725998 disk-uuid[555]: Primary Header is updated. Feb 13 19:48:39.725998 disk-uuid[555]: Secondary Entries is updated. Feb 13 19:48:39.725998 disk-uuid[555]: Secondary Header is updated. Feb 13 19:48:39.730080 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:48:39.734067 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:48:39.735747 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:48:39.742206 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:48:39.755329 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 19:48:39.958772 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 19:48:39.958832 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 19:48:39.958843 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Feb 13 19:48:39.959566 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 19:48:39.959632 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 19:48:39.961079 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 19:48:39.961097 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 13 19:48:39.962081 kernel: ata3.00: applying bridge limits Feb 13 19:48:39.963077 kernel: ata3.00: configured for UDMA/100 Feb 13 19:48:39.963097 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 19:48:40.010608 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 13 19:48:40.022629 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 19:48:40.022644 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 13 19:48:40.736081 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:48:40.736254 disk-uuid[557]: The operation has completed successfully. Feb 13 19:48:40.766918 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:48:40.767059 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:48:40.794249 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:48:40.799424 sh[596]: Success Feb 13 19:48:40.811079 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 13 19:48:40.844348 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:48:40.855612 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:48:40.860522 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 19:48:40.872471 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d Feb 13 19:48:40.872502 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:48:40.872518 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:48:40.874437 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:48:40.874453 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:48:40.879036 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:48:40.880100 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:48:40.885215 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:48:40.886870 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:48:40.895265 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 19:48:40.895303 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:48:40.895314 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:48:40.899161 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:48:40.908488 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:48:40.910719 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 19:48:40.919730 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Feb 13 19:48:40.927260 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 19:48:40.982304 ignition[686]: Ignition 2.19.0 Feb 13 19:48:40.982314 ignition[686]: Stage: fetch-offline Feb 13 19:48:40.982351 ignition[686]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:48:40.982361 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:48:40.982449 ignition[686]: parsed url from cmdline: "" Feb 13 19:48:40.982453 ignition[686]: no config URL provided Feb 13 19:48:40.982458 ignition[686]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:48:40.982466 ignition[686]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:48:40.982497 ignition[686]: op(1): [started] loading QEMU firmware config module Feb 13 19:48:40.982504 ignition[686]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 19:48:40.990088 ignition[686]: op(1): [finished] loading QEMU firmware config module Feb 13 19:48:41.008403 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:48:41.019182 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:48:41.038513 ignition[686]: parsing config with SHA512: 43c7c34b15ee6ab0c9ea4155284b46c80d15da294597aa63e0db5c7ee353be6eb5f82f56ec984cfffffc2c386531f8c62aec601d965a5005593d7f1f4b16a2cd Feb 13 19:48:41.042813 unknown[686]: fetched base config from "system" Feb 13 19:48:41.042831 unknown[686]: fetched user config from "qemu" Feb 13 19:48:41.043733 ignition[686]: fetch-offline: fetch-offline passed Feb 13 19:48:41.043837 ignition[686]: Ignition finished successfully Feb 13 19:48:41.045794 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:48:41.053264 systemd-networkd[785]: lo: Link UP Feb 13 19:48:41.053273 systemd-networkd[785]: lo: Gained carrier Feb 13 19:48:41.056062 systemd-networkd[785]: Enumeration completed Feb 13 19:48:41.056168 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:48:41.056765 systemd[1]: Reached target network.target - Network. Feb 13 19:48:41.057023 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 19:48:41.062647 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:48:41.062652 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:48:41.063583 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:48:41.063798 systemd-networkd[785]: eth0: Link UP Feb 13 19:48:41.063803 systemd-networkd[785]: eth0: Gained carrier Feb 13 19:48:41.063810 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
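Ignition logs the SHA512 of whatever config it ends up parsing (here the one fetched via the QEMU firmware config module, qemu_fw_cfg). The same style of digest can be reproduced over a local config file with Python's hashlib; the file name below is a placeholder for illustration, not a path taken from the log:

    import hashlib

    # Hypothetical local copy of the Ignition config; hash it the same way the
    # "parsing config with SHA512: ..." line reports.
    with open("config.ign", "rb") as f:
        print(hashlib.sha512(f.read()).hexdigest())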
Feb 13 19:48:41.080361 ignition[788]: Ignition 2.19.0 Feb 13 19:48:41.080958 ignition[788]: Stage: kargs Feb 13 19:48:41.081166 ignition[788]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:48:41.081177 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:48:41.082116 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:48:41.081960 ignition[788]: kargs: kargs passed Feb 13 19:48:41.082014 ignition[788]: Ignition finished successfully Feb 13 19:48:41.089946 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:48:41.102409 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:48:41.116581 ignition[797]: Ignition 2.19.0 Feb 13 19:48:41.116592 ignition[797]: Stage: disks Feb 13 19:48:41.116763 ignition[797]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:48:41.116773 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:48:41.117768 ignition[797]: disks: disks passed Feb 13 19:48:41.117819 ignition[797]: Ignition finished successfully Feb 13 19:48:41.123879 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:48:41.126174 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:48:41.126706 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:48:41.127098 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:48:41.127455 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:48:41.127820 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:48:41.146193 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:48:41.159191 systemd-resolved[233]: Detected conflict on linux IN A 10.0.0.12 Feb 13 19:48:41.159205 systemd-resolved[233]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. Feb 13 19:48:41.161761 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 19:48:41.168542 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:48:41.176176 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:48:41.260085 kernel: EXT4-fs (vda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none. Feb 13 19:48:41.261203 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:48:41.262466 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:48:41.279254 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:48:41.282508 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:48:41.288008 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (814) Feb 13 19:48:41.288040 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 19:48:41.284221 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 19:48:41.291713 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:48:41.291733 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:48:41.284273 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Feb 13 19:48:41.284301 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:48:41.291683 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:48:41.294063 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 19:48:41.301180 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:48:41.303111 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:48:41.333095 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:48:41.338141 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:48:41.343146 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:48:41.347773 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:48:41.433237 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:48:41.439221 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:48:41.443419 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:48:41.448074 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 19:48:41.467950 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 19:48:41.470158 ignition[928]: INFO : Ignition 2.19.0 Feb 13 19:48:41.470158 ignition[928]: INFO : Stage: mount Feb 13 19:48:41.470158 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:48:41.470158 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:48:41.470158 ignition[928]: INFO : mount: mount passed Feb 13 19:48:41.470158 ignition[928]: INFO : Ignition finished successfully Feb 13 19:48:41.471652 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:48:41.479297 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:48:41.871823 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:48:41.881293 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:48:41.888075 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (942) Feb 13 19:48:41.890533 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 19:48:41.890556 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:48:41.890579 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:48:41.894077 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:48:41.895148 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:48:41.916856 ignition[959]: INFO : Ignition 2.19.0 Feb 13 19:48:41.916856 ignition[959]: INFO : Stage: files Feb 13 19:48:41.918689 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:48:41.918689 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:48:41.921886 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:48:41.923316 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:48:41.923316 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:48:41.928487 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:48:41.930124 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:48:41.931934 unknown[959]: wrote ssh authorized keys file for user: core Feb 13 19:48:41.933251 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:48:41.934822 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 19:48:41.934822 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 19:48:41.971198 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 19:48:42.108520 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 19:48:42.108520 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:48:42.112839 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:48:42.114559 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:48:42.116297 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:48:42.117961 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:48:42.119687 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:48:42.121378 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:48:42.123133 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:48:42.124998 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:48:42.127124 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:48:42.129109 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:48:42.131606 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:48:42.134016 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:48:42.136155 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 19:48:42.519173 systemd-networkd[785]: eth0: Gained IPv6LL Feb 13 19:48:42.654685 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 19:48:42.941935 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:48:42.941935 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 19:48:42.945594 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:48:42.947673 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:48:42.947673 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 19:48:42.947673 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 13 19:48:42.951910 ignition[959]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:48:42.953774 ignition[959]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:48:42.955653 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 19:48:42.955653 ignition[959]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 19:48:42.976905 ignition[959]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:48:42.982816 ignition[959]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:48:42.984445 ignition[959]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 19:48:42.984445 ignition[959]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:48:42.984445 ignition[959]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:48:42.984445 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:48:42.984445 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:48:42.984445 ignition[959]: INFO : files: files passed Feb 13 19:48:42.984445 ignition[959]: INFO : Ignition finished successfully Feb 13 19:48:42.996934 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:48:43.010270 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:48:43.012646 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Feb 13 19:48:43.013754 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:48:43.013872 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:48:43.027750 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 19:48:43.031691 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:48:43.033351 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:48:43.034880 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:48:43.038193 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:48:43.040840 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:48:43.056286 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:48:43.083958 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:48:43.084118 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:48:43.084702 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:48:43.088067 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:48:43.088676 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:48:43.093950 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:48:43.114849 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:48:43.127198 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:48:43.136383 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:48:43.137764 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:48:43.140079 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:48:43.142034 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:48:43.142159 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:48:43.144273 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:48:43.145957 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:48:43.147954 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:48:43.149987 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:48:43.152130 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:48:43.154418 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:48:43.156487 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:48:43.158794 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:48:43.160772 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:48:43.163149 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:48:43.164959 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:48:43.165079 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:48:43.167456 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Feb 13 19:48:43.168932 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:48:43.171253 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:48:43.171380 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:48:43.173812 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:48:43.174004 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:48:43.176438 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:48:43.176553 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:48:43.178655 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:48:43.180462 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:48:43.184153 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:48:43.186603 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:48:43.188783 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:48:43.190743 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:48:43.190885 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:48:43.192888 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:48:43.193024 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:48:43.195498 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:48:43.195645 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:48:43.197695 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:48:43.197839 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:48:43.211361 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:48:43.213484 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:48:43.213632 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:48:43.216860 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:48:43.218311 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:48:43.218592 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:48:43.225068 ignition[1013]: INFO : Ignition 2.19.0 Feb 13 19:48:43.225068 ignition[1013]: INFO : Stage: umount Feb 13 19:48:43.225068 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:48:43.225068 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:48:43.225068 ignition[1013]: INFO : umount: umount passed Feb 13 19:48:43.225068 ignition[1013]: INFO : Ignition finished successfully Feb 13 19:48:43.221066 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:48:43.221212 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:48:43.226844 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:48:43.227331 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:48:43.231535 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:48:43.231664 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:48:43.235863 systemd[1]: Stopped target network.target - Network. 
Feb 13 19:48:43.237529 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:48:43.237610 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:48:43.239575 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:48:43.239639 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:48:43.241669 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:48:43.241730 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:48:43.243644 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:48:43.243710 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:48:43.246490 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:48:43.248693 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:48:43.252107 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:48:43.256173 systemd-networkd[785]: eth0: DHCPv6 lease lost Feb 13 19:48:43.259474 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:48:43.259732 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:48:43.262753 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:48:43.262912 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:48:43.266980 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:48:43.267038 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:48:43.282309 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:48:43.282847 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:48:43.282972 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:48:43.285772 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:48:43.285836 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:48:43.286624 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:48:43.286695 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:48:43.287035 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:48:43.287104 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:48:43.287751 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:48:43.306000 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:48:43.306269 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:48:43.307206 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:48:43.307267 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:48:43.310428 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:48:43.310469 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:48:43.310716 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:48:43.310775 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:48:43.311626 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:48:43.311673 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Feb 13 19:48:43.319041 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:48:43.319114 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:48:43.323876 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:48:43.324332 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:48:43.324410 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:48:43.324735 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:48:43.324792 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:48:43.325502 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:48:43.325633 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:48:43.339436 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:48:43.339574 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:48:43.408685 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:48:43.408847 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:48:43.410912 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:48:43.412003 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:48:43.412083 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:48:43.424329 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:48:43.433788 systemd[1]: Switching root. Feb 13 19:48:43.465220 systemd-journald[191]: Journal stopped Feb 13 19:48:44.595436 systemd-journald[191]: Received SIGTERM from PID 1 (systemd). Feb 13 19:48:44.595524 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:48:44.595542 kernel: SELinux: policy capability open_perms=1 Feb 13 19:48:44.595554 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:48:44.595569 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:48:44.595580 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:48:44.595591 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:48:44.595602 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:48:44.595614 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:48:44.595625 kernel: audit: type=1403 audit(1739476123.818:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:48:44.595637 systemd[1]: Successfully loaded SELinux policy in 41.126ms. Feb 13 19:48:44.595662 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.985ms. Feb 13 19:48:44.595676 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:48:44.595688 systemd[1]: Detected virtualization kvm. Feb 13 19:48:44.595700 systemd[1]: Detected architecture x86-64. Feb 13 19:48:44.595712 systemd[1]: Detected first boot. Feb 13 19:48:44.595723 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:48:44.595735 zram_generator::config[1058]: No configuration found. Feb 13 19:48:44.595754 systemd[1]: Populated /etc with preset unit settings. 
Feb 13 19:48:44.595769 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:48:44.595781 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:48:44.595793 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:48:44.595806 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:48:44.595820 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:48:44.595832 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:48:44.595843 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:48:44.595855 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:48:44.595869 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:48:44.595881 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:48:44.595902 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:48:44.595914 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:48:44.595926 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:48:44.595939 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:48:44.595951 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:48:44.595962 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:48:44.595975 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:48:44.595989 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 19:48:44.596000 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:48:44.596012 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:48:44.596024 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:48:44.596036 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:48:44.596064 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:48:44.596082 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:48:44.596094 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:48:44.596111 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:48:44.596123 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:48:44.596144 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:48:44.596160 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:48:44.596182 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:48:44.596202 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:48:44.596224 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:48:44.596246 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:48:44.596267 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Feb 13 19:48:44.596294 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:48:44.596314 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:48:44.596331 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:48:44.596344 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:48:44.596356 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:48:44.596373 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:48:44.596385 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:48:44.596398 systemd[1]: Reached target machines.target - Containers. Feb 13 19:48:44.596410 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:48:44.596424 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:48:44.596436 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:48:44.596448 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:48:44.596461 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:48:44.596472 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:48:44.596484 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:48:44.596496 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:48:44.596507 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:48:44.596523 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:48:44.596534 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:48:44.596546 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:48:44.596557 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:48:44.596569 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:48:44.596581 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:48:44.596593 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:48:44.596604 kernel: loop: module loaded Feb 13 19:48:44.596615 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:48:44.596629 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:48:44.596641 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:48:44.596653 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:48:44.596665 systemd[1]: Stopped verity-setup.service. Feb 13 19:48:44.596678 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:48:44.596720 systemd-journald[1132]: Collecting audit messages is disabled. Feb 13 19:48:44.596746 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Feb 13 19:48:44.596759 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:48:44.596771 kernel: ACPI: bus type drm_connector registered Feb 13 19:48:44.596782 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:48:44.596794 kernel: fuse: init (API version 7.39) Feb 13 19:48:44.596805 systemd-journald[1132]: Journal started Feb 13 19:48:44.596831 systemd-journald[1132]: Runtime Journal (/run/log/journal/1b1c6089dee1457b9c8cd52f6dd0c30f) is 6.0M, max 48.3M, 42.2M free. Feb 13 19:48:44.343992 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:48:44.367123 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:48:44.367618 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:48:44.601233 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:48:44.602080 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:48:44.603355 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:48:44.604798 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:48:44.606308 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:48:44.607966 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:48:44.609681 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:48:44.609850 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:48:44.611348 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:48:44.611517 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:48:44.613174 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:48:44.613383 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:48:44.614972 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:48:44.615149 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:48:44.616746 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:48:44.616960 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:48:44.618447 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:48:44.618615 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:48:44.620103 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:48:44.621672 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:48:44.623226 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:48:44.637853 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:48:44.649167 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:48:44.651752 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:48:44.653217 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:48:44.653335 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:48:44.655720 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Feb 13 19:48:44.658617 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:48:44.661067 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:48:44.662554 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:48:44.665203 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:48:44.667664 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:48:44.668959 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:48:44.672023 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:48:44.673437 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:48:44.674810 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:48:44.679923 systemd-journald[1132]: Time spent on flushing to /var/log/journal/1b1c6089dee1457b9c8cd52f6dd0c30f is 19.254ms for 993 entries. Feb 13 19:48:44.679923 systemd-journald[1132]: System Journal (/var/log/journal/1b1c6089dee1457b9c8cd52f6dd0c30f) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:48:44.707111 systemd-journald[1132]: Received client request to flush runtime journal. Feb 13 19:48:44.682308 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:48:44.686320 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:48:44.689447 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:48:44.691015 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:48:44.692739 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:48:44.694683 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:48:44.704038 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:48:44.707113 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:48:44.718085 kernel: loop0: detected capacity change from 0 to 142488 Feb 13 19:48:44.723641 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:48:44.728087 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:48:44.730108 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:48:44.733767 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:48:44.747597 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:48:44.748472 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:48:44.749322 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:48:44.755585 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 19:48:44.758504 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:48:44.768256 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Feb 13 19:48:44.772145 kernel: loop1: detected capacity change from 0 to 210664 Feb 13 19:48:44.793063 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Feb 13 19:48:44.793082 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Feb 13 19:48:44.799418 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:48:44.804071 kernel: loop2: detected capacity change from 0 to 140768 Feb 13 19:48:44.846096 kernel: loop3: detected capacity change from 0 to 142488 Feb 13 19:48:44.858084 kernel: loop4: detected capacity change from 0 to 210664 Feb 13 19:48:44.866073 kernel: loop5: detected capacity change from 0 to 140768 Feb 13 19:48:44.878328 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:48:44.878920 (sd-merge)[1197]: Merged extensions into '/usr'. Feb 13 19:48:44.883214 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:48:44.883326 systemd[1]: Reloading... Feb 13 19:48:44.944079 zram_generator::config[1223]: No configuration found. Feb 13 19:48:45.013541 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:48:45.072750 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:48:45.122786 systemd[1]: Reloading finished in 238 ms. Feb 13 19:48:45.159000 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:48:45.160787 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:48:45.171236 systemd[1]: Starting ensure-sysext.service... Feb 13 19:48:45.173878 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:48:45.181663 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:48:45.181685 systemd[1]: Reloading... Feb 13 19:48:45.194696 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:48:45.195155 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:48:45.196138 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:48:45.196435 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Feb 13 19:48:45.196516 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Feb 13 19:48:45.199886 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:48:45.199898 systemd-tmpfiles[1261]: Skipping /boot Feb 13 19:48:45.210390 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:48:45.210405 systemd-tmpfiles[1261]: Skipping /boot Feb 13 19:48:45.244308 zram_generator::config[1291]: No configuration found. Feb 13 19:48:45.347022 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:48:45.396078 systemd[1]: Reloading finished in 213 ms. Feb 13 19:48:45.417581 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
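Editor's note: the (sd-merge) messages above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr. A quick way to inspect or re-run that merge on a running system is sketched below; the listed directories are the standard sysext search locations, and which one a given image lives in is an assumption here.

    # Illustrative only: inspect and refresh merged system extensions.
    systemd-sysext status                                 # hierarchies and the extensions merged into them
    ls /etc/extensions /var/lib/extensions 2>/dev/null    # common locations for *.raw sysext images
    systemd-sysext refresh                                # unmerge and re-merge all installed extensions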
Feb 13 19:48:45.429833 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:48:45.441065 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:48:45.444398 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:48:45.447529 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:48:45.452315 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:48:45.457607 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:48:45.463396 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:48:45.467808 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:48:45.468040 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:48:45.474147 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:48:45.478305 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:48:45.483197 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:48:45.484838 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:48:45.487426 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:48:45.488685 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:48:45.490346 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:48:45.492677 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:48:45.492980 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:48:45.496519 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:48:45.496790 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:48:45.497643 systemd-udevd[1338]: Using default interface naming scheme 'v255'. Feb 13 19:48:45.499286 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:48:45.499534 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:48:45.508868 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:48:45.509165 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:48:45.514983 augenrules[1356]: No rules Feb 13 19:48:45.519438 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:48:45.521588 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:48:45.524277 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:48:45.530634 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:48:45.532974 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:48:45.550552 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Feb 13 19:48:45.557177 systemd[1]: Finished ensure-sysext.service. Feb 13 19:48:45.561642 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:48:45.561964 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:48:45.567692 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:48:45.580315 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:48:45.586241 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:48:45.591255 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:48:45.592494 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:48:45.595943 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:48:45.602392 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1374) Feb 13 19:48:45.601434 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:48:45.602882 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:48:45.604070 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:48:45.606218 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:48:45.607011 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:48:45.609034 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:48:45.609269 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:48:45.611235 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:48:45.611411 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:48:45.623163 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 19:48:45.628307 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:48:45.628545 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:48:45.632644 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:48:45.632709 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:48:45.632751 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:48:45.667989 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:48:45.678556 systemd-resolved[1332]: Positive Trust Anchors: Feb 13 19:48:45.678577 systemd-resolved[1332]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:48:45.678608 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:48:45.681438 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:48:45.683776 systemd-resolved[1332]: Defaulting to hostname 'linux'. Feb 13 19:48:45.689705 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 13 19:48:45.688231 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:48:45.690094 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:48:45.693093 kernel: ACPI: button: Power Button [PWRF] Feb 13 19:48:45.699268 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:48:45.706728 systemd-networkd[1399]: lo: Link UP Feb 13 19:48:45.706742 systemd-networkd[1399]: lo: Gained carrier Feb 13 19:48:45.710423 systemd-networkd[1399]: Enumeration completed Feb 13 19:48:45.710832 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:48:45.710836 systemd-networkd[1399]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:48:45.711139 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:48:45.712760 systemd[1]: Reached target network.target - Network. Feb 13 19:48:45.715262 systemd-networkd[1399]: eth0: Link UP Feb 13 19:48:45.715274 systemd-networkd[1399]: eth0: Gained carrier Feb 13 19:48:45.715289 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:48:45.722457 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:48:45.729166 systemd-networkd[1399]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:48:45.730156 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:48:45.730419 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. Feb 13 19:48:46.737327 systemd-resolved[1332]: Clock change detected. Flushing caches. Feb 13 19:48:46.737385 systemd-timesyncd[1400]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:48:46.737434 systemd-timesyncd[1400]: Initial clock synchronization to Thu 2025-02-13 19:48:46.737286 UTC. Feb 13 19:48:46.737818 systemd[1]: Reached target time-set.target - System Time Set. 
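Editor's note: systemd-resolved above falls back to the built-in root trust anchor and the hostname 'linux', and systemd-timesyncd then synchronizes against 10.0.0.1. On a booted system the equivalent runtime state can be inspected with the commands below; this is a sketch, and the output format varies by systemd version.

    # Illustrative only: inspect name resolution and NTP state at runtime.
    resolvectl status              # per-link DNS servers and DNSSEC trust anchor state
    timedatectl timesync-status    # current NTP server, poll interval, and offset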
Feb 13 19:48:46.744768 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 19:48:46.748047 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Feb 13 19:48:46.749833 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 19:48:46.750011 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 19:48:46.750199 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 19:48:46.770091 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:48:46.786076 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:48:46.786355 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:48:46.793755 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 19:48:46.804869 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:48:46.868155 kernel: kvm_amd: TSC scaling supported Feb 13 19:48:46.868245 kernel: kvm_amd: Nested Virtualization enabled Feb 13 19:48:46.868289 kernel: kvm_amd: Nested Paging enabled Feb 13 19:48:46.868302 kernel: kvm_amd: LBR virtualization supported Feb 13 19:48:46.869303 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Feb 13 19:48:46.869327 kernel: kvm_amd: Virtual GIF supported Feb 13 19:48:46.891780 kernel: EDAC MC: Ver: 3.0.0 Feb 13 19:48:46.906132 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:48:46.926483 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:48:46.943006 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:48:46.951245 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:48:46.987998 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:48:46.989804 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:48:46.991114 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:48:46.992372 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:48:46.993653 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:48:46.995123 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:48:46.996322 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:48:46.997615 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:48:46.998887 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:48:46.998915 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:48:46.999846 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:48:47.001675 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:48:47.004535 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:48:47.017580 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:48:47.020167 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:48:47.021782 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Feb 13 19:48:47.022940 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:48:47.023946 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:48:47.024921 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:48:47.024954 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:48:47.025924 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:48:47.028016 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:48:47.031733 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:48:47.032111 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:48:47.036490 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:48:47.038271 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:48:47.042894 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:48:47.043383 jq[1438]: false Feb 13 19:48:47.045051 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:48:47.049328 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:48:47.055473 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:48:47.063893 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:48:47.064207 extend-filesystems[1439]: Found loop3 Feb 13 19:48:47.067353 extend-filesystems[1439]: Found loop4 Feb 13 19:48:47.067353 extend-filesystems[1439]: Found loop5 Feb 13 19:48:47.067353 extend-filesystems[1439]: Found sr0 Feb 13 19:48:47.067353 extend-filesystems[1439]: Found vda Feb 13 19:48:47.067353 extend-filesystems[1439]: Found vda1 Feb 13 19:48:47.067353 extend-filesystems[1439]: Found vda2 Feb 13 19:48:47.067353 extend-filesystems[1439]: Found vda3 Feb 13 19:48:47.067353 extend-filesystems[1439]: Found usr Feb 13 19:48:47.067353 extend-filesystems[1439]: Found vda4 Feb 13 19:48:47.067353 extend-filesystems[1439]: Found vda6 Feb 13 19:48:47.067353 extend-filesystems[1439]: Found vda7 Feb 13 19:48:47.067353 extend-filesystems[1439]: Found vda9 Feb 13 19:48:47.067353 extend-filesystems[1439]: Checking size of /dev/vda9 Feb 13 19:48:47.065458 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:48:47.069089 dbus-daemon[1437]: [system] SELinux support is enabled Feb 13 19:48:47.087495 extend-filesystems[1439]: Resized partition /dev/vda9 Feb 13 19:48:47.066034 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:48:47.092866 extend-filesystems[1460]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:48:47.098867 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:48:47.069608 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:48:47.077856 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:48:47.080206 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Feb 13 19:48:47.083318 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:48:47.092952 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:48:47.093545 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:48:47.093904 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:48:47.094090 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:48:47.100226 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:48:47.102870 jq[1457]: true Feb 13 19:48:47.101062 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:48:47.112816 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1367) Feb 13 19:48:47.121681 jq[1463]: true Feb 13 19:48:47.122617 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:48:47.147099 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:48:47.147135 extend-filesystems[1460]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:48:47.147135 extend-filesystems[1460]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:48:47.147135 extend-filesystems[1460]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:48:47.158744 update_engine[1453]: I20250213 19:48:47.134693 1453 main.cc:92] Flatcar Update Engine starting Feb 13 19:48:47.158744 update_engine[1453]: I20250213 19:48:47.150310 1453 update_check_scheduler.cc:74] Next update check in 9m25s Feb 13 19:48:47.142172 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:48:47.162928 extend-filesystems[1439]: Resized filesystem in /dev/vda9 Feb 13 19:48:47.142198 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:48:47.170455 tar[1462]: linux-amd64/helm Feb 13 19:48:47.145804 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:48:47.145822 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:48:47.147966 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:48:47.149527 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:48:47.152266 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 19:48:47.152294 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 19:48:47.159025 systemd-logind[1450]: New seat seat0. Feb 13 19:48:47.171307 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:48:47.176337 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:48:47.191202 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Feb 13 19:48:47.210860 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:48:47.232248 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:48:47.240170 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:48:47.246912 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:48:47.255588 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:48:47.255838 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:48:47.270011 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:48:47.283863 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:48:47.289911 bash[1493]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:48:47.296121 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:48:47.299177 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:48:47.300637 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:48:47.302994 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:48:47.306302 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:48:47.397558 containerd[1464]: time="2025-02-13T19:48:47.397053626Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 19:48:47.424456 containerd[1464]: time="2025-02-13T19:48:47.424400361Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:47.426692 containerd[1464]: time="2025-02-13T19:48:47.426658957Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:48:47.426784 containerd[1464]: time="2025-02-13T19:48:47.426768502Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:48:47.426853 containerd[1464]: time="2025-02-13T19:48:47.426840477Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:48:47.427088 containerd[1464]: time="2025-02-13T19:48:47.427071190Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:48:47.427152 containerd[1464]: time="2025-02-13T19:48:47.427139197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:47.427261 containerd[1464]: time="2025-02-13T19:48:47.427243062Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:48:47.427308 containerd[1464]: time="2025-02-13T19:48:47.427296532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:47.427727 containerd[1464]: time="2025-02-13T19:48:47.427687876Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:48:47.427785 containerd[1464]: time="2025-02-13T19:48:47.427771353Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:47.427833 containerd[1464]: time="2025-02-13T19:48:47.427820365Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:48:47.428620 containerd[1464]: time="2025-02-13T19:48:47.427876400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:47.428620 containerd[1464]: time="2025-02-13T19:48:47.427971939Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:47.428620 containerd[1464]: time="2025-02-13T19:48:47.428222669Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:47.428620 containerd[1464]: time="2025-02-13T19:48:47.428352022Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:48:47.428620 containerd[1464]: time="2025-02-13T19:48:47.428364375Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:48:47.428620 containerd[1464]: time="2025-02-13T19:48:47.428459964Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:48:47.428620 containerd[1464]: time="2025-02-13T19:48:47.428524134Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:48:47.434380 containerd[1464]: time="2025-02-13T19:48:47.434350230Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:48:47.434426 containerd[1464]: time="2025-02-13T19:48:47.434405694Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:48:47.434455 containerd[1464]: time="2025-02-13T19:48:47.434427555Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:48:47.434455 containerd[1464]: time="2025-02-13T19:48:47.434447583Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:48:47.434530 containerd[1464]: time="2025-02-13T19:48:47.434467420Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:48:47.434813 containerd[1464]: time="2025-02-13T19:48:47.434643701Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:48:47.435019 containerd[1464]: time="2025-02-13T19:48:47.434958381Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:48:47.435106 containerd[1464]: time="2025-02-13T19:48:47.435086080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Feb 13 19:48:47.435187 containerd[1464]: time="2025-02-13T19:48:47.435108342Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:48:47.435187 containerd[1464]: time="2025-02-13T19:48:47.435123982Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:48:47.435187 containerd[1464]: time="2025-02-13T19:48:47.435142005Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:48:47.435415 containerd[1464]: time="2025-02-13T19:48:47.435345287Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:48:47.437685 containerd[1464]: time="2025-02-13T19:48:47.435488215Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:48:47.437685 containerd[1464]: time="2025-02-13T19:48:47.435680445Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:48:47.437685 containerd[1464]: time="2025-02-13T19:48:47.435706153Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:48:47.437685 containerd[1464]: time="2025-02-13T19:48:47.435740608Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:48:47.437685 containerd[1464]: time="2025-02-13T19:48:47.435757580Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:48:47.437685 containerd[1464]: time="2025-02-13T19:48:47.435773079Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:48:47.437685 containerd[1464]: time="2025-02-13T19:48:47.435799418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:48:47.437685 containerd[1464]: time="2025-02-13T19:48:47.435818684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:48:47.437685 containerd[1464]: time="2025-02-13T19:48:47.435835406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:48:47.437685 containerd[1464]: time="2025-02-13T19:48:47.435853820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:48:47.437685 containerd[1464]: time="2025-02-13T19:48:47.435869369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:48:47.437685 containerd[1464]: time="2025-02-13T19:48:47.435887193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:48:47.437685 containerd[1464]: time="2025-02-13T19:48:47.435902612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:48:47.437685 containerd[1464]: time="2025-02-13T19:48:47.435919954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:48:47.438053 containerd[1464]: time="2025-02-13T19:48:47.435936996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Feb 13 19:48:47.438053 containerd[1464]: time="2025-02-13T19:48:47.435956964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:48:47.438053 containerd[1464]: time="2025-02-13T19:48:47.435972783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:48:47.438053 containerd[1464]: time="2025-02-13T19:48:47.435988974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:48:47.438053 containerd[1464]: time="2025-02-13T19:48:47.436005505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:48:47.438053 containerd[1464]: time="2025-02-13T19:48:47.436024791Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:48:47.438053 containerd[1464]: time="2025-02-13T19:48:47.436051301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:48:47.438053 containerd[1464]: time="2025-02-13T19:48:47.436066970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:48:47.438053 containerd[1464]: time="2025-02-13T19:48:47.436081397Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:48:47.438053 containerd[1464]: time="2025-02-13T19:48:47.436141620Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:48:47.438053 containerd[1464]: time="2025-02-13T19:48:47.436161627Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:48:47.438053 containerd[1464]: time="2025-02-13T19:48:47.436175383Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:48:47.438053 containerd[1464]: time="2025-02-13T19:48:47.436196172Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:48:47.438360 containerd[1464]: time="2025-02-13T19:48:47.436209688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:48:47.438360 containerd[1464]: time="2025-02-13T19:48:47.436224766Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:48:47.438360 containerd[1464]: time="2025-02-13T19:48:47.436237319Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:48:47.438360 containerd[1464]: time="2025-02-13T19:48:47.436250284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:48:47.439695 containerd[1464]: time="2025-02-13T19:48:47.436591393Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:48:47.439695 containerd[1464]: time="2025-02-13T19:48:47.436660633Z" level=info msg="Connect containerd service" Feb 13 19:48:47.439695 containerd[1464]: time="2025-02-13T19:48:47.436703323Z" level=info msg="using legacy CRI server" Feb 13 19:48:47.439695 containerd[1464]: time="2025-02-13T19:48:47.436728290Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:48:47.439695 containerd[1464]: time="2025-02-13T19:48:47.436849007Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:48:47.439695 containerd[1464]: time="2025-02-13T19:48:47.437541706Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:48:47.439695 
containerd[1464]: time="2025-02-13T19:48:47.437680576Z" level=info msg="Start subscribing containerd event" Feb 13 19:48:47.439695 containerd[1464]: time="2025-02-13T19:48:47.437727504Z" level=info msg="Start recovering state" Feb 13 19:48:47.439695 containerd[1464]: time="2025-02-13T19:48:47.437787296Z" level=info msg="Start event monitor" Feb 13 19:48:47.439695 containerd[1464]: time="2025-02-13T19:48:47.437801503Z" level=info msg="Start snapshots syncer" Feb 13 19:48:47.439695 containerd[1464]: time="2025-02-13T19:48:47.437810460Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:48:47.439695 containerd[1464]: time="2025-02-13T19:48:47.437817723Z" level=info msg="Start streaming server" Feb 13 19:48:47.439695 containerd[1464]: time="2025-02-13T19:48:47.438199830Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:48:47.439695 containerd[1464]: time="2025-02-13T19:48:47.438245495Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:48:47.439695 containerd[1464]: time="2025-02-13T19:48:47.439285265Z" level=info msg="containerd successfully booted in 0.043924s" Feb 13 19:48:47.438374 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:48:47.552754 tar[1462]: linux-amd64/LICENSE Feb 13 19:48:47.552881 tar[1462]: linux-amd64/README.md Feb 13 19:48:47.573638 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:48:48.196956 systemd-networkd[1399]: eth0: Gained IPv6LL Feb 13 19:48:48.201540 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:48:48.203971 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:48:48.219208 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:48:48.222568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:48:48.225468 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:48:48.253204 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:48:48.253537 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:48:48.255766 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:48:48.258796 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:48:48.860732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:48:48.862489 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:48:48.863770 systemd[1]: Startup finished in 674ms (kernel) + 5.118s (initrd) + 4.080s (userspace) = 9.873s. Feb 13 19:48:48.866243 (kubelet)[1549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:48:49.282563 kubelet[1549]: E0213 19:48:49.282411 1549 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:48:49.286660 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:48:49.286903 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 19:48:56.985602 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:48:56.986743 systemd[1]: Started sshd@0-10.0.0.12:22-10.0.0.1:48848.service - OpenSSH per-connection server daemon (10.0.0.1:48848). Feb 13 19:48:57.027619 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 48848 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:48:57.029425 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:48:57.038277 systemd-logind[1450]: New session 1 of user core. Feb 13 19:48:57.039506 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:48:57.051926 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:48:57.062267 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:48:57.064981 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:48:57.073528 (systemd)[1567]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:48:57.183285 systemd[1567]: Queued start job for default target default.target. Feb 13 19:48:57.192982 systemd[1567]: Created slice app.slice - User Application Slice. Feb 13 19:48:57.193007 systemd[1567]: Reached target paths.target - Paths. Feb 13 19:48:57.193021 systemd[1567]: Reached target timers.target - Timers. Feb 13 19:48:57.194529 systemd[1567]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:48:57.205064 systemd[1567]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:48:57.205187 systemd[1567]: Reached target sockets.target - Sockets. Feb 13 19:48:57.205206 systemd[1567]: Reached target basic.target - Basic System. Feb 13 19:48:57.205253 systemd[1567]: Reached target default.target - Main User Target. Feb 13 19:48:57.205286 systemd[1567]: Startup finished in 125ms. Feb 13 19:48:57.205680 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:48:57.207188 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:48:57.269230 systemd[1]: Started sshd@1-10.0.0.12:22-10.0.0.1:48862.service - OpenSSH per-connection server daemon (10.0.0.1:48862). Feb 13 19:48:57.306592 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 48862 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:48:57.308068 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:48:57.311763 systemd-logind[1450]: New session 2 of user core. Feb 13 19:48:57.324837 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:48:57.377680 sshd[1578]: pam_unix(sshd:session): session closed for user core Feb 13 19:48:57.394222 systemd[1]: sshd@1-10.0.0.12:22-10.0.0.1:48862.service: Deactivated successfully. Feb 13 19:48:57.395964 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:48:57.397485 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:48:57.398653 systemd[1]: Started sshd@2-10.0.0.12:22-10.0.0.1:48870.service - OpenSSH per-connection server daemon (10.0.0.1:48870). Feb 13 19:48:57.399376 systemd-logind[1450]: Removed session 2. 
Feb 13 19:48:57.459859 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 48870 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:48:57.461324 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:48:57.464763 systemd-logind[1450]: New session 3 of user core. Feb 13 19:48:57.474826 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:48:57.524030 sshd[1585]: pam_unix(sshd:session): session closed for user core Feb 13 19:48:57.534057 systemd[1]: sshd@2-10.0.0.12:22-10.0.0.1:48870.service: Deactivated successfully. Feb 13 19:48:57.535674 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:48:57.537254 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:48:57.545989 systemd[1]: Started sshd@3-10.0.0.12:22-10.0.0.1:48874.service - OpenSSH per-connection server daemon (10.0.0.1:48874). Feb 13 19:48:57.546743 systemd-logind[1450]: Removed session 3. Feb 13 19:48:57.580254 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 48874 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:48:57.581629 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:48:57.585246 systemd-logind[1450]: New session 4 of user core. Feb 13 19:48:57.594832 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:48:57.648189 sshd[1592]: pam_unix(sshd:session): session closed for user core Feb 13 19:48:57.666302 systemd[1]: sshd@3-10.0.0.12:22-10.0.0.1:48874.service: Deactivated successfully. Feb 13 19:48:57.668074 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:48:57.669728 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:48:57.670968 systemd[1]: Started sshd@4-10.0.0.12:22-10.0.0.1:48886.service - OpenSSH per-connection server daemon (10.0.0.1:48886). Feb 13 19:48:57.671580 systemd-logind[1450]: Removed session 4. Feb 13 19:48:57.708582 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 48886 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:48:57.710052 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:48:57.713348 systemd-logind[1450]: New session 5 of user core. Feb 13 19:48:57.724825 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:48:57.781725 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:48:57.782058 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:48:57.797573 sudo[1602]: pam_unix(sudo:session): session closed for user root Feb 13 19:48:57.799543 sshd[1599]: pam_unix(sshd:session): session closed for user core Feb 13 19:48:57.817425 systemd[1]: sshd@4-10.0.0.12:22-10.0.0.1:48886.service: Deactivated successfully. Feb 13 19:48:57.819117 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:48:57.820726 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:48:57.828919 systemd[1]: Started sshd@5-10.0.0.12:22-10.0.0.1:48894.service - OpenSSH per-connection server daemon (10.0.0.1:48894). Feb 13 19:48:57.829778 systemd-logind[1450]: Removed session 5. 
Feb 13 19:48:57.862930 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 48894 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:48:57.864446 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:48:57.867973 systemd-logind[1450]: New session 6 of user core. Feb 13 19:48:57.878828 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:48:57.931620 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:48:57.931962 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:48:57.935334 sudo[1611]: pam_unix(sudo:session): session closed for user root Feb 13 19:48:57.941698 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 19:48:57.942074 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:48:57.957921 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 19:48:57.959450 auditctl[1614]: No rules Feb 13 19:48:57.959853 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:48:57.960056 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 19:48:57.962625 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:48:57.992262 augenrules[1632]: No rules Feb 13 19:48:57.993978 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:48:57.995552 sudo[1610]: pam_unix(sudo:session): session closed for user root Feb 13 19:48:57.997120 sshd[1607]: pam_unix(sshd:session): session closed for user core Feb 13 19:48:58.009472 systemd[1]: sshd@5-10.0.0.12:22-10.0.0.1:48894.service: Deactivated successfully. Feb 13 19:48:58.011207 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:48:58.012784 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:48:58.024014 systemd[1]: Started sshd@6-10.0.0.12:22-10.0.0.1:48898.service - OpenSSH per-connection server daemon (10.0.0.1:48898). Feb 13 19:48:58.024813 systemd-logind[1450]: Removed session 6. Feb 13 19:48:58.057141 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 48898 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:48:58.058513 sshd[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:48:58.061973 systemd-logind[1450]: New session 7 of user core. Feb 13 19:48:58.071829 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:48:58.124474 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:48:58.124821 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:48:58.403923 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:48:58.404087 (dockerd)[1663]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:48:58.911916 dockerd[1663]: time="2025-02-13T19:48:58.911849068Z" level=info msg="Starting up" Feb 13 19:48:58.992672 systemd[1]: var-lib-docker-metacopy\x2dcheck259286573-merged.mount: Deactivated successfully. Feb 13 19:48:59.018153 dockerd[1663]: time="2025-02-13T19:48:59.017799983Z" level=info msg="Loading containers: start." 
Feb 13 19:48:59.124751 kernel: Initializing XFRM netlink socket Feb 13 19:48:59.201601 systemd-networkd[1399]: docker0: Link UP Feb 13 19:48:59.229628 dockerd[1663]: time="2025-02-13T19:48:59.229581139Z" level=info msg="Loading containers: done." Feb 13 19:48:59.243512 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1214691557-merged.mount: Deactivated successfully. Feb 13 19:48:59.246384 dockerd[1663]: time="2025-02-13T19:48:59.246348954Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:48:59.246452 dockerd[1663]: time="2025-02-13T19:48:59.246425217Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 19:48:59.246558 dockerd[1663]: time="2025-02-13T19:48:59.246534653Z" level=info msg="Daemon has completed initialization" Feb 13 19:48:59.284442 dockerd[1663]: time="2025-02-13T19:48:59.284363936Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:48:59.285459 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:48:59.537101 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:48:59.547021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:48:59.700300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:48:59.704456 (kubelet)[1820]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:48:59.743693 kubelet[1820]: E0213 19:48:59.743643 1820 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:48:59.750276 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:48:59.750475 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:48:59.956488 containerd[1464]: time="2025-02-13T19:48:59.956357925Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 19:49:00.822196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1025369044.mount: Deactivated successfully. 
Feb 13 19:49:01.766260 containerd[1464]: time="2025-02-13T19:49:01.766183924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:01.766849 containerd[1464]: time="2025-02-13T19:49:01.766801843Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678214" Feb 13 19:49:01.767957 containerd[1464]: time="2025-02-13T19:49:01.767924769Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:01.770533 containerd[1464]: time="2025-02-13T19:49:01.770475633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:01.771785 containerd[1464]: time="2025-02-13T19:49:01.771741407Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 1.81533947s" Feb 13 19:49:01.771785 containerd[1464]: time="2025-02-13T19:49:01.771779358Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\"" Feb 13 19:49:01.792850 containerd[1464]: time="2025-02-13T19:49:01.792799688Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 19:49:04.065684 containerd[1464]: time="2025-02-13T19:49:04.065615710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:04.066380 containerd[1464]: time="2025-02-13T19:49:04.066319470Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611545" Feb 13 19:49:04.067593 containerd[1464]: time="2025-02-13T19:49:04.067551931Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:04.070429 containerd[1464]: time="2025-02-13T19:49:04.070384934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:04.071500 containerd[1464]: time="2025-02-13T19:49:04.071464799Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 2.27862755s" Feb 13 19:49:04.071500 containerd[1464]: time="2025-02-13T19:49:04.071495707Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\"" Feb 13 
19:49:04.096966 containerd[1464]: time="2025-02-13T19:49:04.096914977Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 19:49:05.383265 containerd[1464]: time="2025-02-13T19:49:05.383197536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:05.384239 containerd[1464]: time="2025-02-13T19:49:05.384198363Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782130" Feb 13 19:49:05.385733 containerd[1464]: time="2025-02-13T19:49:05.385690070Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:05.388746 containerd[1464]: time="2025-02-13T19:49:05.388723680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:05.389615 containerd[1464]: time="2025-02-13T19:49:05.389569065Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 1.292627067s" Feb 13 19:49:05.389653 containerd[1464]: time="2025-02-13T19:49:05.389616614Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\"" Feb 13 19:49:05.411474 containerd[1464]: time="2025-02-13T19:49:05.411435963Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:49:06.778995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2543527959.mount: Deactivated successfully. 
Feb 13 19:49:07.542985 containerd[1464]: time="2025-02-13T19:49:07.542899358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:07.544049 containerd[1464]: time="2025-02-13T19:49:07.544014539Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057858" Feb 13 19:49:07.576555 containerd[1464]: time="2025-02-13T19:49:07.576502716Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:07.622805 containerd[1464]: time="2025-02-13T19:49:07.622708739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:07.623365 containerd[1464]: time="2025-02-13T19:49:07.623309696Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 2.211836453s" Feb 13 19:49:07.623365 containerd[1464]: time="2025-02-13T19:49:07.623356233Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 19:49:07.646104 containerd[1464]: time="2025-02-13T19:49:07.646071282Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:49:08.183400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2103546333.mount: Deactivated successfully. 
Feb 13 19:49:09.357132 containerd[1464]: time="2025-02-13T19:49:09.357052693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:09.358699 containerd[1464]: time="2025-02-13T19:49:09.358651171Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 19:49:09.360196 containerd[1464]: time="2025-02-13T19:49:09.360160601Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:09.363104 containerd[1464]: time="2025-02-13T19:49:09.363057604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:09.364122 containerd[1464]: time="2025-02-13T19:49:09.364089580Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.717986508s" Feb 13 19:49:09.364177 containerd[1464]: time="2025-02-13T19:49:09.364122672Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 19:49:09.385866 containerd[1464]: time="2025-02-13T19:49:09.385814161Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 19:49:09.866521 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:49:09.873986 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:09.875356 containerd[1464]: time="2025-02-13T19:49:09.875317184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:09.875525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount77665330.mount: Deactivated successfully. 
Feb 13 19:49:09.876651 containerd[1464]: time="2025-02-13T19:49:09.876607985Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Feb 13 19:49:09.878113 containerd[1464]: time="2025-02-13T19:49:09.878072090Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:09.880312 containerd[1464]: time="2025-02-13T19:49:09.880260685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:09.881140 containerd[1464]: time="2025-02-13T19:49:09.881095821Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 495.239571ms" Feb 13 19:49:09.881140 containerd[1464]: time="2025-02-13T19:49:09.881136778Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 19:49:09.904832 containerd[1464]: time="2025-02-13T19:49:09.904781249Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 19:49:10.030476 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:10.035819 (kubelet)[2000]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:49:10.122898 kubelet[2000]: E0213 19:49:10.122759 2000 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:49:10.126990 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:49:10.127200 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:49:10.616748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1318870040.mount: Deactivated successfully. 
Feb 13 19:49:12.194493 containerd[1464]: time="2025-02-13T19:49:12.194443273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:12.195329 containerd[1464]: time="2025-02-13T19:49:12.195294209Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Feb 13 19:49:12.196749 containerd[1464]: time="2025-02-13T19:49:12.196664729Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:12.199405 containerd[1464]: time="2025-02-13T19:49:12.199375543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:12.200497 containerd[1464]: time="2025-02-13T19:49:12.200469725Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.29564837s" Feb 13 19:49:12.200526 containerd[1464]: time="2025-02-13T19:49:12.200500322Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Feb 13 19:49:14.283447 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:14.294927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:14.312132 systemd[1]: Reloading requested from client PID 2135 ('systemctl') (unit session-7.scope)... Feb 13 19:49:14.312149 systemd[1]: Reloading... Feb 13 19:49:14.383738 zram_generator::config[2177]: No configuration found. Feb 13 19:49:14.557821 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:49:14.633662 systemd[1]: Reloading finished in 321 ms. Feb 13 19:49:14.685071 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:14.687948 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:49:14.688182 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:14.689653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:14.830498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:14.835030 (kubelet)[2224]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:49:14.878036 kubelet[2224]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:49:14.878036 kubelet[2224]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Feb 13 19:49:14.878036 kubelet[2224]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:49:14.878420 kubelet[2224]: I0213 19:49:14.878065 2224 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:49:15.145926 kubelet[2224]: I0213 19:49:15.145805 2224 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:49:15.145926 kubelet[2224]: I0213 19:49:15.145835 2224 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:49:15.146055 kubelet[2224]: I0213 19:49:15.146042 2224 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:49:15.159671 kubelet[2224]: I0213 19:49:15.159618 2224 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:49:15.160546 kubelet[2224]: E0213 19:49:15.160497 2224 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.12:6443: connect: connection refused Feb 13 19:49:15.171669 kubelet[2224]: I0213 19:49:15.171641 2224 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:49:15.173508 kubelet[2224]: I0213 19:49:15.173465 2224 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:49:15.173695 kubelet[2224]: I0213 19:49:15.173502 2224 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:49:15.174123 kubelet[2224]: I0213 19:49:15.174100 2224 topology_manager.go:138] "Creating topology manager with 
none policy" Feb 13 19:49:15.174123 kubelet[2224]: I0213 19:49:15.174117 2224 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:49:15.174276 kubelet[2224]: I0213 19:49:15.174257 2224 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:49:15.174879 kubelet[2224]: I0213 19:49:15.174857 2224 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:49:15.174879 kubelet[2224]: I0213 19:49:15.174874 2224 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:49:15.174932 kubelet[2224]: I0213 19:49:15.174897 2224 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:49:15.174932 kubelet[2224]: I0213 19:49:15.174917 2224 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:49:15.179018 kubelet[2224]: W0213 19:49:15.178974 2224 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 13 19:49:15.179109 kubelet[2224]: E0213 19:49:15.179077 2224 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 13 19:49:15.179733 kubelet[2224]: I0213 19:49:15.179490 2224 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:49:15.180609 kubelet[2224]: W0213 19:49:15.180548 2224 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 13 19:49:15.180609 kubelet[2224]: E0213 19:49:15.180614 2224 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 13 19:49:15.181087 kubelet[2224]: I0213 19:49:15.181063 2224 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:49:15.181136 kubelet[2224]: W0213 19:49:15.181121 2224 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 19:49:15.181864 kubelet[2224]: I0213 19:49:15.181728 2224 server.go:1264] "Started kubelet" Feb 13 19:49:15.183330 kubelet[2224]: I0213 19:49:15.183008 2224 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:49:15.185356 kubelet[2224]: I0213 19:49:15.185310 2224 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:49:15.186231 kubelet[2224]: I0213 19:49:15.186201 2224 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:49:15.186945 kubelet[2224]: I0213 19:49:15.186881 2224 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:49:15.186980 kubelet[2224]: I0213 19:49:15.186968 2224 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:49:15.187177 kubelet[2224]: I0213 19:49:15.187042 2224 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:49:15.187564 kubelet[2224]: I0213 19:49:15.187518 2224 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:49:15.187859 kubelet[2224]: I0213 19:49:15.187833 2224 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:49:15.187927 kubelet[2224]: W0213 19:49:15.187528 2224 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 13 19:49:15.187927 kubelet[2224]: E0213 19:49:15.187901 2224 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 13 19:49:15.189292 kubelet[2224]: E0213 19:49:15.188355 2224 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="200ms" Feb 13 19:49:15.189292 kubelet[2224]: E0213 19:49:15.188650 2224 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.12:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.12:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dc58d4ddee78 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:49:15.181690488 +0000 UTC m=+0.342821223,LastTimestamp:2025-02-13 19:49:15.181690488 +0000 UTC m=+0.342821223,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:49:15.189292 kubelet[2224]: E0213 19:49:15.188873 2224 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:49:15.189292 kubelet[2224]: I0213 19:49:15.188921 2224 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:49:15.189292 kubelet[2224]: I0213 19:49:15.188984 2224 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:49:15.190120 kubelet[2224]: I0213 19:49:15.190105 2224 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:49:15.200704 kubelet[2224]: I0213 19:49:15.200640 2224 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:49:15.201907 kubelet[2224]: I0213 19:49:15.201884 2224 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:49:15.201952 kubelet[2224]: I0213 19:49:15.201919 2224 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:49:15.201952 kubelet[2224]: I0213 19:49:15.201938 2224 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:49:15.202002 kubelet[2224]: E0213 19:49:15.201980 2224 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:49:15.202862 kubelet[2224]: W0213 19:49:15.202656 2224 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 13 19:49:15.202862 kubelet[2224]: E0213 19:49:15.202703 2224 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 13 19:49:15.207325 kubelet[2224]: I0213 19:49:15.207308 2224 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:49:15.207325 kubelet[2224]: I0213 19:49:15.207323 2224 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:49:15.207391 kubelet[2224]: I0213 19:49:15.207349 2224 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:49:15.288782 kubelet[2224]: I0213 19:49:15.288735 2224 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:49:15.289034 kubelet[2224]: E0213 19:49:15.289006 2224 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Feb 13 19:49:15.302247 kubelet[2224]: E0213 19:49:15.302217 2224 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:49:15.388680 kubelet[2224]: E0213 19:49:15.388643 2224 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="400ms" Feb 13 19:49:15.491194 kubelet[2224]: I0213 19:49:15.491078 2224 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:49:15.491853 kubelet[2224]: E0213 19:49:15.491818 2224 kubelet_node_status.go:96] "Unable to register node with 
API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Feb 13 19:49:15.502925 kubelet[2224]: E0213 19:49:15.502891 2224 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:49:15.789757 kubelet[2224]: E0213 19:49:15.789611 2224 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="800ms" Feb 13 19:49:15.879857 kubelet[2224]: I0213 19:49:15.879809 2224 policy_none.go:49] "None policy: Start" Feb 13 19:49:15.880734 kubelet[2224]: I0213 19:49:15.880682 2224 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:49:15.880786 kubelet[2224]: I0213 19:49:15.880752 2224 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:49:15.891105 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:49:15.893569 kubelet[2224]: I0213 19:49:15.893548 2224 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:49:15.893898 kubelet[2224]: E0213 19:49:15.893872 2224 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Feb 13 19:49:15.903285 kubelet[2224]: E0213 19:49:15.903264 2224 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:49:15.904405 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:49:15.907652 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 19:49:15.915752 kubelet[2224]: I0213 19:49:15.915626 2224 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:49:15.915890 kubelet[2224]: I0213 19:49:15.915850 2224 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:49:15.915974 kubelet[2224]: I0213 19:49:15.915956 2224 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:49:15.917027 kubelet[2224]: E0213 19:49:15.916996 2224 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:49:16.039858 kubelet[2224]: W0213 19:49:16.039673 2224 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 13 19:49:16.039858 kubelet[2224]: E0213 19:49:16.039785 2224 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 13 19:49:16.185425 kubelet[2224]: W0213 19:49:16.185367 2224 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 13 19:49:16.185425 kubelet[2224]: E0213 19:49:16.185421 2224 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 13 19:49:16.435863 kubelet[2224]: W0213 19:49:16.435739 2224 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 13 19:49:16.435863 kubelet[2224]: E0213 19:49:16.435791 2224 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 13 19:49:16.591089 kubelet[2224]: E0213 19:49:16.591005 2224 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="1.6s" Feb 13 19:49:16.645510 kubelet[2224]: W0213 19:49:16.645475 2224 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 13 19:49:16.645510 kubelet[2224]: E0213 19:49:16.645510 2224 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 13 19:49:16.695538 
kubelet[2224]: I0213 19:49:16.695411 2224 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:49:16.695832 kubelet[2224]: E0213 19:49:16.695801 2224 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Feb 13 19:49:16.703983 kubelet[2224]: I0213 19:49:16.703936 2224 topology_manager.go:215] "Topology Admit Handler" podUID="2feeb9d46c9e6fec647a15dcaa701035" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:49:16.704884 kubelet[2224]: I0213 19:49:16.704843 2224 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:49:16.705448 kubelet[2224]: I0213 19:49:16.705431 2224 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:49:16.712376 systemd[1]: Created slice kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice - libcontainer container kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice. Feb 13 19:49:16.735485 systemd[1]: Created slice kubepods-burstable-pod2feeb9d46c9e6fec647a15dcaa701035.slice - libcontainer container kubepods-burstable-pod2feeb9d46c9e6fec647a15dcaa701035.slice. Feb 13 19:49:16.739054 systemd[1]: Created slice kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice - libcontainer container kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice. Feb 13 19:49:16.795896 kubelet[2224]: I0213 19:49:16.795842 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:49:16.795896 kubelet[2224]: I0213 19:49:16.795875 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:49:16.795896 kubelet[2224]: I0213 19:49:16.795893 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:49:16.796109 kubelet[2224]: I0213 19:49:16.795934 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:49:16.796109 kubelet[2224]: I0213 19:49:16.795954 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2feeb9d46c9e6fec647a15dcaa701035-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"2feeb9d46c9e6fec647a15dcaa701035\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:49:16.796109 kubelet[2224]: I0213 19:49:16.795970 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:49:16.796109 kubelet[2224]: I0213 19:49:16.795988 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:49:16.796109 kubelet[2224]: I0213 19:49:16.796072 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2feeb9d46c9e6fec647a15dcaa701035-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2feeb9d46c9e6fec647a15dcaa701035\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:49:16.796269 kubelet[2224]: I0213 19:49:16.796122 2224 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2feeb9d46c9e6fec647a15dcaa701035-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2feeb9d46c9e6fec647a15dcaa701035\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:49:17.033289 kubelet[2224]: E0213 19:49:17.033152 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:17.033800 containerd[1464]: time="2025-02-13T19:49:17.033775538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:17.038275 kubelet[2224]: E0213 19:49:17.038246 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:17.038815 containerd[1464]: time="2025-02-13T19:49:17.038760066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2feeb9d46c9e6fec647a15dcaa701035,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:17.040960 kubelet[2224]: E0213 19:49:17.040921 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:17.041294 containerd[1464]: time="2025-02-13T19:49:17.041251108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:17.309008 kubelet[2224]: E0213 19:49:17.308938 2224 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.12:6443: connect: connection refused Feb 13 19:49:17.897661 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3738942667.mount: Deactivated successfully. Feb 13 19:49:17.905109 containerd[1464]: time="2025-02-13T19:49:17.904946695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:17.906958 containerd[1464]: time="2025-02-13T19:49:17.906904717Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:49:17.907779 containerd[1464]: time="2025-02-13T19:49:17.907748740Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:17.908783 containerd[1464]: time="2025-02-13T19:49:17.908705394Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:17.909799 containerd[1464]: time="2025-02-13T19:49:17.909657971Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:17.910843 containerd[1464]: time="2025-02-13T19:49:17.910755078Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:49:17.911600 containerd[1464]: time="2025-02-13T19:49:17.911561851Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 19:49:17.913215 containerd[1464]: time="2025-02-13T19:49:17.913173644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:17.913994 containerd[1464]: time="2025-02-13T19:49:17.913959708Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 880.116703ms" Feb 13 19:49:17.916578 containerd[1464]: time="2025-02-13T19:49:17.916547301Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 875.220661ms" Feb 13 19:49:17.918869 containerd[1464]: time="2025-02-13T19:49:17.918841553Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 880.003251ms" Feb 13 19:49:17.956399 kubelet[2224]: W0213 19:49:17.956339 2224 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
10.0.0.12:6443: connect: connection refused Feb 13 19:49:17.956399 kubelet[2224]: E0213 19:49:17.956392 2224 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 13 19:49:18.060016 containerd[1464]: time="2025-02-13T19:49:18.059471905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:18.060016 containerd[1464]: time="2025-02-13T19:49:18.059511520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:18.060016 containerd[1464]: time="2025-02-13T19:49:18.059540153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:18.060016 containerd[1464]: time="2025-02-13T19:49:18.059627848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:18.060016 containerd[1464]: time="2025-02-13T19:49:18.059365566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:18.060016 containerd[1464]: time="2025-02-13T19:49:18.059446618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:18.060016 containerd[1464]: time="2025-02-13T19:49:18.059472386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:18.060016 containerd[1464]: time="2025-02-13T19:49:18.059581250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:18.060601 containerd[1464]: time="2025-02-13T19:49:18.060384126Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:18.060601 containerd[1464]: time="2025-02-13T19:49:18.060454498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:18.060601 containerd[1464]: time="2025-02-13T19:49:18.060474626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:18.060947 containerd[1464]: time="2025-02-13T19:49:18.060649724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:18.085941 systemd[1]: Started cri-containerd-38b87f59e69180d60254783ed5c95a60b14b3d22244ce074ea27e8db9c6c2785.scope - libcontainer container 38b87f59e69180d60254783ed5c95a60b14b3d22244ce074ea27e8db9c6c2785. 
Feb 13 19:49:18.090289 kubelet[2224]: W0213 19:49:18.090237 2224 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 13 19:49:18.090289 kubelet[2224]: E0213 19:49:18.090275 2224 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Feb 13 19:49:18.091057 systemd[1]: Started cri-containerd-3a6a8df8a9fa91997ccde52c534252e344798e1dbce4af231f8dabcabb64bba4.scope - libcontainer container 3a6a8df8a9fa91997ccde52c534252e344798e1dbce4af231f8dabcabb64bba4. Feb 13 19:49:18.093401 systemd[1]: Started cri-containerd-f90c4bfaee024a02220a34540b97fca9000399b1efd070eeaa894ef86491898d.scope - libcontainer container f90c4bfaee024a02220a34540b97fca9000399b1efd070eeaa894ef86491898d. Feb 13 19:49:18.124371 containerd[1464]: time="2025-02-13T19:49:18.124270586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"38b87f59e69180d60254783ed5c95a60b14b3d22244ce074ea27e8db9c6c2785\"" Feb 13 19:49:18.126449 kubelet[2224]: E0213 19:49:18.126400 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:18.130362 containerd[1464]: time="2025-02-13T19:49:18.130335951Z" level=info msg="CreateContainer within sandbox \"38b87f59e69180d60254783ed5c95a60b14b3d22244ce074ea27e8db9c6c2785\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:49:18.132479 containerd[1464]: time="2025-02-13T19:49:18.132446889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2feeb9d46c9e6fec647a15dcaa701035,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a6a8df8a9fa91997ccde52c534252e344798e1dbce4af231f8dabcabb64bba4\"" Feb 13 19:49:18.133258 kubelet[2224]: E0213 19:49:18.133206 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:18.135903 containerd[1464]: time="2025-02-13T19:49:18.135841315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"f90c4bfaee024a02220a34540b97fca9000399b1efd070eeaa894ef86491898d\"" Feb 13 19:49:18.136537 kubelet[2224]: E0213 19:49:18.136514 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:18.137105 containerd[1464]: time="2025-02-13T19:49:18.137074287Z" level=info msg="CreateContainer within sandbox \"3a6a8df8a9fa91997ccde52c534252e344798e1dbce4af231f8dabcabb64bba4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:49:18.139930 containerd[1464]: time="2025-02-13T19:49:18.139880851Z" level=info msg="CreateContainer within sandbox \"f90c4bfaee024a02220a34540b97fca9000399b1efd070eeaa894ef86491898d\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:49:18.154010 containerd[1464]: time="2025-02-13T19:49:18.153916673Z" level=info msg="CreateContainer within sandbox \"38b87f59e69180d60254783ed5c95a60b14b3d22244ce074ea27e8db9c6c2785\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fed64ca4ce75a196b1cfc581c2bac884a852f66fe6709365da3c906c19ee7b24\"" Feb 13 19:49:18.155119 containerd[1464]: time="2025-02-13T19:49:18.155083321Z" level=info msg="StartContainer for \"fed64ca4ce75a196b1cfc581c2bac884a852f66fe6709365da3c906c19ee7b24\"" Feb 13 19:49:18.163155 containerd[1464]: time="2025-02-13T19:49:18.163115314Z" level=info msg="CreateContainer within sandbox \"3a6a8df8a9fa91997ccde52c534252e344798e1dbce4af231f8dabcabb64bba4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a62d1bf83c2c1be929ff2d849c5f8adf7c3ade6a00032397a3291a243f18fd08\"" Feb 13 19:49:18.163751 containerd[1464]: time="2025-02-13T19:49:18.163728995Z" level=info msg="StartContainer for \"a62d1bf83c2c1be929ff2d849c5f8adf7c3ade6a00032397a3291a243f18fd08\"" Feb 13 19:49:18.164852 containerd[1464]: time="2025-02-13T19:49:18.164794994Z" level=info msg="CreateContainer within sandbox \"f90c4bfaee024a02220a34540b97fca9000399b1efd070eeaa894ef86491898d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c412d854a687e5888d2f1aa41e16ac245593c5047a39b03107464a530633f794\"" Feb 13 19:49:18.165730 containerd[1464]: time="2025-02-13T19:49:18.165358761Z" level=info msg="StartContainer for \"c412d854a687e5888d2f1aa41e16ac245593c5047a39b03107464a530633f794\"" Feb 13 19:49:18.182896 systemd[1]: Started cri-containerd-fed64ca4ce75a196b1cfc581c2bac884a852f66fe6709365da3c906c19ee7b24.scope - libcontainer container fed64ca4ce75a196b1cfc581c2bac884a852f66fe6709365da3c906c19ee7b24. Feb 13 19:49:18.192457 kubelet[2224]: E0213 19:49:18.192407 2224 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="3.2s" Feb 13 19:49:18.193855 systemd[1]: Started cri-containerd-a62d1bf83c2c1be929ff2d849c5f8adf7c3ade6a00032397a3291a243f18fd08.scope - libcontainer container a62d1bf83c2c1be929ff2d849c5f8adf7c3ade6a00032397a3291a243f18fd08. Feb 13 19:49:18.197662 systemd[1]: Started cri-containerd-c412d854a687e5888d2f1aa41e16ac245593c5047a39b03107464a530633f794.scope - libcontainer container c412d854a687e5888d2f1aa41e16ac245593c5047a39b03107464a530633f794. 
Feb 13 19:49:18.228916 containerd[1464]: time="2025-02-13T19:49:18.228484604Z" level=info msg="StartContainer for \"fed64ca4ce75a196b1cfc581c2bac884a852f66fe6709365da3c906c19ee7b24\" returns successfully" Feb 13 19:49:18.239395 containerd[1464]: time="2025-02-13T19:49:18.239350582Z" level=info msg="StartContainer for \"a62d1bf83c2c1be929ff2d849c5f8adf7c3ade6a00032397a3291a243f18fd08\" returns successfully" Feb 13 19:49:18.244233 containerd[1464]: time="2025-02-13T19:49:18.244187162Z" level=info msg="StartContainer for \"c412d854a687e5888d2f1aa41e16ac245593c5047a39b03107464a530633f794\" returns successfully" Feb 13 19:49:18.298085 kubelet[2224]: I0213 19:49:18.298050 2224 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:49:19.217553 kubelet[2224]: E0213 19:49:19.217520 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:19.219573 kubelet[2224]: E0213 19:49:19.219437 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:19.220461 kubelet[2224]: E0213 19:49:19.220443 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:19.530513 kubelet[2224]: I0213 19:49:19.530389 2224 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:49:19.565129 kubelet[2224]: E0213 19:49:19.565080 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:19.582399 kubelet[2224]: E0213 19:49:19.581554 2224 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823dc58d4ddee78 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:49:15.181690488 +0000 UTC m=+0.342821223,LastTimestamp:2025-02-13 19:49:15.181690488 +0000 UTC m=+0.342821223,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:49:19.665527 kubelet[2224]: E0213 19:49:19.665476 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:19.712208 kubelet[2224]: E0213 19:49:19.712031 2224 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1823dc58d54b6859 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:49:15.188865113 +0000 UTC m=+0.349995858,LastTimestamp:2025-02-13 19:49:15.188865113 +0000 UTC m=+0.349995858,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 
13 19:49:19.766569 kubelet[2224]: E0213 19:49:19.766505 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:19.867419 kubelet[2224]: E0213 19:49:19.867296 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:19.968163 kubelet[2224]: E0213 19:49:19.968105 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:20.068975 kubelet[2224]: E0213 19:49:20.068919 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:20.169814 kubelet[2224]: E0213 19:49:20.169658 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:20.223115 kubelet[2224]: E0213 19:49:20.223076 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:20.223473 kubelet[2224]: E0213 19:49:20.223156 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:20.223780 kubelet[2224]: E0213 19:49:20.223709 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:20.270355 kubelet[2224]: E0213 19:49:20.270290 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:20.371011 kubelet[2224]: E0213 19:49:20.370962 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:20.471794 kubelet[2224]: E0213 19:49:20.471682 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:20.572249 kubelet[2224]: E0213 19:49:20.572204 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:20.673005 kubelet[2224]: E0213 19:49:20.672948 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:20.773751 kubelet[2224]: E0213 19:49:20.773617 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:20.874211 kubelet[2224]: E0213 19:49:20.874171 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:20.974676 kubelet[2224]: E0213 19:49:20.974618 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:21.075412 kubelet[2224]: E0213 19:49:21.075303 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:21.176022 kubelet[2224]: E0213 19:49:21.175974 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:21.224178 kubelet[2224]: E0213 19:49:21.224134 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 
19:49:21.224572 kubelet[2224]: E0213 19:49:21.224432 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:21.277077 kubelet[2224]: E0213 19:49:21.277031 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:21.377748 kubelet[2224]: E0213 19:49:21.377603 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:21.478430 kubelet[2224]: E0213 19:49:21.478359 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:21.579107 kubelet[2224]: E0213 19:49:21.579045 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:21.680237 kubelet[2224]: E0213 19:49:21.680078 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:21.766777 systemd[1]: Reloading requested from client PID 2505 ('systemctl') (unit session-7.scope)... Feb 13 19:49:21.766795 systemd[1]: Reloading... Feb 13 19:49:21.780565 kubelet[2224]: E0213 19:49:21.780537 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:21.847759 zram_generator::config[2547]: No configuration found. Feb 13 19:49:21.881033 kubelet[2224]: E0213 19:49:21.880990 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:21.952594 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:49:21.981817 kubelet[2224]: E0213 19:49:21.981773 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:22.042307 systemd[1]: Reloading finished in 275 ms. Feb 13 19:49:22.082437 kubelet[2224]: E0213 19:49:22.082398 2224 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:49:22.085431 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:22.107075 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:49:22.107312 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:22.117936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:22.252857 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:22.257211 (kubelet)[2589]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:49:22.295738 kubelet[2589]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:49:22.295738 kubelet[2589]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Feb 13 19:49:22.295738 kubelet[2589]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:49:22.296112 kubelet[2589]: I0213 19:49:22.295777 2589 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:49:22.300069 kubelet[2589]: I0213 19:49:22.300043 2589 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:49:22.300069 kubelet[2589]: I0213 19:49:22.300061 2589 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:49:22.300202 kubelet[2589]: I0213 19:49:22.300188 2589 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:49:22.301214 kubelet[2589]: I0213 19:49:22.301193 2589 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:49:22.302265 kubelet[2589]: I0213 19:49:22.302215 2589 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:49:22.310900 kubelet[2589]: I0213 19:49:22.310876 2589 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:49:22.311147 kubelet[2589]: I0213 19:49:22.311110 2589 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:49:22.311312 kubelet[2589]: I0213 19:49:22.311141 2589 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:49:22.311390 kubelet[2589]: I0213 19:49:22.311327 2589 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:49:22.311390 kubelet[2589]: I0213 19:49:22.311337 2589 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:49:22.311390 kubelet[2589]: I0213 19:49:22.311382 2589 
state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:49:22.311493 kubelet[2589]: I0213 19:49:22.311480 2589 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:49:22.311522 kubelet[2589]: I0213 19:49:22.311509 2589 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:49:22.311602 kubelet[2589]: I0213 19:49:22.311531 2589 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:49:22.311602 kubelet[2589]: I0213 19:49:22.311547 2589 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:49:22.312060 kubelet[2589]: I0213 19:49:22.312031 2589 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:49:22.312238 kubelet[2589]: I0213 19:49:22.312196 2589 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:49:22.312642 kubelet[2589]: I0213 19:49:22.312617 2589 server.go:1264] "Started kubelet" Feb 13 19:49:22.312887 kubelet[2589]: I0213 19:49:22.312839 2589 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:49:22.313024 kubelet[2589]: I0213 19:49:22.312976 2589 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:49:22.313251 kubelet[2589]: I0213 19:49:22.313224 2589 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:49:22.313738 kubelet[2589]: I0213 19:49:22.313700 2589 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:49:22.314626 kubelet[2589]: I0213 19:49:22.314600 2589 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:49:22.314802 kubelet[2589]: I0213 19:49:22.314775 2589 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:49:22.316420 kubelet[2589]: I0213 19:49:22.315119 2589 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:49:22.316420 kubelet[2589]: I0213 19:49:22.315315 2589 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:49:22.323637 kubelet[2589]: I0213 19:49:22.323082 2589 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:49:22.323637 kubelet[2589]: I0213 19:49:22.323258 2589 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:49:22.325841 kubelet[2589]: E0213 19:49:22.325811 2589 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:49:22.328876 kubelet[2589]: I0213 19:49:22.328022 2589 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:49:22.337011 kubelet[2589]: I0213 19:49:22.336966 2589 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:49:22.338492 kubelet[2589]: I0213 19:49:22.338469 2589 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:49:22.338675 kubelet[2589]: I0213 19:49:22.338664 2589 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:49:22.338771 kubelet[2589]: I0213 19:49:22.338761 2589 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:49:22.338876 kubelet[2589]: E0213 19:49:22.338844 2589 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:49:22.358061 kubelet[2589]: I0213 19:49:22.358036 2589 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:49:22.358061 kubelet[2589]: I0213 19:49:22.358053 2589 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:49:22.358176 kubelet[2589]: I0213 19:49:22.358071 2589 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:49:22.358225 kubelet[2589]: I0213 19:49:22.358199 2589 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:49:22.358225 kubelet[2589]: I0213 19:49:22.358213 2589 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:49:22.358275 kubelet[2589]: I0213 19:49:22.358232 2589 policy_none.go:49] "None policy: Start" Feb 13 19:49:22.358765 kubelet[2589]: I0213 19:49:22.358748 2589 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:49:22.358765 kubelet[2589]: I0213 19:49:22.358767 2589 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:49:22.358890 kubelet[2589]: I0213 19:49:22.358875 2589 state_mem.go:75] "Updated machine memory state" Feb 13 19:49:22.363006 kubelet[2589]: I0213 19:49:22.362974 2589 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:49:22.363233 kubelet[2589]: I0213 19:49:22.363177 2589 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:49:22.363832 kubelet[2589]: I0213 19:49:22.363459 2589 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:49:22.419357 kubelet[2589]: I0213 19:49:22.419317 2589 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:49:22.426018 kubelet[2589]: I0213 19:49:22.425997 2589 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 19:49:22.426064 kubelet[2589]: I0213 19:49:22.426054 2589 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:49:22.439126 kubelet[2589]: I0213 19:49:22.439075 2589 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:49:22.439205 kubelet[2589]: I0213 19:49:22.439158 2589 topology_manager.go:215] "Topology Admit Handler" podUID="2feeb9d46c9e6fec647a15dcaa701035" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:49:22.439205 kubelet[2589]: I0213 19:49:22.439202 2589 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:49:22.617026 kubelet[2589]: I0213 19:49:22.616911 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2feeb9d46c9e6fec647a15dcaa701035-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2feeb9d46c9e6fec647a15dcaa701035\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:49:22.617026 
kubelet[2589]: I0213 19:49:22.616942 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:49:22.617155 kubelet[2589]: I0213 19:49:22.617016 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:49:22.617155 kubelet[2589]: I0213 19:49:22.617060 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:49:22.617155 kubelet[2589]: I0213 19:49:22.617085 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2feeb9d46c9e6fec647a15dcaa701035-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2feeb9d46c9e6fec647a15dcaa701035\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:49:22.617155 kubelet[2589]: I0213 19:49:22.617104 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2feeb9d46c9e6fec647a15dcaa701035-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2feeb9d46c9e6fec647a15dcaa701035\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:49:22.617155 kubelet[2589]: I0213 19:49:22.617118 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:49:22.617477 kubelet[2589]: I0213 19:49:22.617161 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:49:22.617477 kubelet[2589]: I0213 19:49:22.617202 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:49:22.745861 kubelet[2589]: E0213 19:49:22.745832 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:22.746240 kubelet[2589]: E0213 19:49:22.746208 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:22.746332 kubelet[2589]: E0213 19:49:22.746306 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:23.312918 kubelet[2589]: I0213 19:49:23.312863 2589 apiserver.go:52] "Watching apiserver" Feb 13 19:49:23.315851 kubelet[2589]: I0213 19:49:23.315814 2589 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:49:23.348154 kubelet[2589]: E0213 19:49:23.347812 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:23.348154 kubelet[2589]: E0213 19:49:23.348029 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:23.465884 kubelet[2589]: E0213 19:49:23.465503 2589 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 19:49:23.465884 kubelet[2589]: E0213 19:49:23.465818 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:23.492920 kubelet[2589]: I0213 19:49:23.492830 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.492807964 podStartE2EDuration="1.492807964s" podCreationTimestamp="2025-02-13 19:49:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:49:23.483122518 +0000 UTC m=+1.222132577" watchObservedRunningTime="2025-02-13 19:49:23.492807964 +0000 UTC m=+1.231818013" Feb 13 19:49:23.493067 kubelet[2589]: I0213 19:49:23.493002 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.492994491 podStartE2EDuration="1.492994491s" podCreationTimestamp="2025-02-13 19:49:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:49:23.492664099 +0000 UTC m=+1.231674138" watchObservedRunningTime="2025-02-13 19:49:23.492994491 +0000 UTC m=+1.232004540" Feb 13 19:49:23.506431 kubelet[2589]: I0213 19:49:23.506375 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.506357121 podStartE2EDuration="1.506357121s" podCreationTimestamp="2025-02-13 19:49:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:49:23.499394643 +0000 UTC m=+1.238404692" watchObservedRunningTime="2025-02-13 19:49:23.506357121 +0000 UTC m=+1.245367170" Feb 13 19:49:24.349122 kubelet[2589]: E0213 19:49:24.349088 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:24.349536 kubelet[2589]: E0213 19:49:24.349455 2589 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:24.349671 kubelet[2589]: E0213 19:49:24.349648 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:26.703950 sudo[1643]: pam_unix(sudo:session): session closed for user root Feb 13 19:49:26.705600 sshd[1640]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:26.709887 systemd[1]: sshd@6-10.0.0.12:22-10.0.0.1:48898.service: Deactivated successfully. Feb 13 19:49:26.711600 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:49:26.711804 systemd[1]: session-7.scope: Consumed 4.141s CPU time, 193.3M memory peak, 0B memory swap peak. Feb 13 19:49:26.712314 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:49:26.713361 systemd-logind[1450]: Removed session 7. Feb 13 19:49:28.864806 kubelet[2589]: E0213 19:49:28.864773 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:29.355159 kubelet[2589]: E0213 19:49:29.355028 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:31.293166 kubelet[2589]: E0213 19:49:31.293097 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:31.357678 kubelet[2589]: E0213 19:49:31.357640 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:31.996909 update_engine[1453]: I20250213 19:49:31.996805 1453 update_attempter.cc:509] Updating boot flags... Feb 13 19:49:32.022773 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2686) Feb 13 19:49:32.054931 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2688) Feb 13 19:49:32.088841 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2688) Feb 13 19:49:32.359314 kubelet[2589]: E0213 19:49:32.359195 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:33.554235 kubelet[2589]: E0213 19:49:33.554195 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:37.351813 kubelet[2589]: I0213 19:49:37.351754 2589 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:49:37.352274 containerd[1464]: time="2025-02-13T19:49:37.352128244Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 19:49:37.352504 kubelet[2589]: I0213 19:49:37.352316 2589 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:49:37.582670 kubelet[2589]: I0213 19:49:37.582624 2589 topology_manager.go:215] "Topology Admit Handler" podUID="689e0783-2af6-4d6b-a201-91ced05b25f9" podNamespace="kube-system" podName="kube-proxy-znpmx" Feb 13 19:49:37.589559 systemd[1]: Created slice kubepods-besteffort-pod689e0783_2af6_4d6b_a201_91ced05b25f9.slice - libcontainer container kubepods-besteffort-pod689e0783_2af6_4d6b_a201_91ced05b25f9.slice. Feb 13 19:49:37.616966 kubelet[2589]: I0213 19:49:37.616827 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/689e0783-2af6-4d6b-a201-91ced05b25f9-kube-proxy\") pod \"kube-proxy-znpmx\" (UID: \"689e0783-2af6-4d6b-a201-91ced05b25f9\") " pod="kube-system/kube-proxy-znpmx" Feb 13 19:49:37.616966 kubelet[2589]: I0213 19:49:37.616861 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/689e0783-2af6-4d6b-a201-91ced05b25f9-xtables-lock\") pod \"kube-proxy-znpmx\" (UID: \"689e0783-2af6-4d6b-a201-91ced05b25f9\") " pod="kube-system/kube-proxy-znpmx" Feb 13 19:49:37.616966 kubelet[2589]: I0213 19:49:37.616878 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/689e0783-2af6-4d6b-a201-91ced05b25f9-lib-modules\") pod \"kube-proxy-znpmx\" (UID: \"689e0783-2af6-4d6b-a201-91ced05b25f9\") " pod="kube-system/kube-proxy-znpmx" Feb 13 19:49:37.616966 kubelet[2589]: I0213 19:49:37.616894 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngm7r\" (UniqueName: \"kubernetes.io/projected/689e0783-2af6-4d6b-a201-91ced05b25f9-kube-api-access-ngm7r\") pod \"kube-proxy-znpmx\" (UID: \"689e0783-2af6-4d6b-a201-91ced05b25f9\") " pod="kube-system/kube-proxy-znpmx" Feb 13 19:49:37.903047 kubelet[2589]: E0213 19:49:37.902914 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:37.903527 kubelet[2589]: I0213 19:49:37.903497 2589 topology_manager.go:215] "Topology Admit Handler" podUID="ec2d7483-14ea-48cd-a7a7-2b31fb6f59a1" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-5w7wx" Feb 13 19:49:37.904237 containerd[1464]: time="2025-02-13T19:49:37.904198580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-znpmx,Uid:689e0783-2af6-4d6b-a201-91ced05b25f9,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:37.912749 systemd[1]: Created slice kubepods-besteffort-podec2d7483_14ea_48cd_a7a7_2b31fb6f59a1.slice - libcontainer container kubepods-besteffort-podec2d7483_14ea_48cd_a7a7_2b31fb6f59a1.slice. 
Feb 13 19:49:37.918879 kubelet[2589]: I0213 19:49:37.918853 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwlsg\" (UniqueName: \"kubernetes.io/projected/ec2d7483-14ea-48cd-a7a7-2b31fb6f59a1-kube-api-access-kwlsg\") pod \"tigera-operator-7bc55997bb-5w7wx\" (UID: \"ec2d7483-14ea-48cd-a7a7-2b31fb6f59a1\") " pod="tigera-operator/tigera-operator-7bc55997bb-5w7wx" Feb 13 19:49:37.919084 kubelet[2589]: I0213 19:49:37.919034 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ec2d7483-14ea-48cd-a7a7-2b31fb6f59a1-var-lib-calico\") pod \"tigera-operator-7bc55997bb-5w7wx\" (UID: \"ec2d7483-14ea-48cd-a7a7-2b31fb6f59a1\") " pod="tigera-operator/tigera-operator-7bc55997bb-5w7wx" Feb 13 19:49:37.951412 containerd[1464]: time="2025-02-13T19:49:37.951303903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:37.951554 containerd[1464]: time="2025-02-13T19:49:37.951430192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:37.951554 containerd[1464]: time="2025-02-13T19:49:37.951461252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:37.952151 containerd[1464]: time="2025-02-13T19:49:37.951656451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:37.974893 systemd[1]: Started cri-containerd-a926dfcfa43df9d0e774da9065b71953d1087392c4bfdc6d6b7101257ee83f57.scope - libcontainer container a926dfcfa43df9d0e774da9065b71953d1087392c4bfdc6d6b7101257ee83f57. Feb 13 19:49:37.996420 containerd[1464]: time="2025-02-13T19:49:37.996370432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-znpmx,Uid:689e0783-2af6-4d6b-a201-91ced05b25f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"a926dfcfa43df9d0e774da9065b71953d1087392c4bfdc6d6b7101257ee83f57\"" Feb 13 19:49:37.997146 kubelet[2589]: E0213 19:49:37.997123 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:37.998915 containerd[1464]: time="2025-02-13T19:49:37.998890497Z" level=info msg="CreateContainer within sandbox \"a926dfcfa43df9d0e774da9065b71953d1087392c4bfdc6d6b7101257ee83f57\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:49:38.016846 containerd[1464]: time="2025-02-13T19:49:38.016796995Z" level=info msg="CreateContainer within sandbox \"a926dfcfa43df9d0e774da9065b71953d1087392c4bfdc6d6b7101257ee83f57\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7dc2beb0e19c11bfa21ace259470c8150ebeac5ed76e1833fd8873c2ebed39bc\"" Feb 13 19:49:38.017281 containerd[1464]: time="2025-02-13T19:49:38.017254359Z" level=info msg="StartContainer for \"7dc2beb0e19c11bfa21ace259470c8150ebeac5ed76e1833fd8873c2ebed39bc\"" Feb 13 19:49:38.044880 systemd[1]: Started cri-containerd-7dc2beb0e19c11bfa21ace259470c8150ebeac5ed76e1833fd8873c2ebed39bc.scope - libcontainer container 7dc2beb0e19c11bfa21ace259470c8150ebeac5ed76e1833fd8873c2ebed39bc. 
Feb 13 19:49:38.074462 containerd[1464]: time="2025-02-13T19:49:38.074423745Z" level=info msg="StartContainer for \"7dc2beb0e19c11bfa21ace259470c8150ebeac5ed76e1833fd8873c2ebed39bc\" returns successfully" Feb 13 19:49:38.217255 containerd[1464]: time="2025-02-13T19:49:38.217209136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-5w7wx,Uid:ec2d7483-14ea-48cd-a7a7-2b31fb6f59a1,Namespace:tigera-operator,Attempt:0,}" Feb 13 19:49:38.241242 containerd[1464]: time="2025-02-13T19:49:38.240557896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:38.241242 containerd[1464]: time="2025-02-13T19:49:38.241192906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:38.241242 containerd[1464]: time="2025-02-13T19:49:38.241206241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:38.241455 containerd[1464]: time="2025-02-13T19:49:38.241295690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:38.260909 systemd[1]: Started cri-containerd-636be1ff2d07e507708390bd295a39e228078a688b058e9293d9cf555f16b0d3.scope - libcontainer container 636be1ff2d07e507708390bd295a39e228078a688b058e9293d9cf555f16b0d3. Feb 13 19:49:38.297179 containerd[1464]: time="2025-02-13T19:49:38.297122609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-5w7wx,Uid:ec2d7483-14ea-48cd-a7a7-2b31fb6f59a1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"636be1ff2d07e507708390bd295a39e228078a688b058e9293d9cf555f16b0d3\"" Feb 13 19:49:38.298991 containerd[1464]: time="2025-02-13T19:49:38.298952767Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 19:49:38.369757 kubelet[2589]: E0213 19:49:38.369461 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:38.377081 kubelet[2589]: I0213 19:49:38.376617 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-znpmx" podStartSLOduration=1.37659691 podStartE2EDuration="1.37659691s" podCreationTimestamp="2025-02-13 19:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:49:38.376569479 +0000 UTC m=+16.115579528" watchObservedRunningTime="2025-02-13 19:49:38.37659691 +0000 UTC m=+16.115606959" Feb 13 19:49:40.217294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1546620530.mount: Deactivated successfully. 
Feb 13 19:49:40.515369 containerd[1464]: time="2025-02-13T19:49:40.515253419Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:40.516278 containerd[1464]: time="2025-02-13T19:49:40.516215605Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 19:49:40.517542 containerd[1464]: time="2025-02-13T19:49:40.517512173Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:40.519707 containerd[1464]: time="2025-02-13T19:49:40.519655890Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:40.520395 containerd[1464]: time="2025-02-13T19:49:40.520364378Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.22136834s" Feb 13 19:49:40.520430 containerd[1464]: time="2025-02-13T19:49:40.520394846Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 19:49:40.522496 containerd[1464]: time="2025-02-13T19:49:40.522462518Z" level=info msg="CreateContainer within sandbox \"636be1ff2d07e507708390bd295a39e228078a688b058e9293d9cf555f16b0d3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 19:49:40.532394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3863689634.mount: Deactivated successfully. Feb 13 19:49:40.533485 containerd[1464]: time="2025-02-13T19:49:40.533438951Z" level=info msg="CreateContainer within sandbox \"636be1ff2d07e507708390bd295a39e228078a688b058e9293d9cf555f16b0d3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"425b2e618d2f7d924847fdcf14282fc211417d18c545628c44072f4b89070481\"" Feb 13 19:49:40.533866 containerd[1464]: time="2025-02-13T19:49:40.533839377Z" level=info msg="StartContainer for \"425b2e618d2f7d924847fdcf14282fc211417d18c545628c44072f4b89070481\"" Feb 13 19:49:40.563844 systemd[1]: Started cri-containerd-425b2e618d2f7d924847fdcf14282fc211417d18c545628c44072f4b89070481.scope - libcontainer container 425b2e618d2f7d924847fdcf14282fc211417d18c545628c44072f4b89070481. 
Feb 13 19:49:40.587805 containerd[1464]: time="2025-02-13T19:49:40.587765510Z" level=info msg="StartContainer for \"425b2e618d2f7d924847fdcf14282fc211417d18c545628c44072f4b89070481\" returns successfully" Feb 13 19:49:41.385638 kubelet[2589]: I0213 19:49:41.385462 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-5w7wx" podStartSLOduration=2.16272369 podStartE2EDuration="4.385448542s" podCreationTimestamp="2025-02-13 19:49:37 +0000 UTC" firstStartedPulling="2025-02-13 19:49:38.298421482 +0000 UTC m=+16.037431531" lastFinishedPulling="2025-02-13 19:49:40.521146334 +0000 UTC m=+18.260156383" observedRunningTime="2025-02-13 19:49:41.385226714 +0000 UTC m=+19.124236763" watchObservedRunningTime="2025-02-13 19:49:41.385448542 +0000 UTC m=+19.124458591" Feb 13 19:49:43.379682 kubelet[2589]: I0213 19:49:43.379608 2589 topology_manager.go:215] "Topology Admit Handler" podUID="4fa87a17-1590-439e-9f1d-b40c186e6e68" podNamespace="calico-system" podName="calico-typha-84f598b9b4-vj4kg" Feb 13 19:49:43.395406 systemd[1]: Created slice kubepods-besteffort-pod4fa87a17_1590_439e_9f1d_b40c186e6e68.slice - libcontainer container kubepods-besteffort-pod4fa87a17_1590_439e_9f1d_b40c186e6e68.slice. Feb 13 19:49:43.453569 kubelet[2589]: I0213 19:49:43.453450 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4fa87a17-1590-439e-9f1d-b40c186e6e68-typha-certs\") pod \"calico-typha-84f598b9b4-vj4kg\" (UID: \"4fa87a17-1590-439e-9f1d-b40c186e6e68\") " pod="calico-system/calico-typha-84f598b9b4-vj4kg" Feb 13 19:49:43.453569 kubelet[2589]: I0213 19:49:43.453511 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fa87a17-1590-439e-9f1d-b40c186e6e68-tigera-ca-bundle\") pod \"calico-typha-84f598b9b4-vj4kg\" (UID: \"4fa87a17-1590-439e-9f1d-b40c186e6e68\") " pod="calico-system/calico-typha-84f598b9b4-vj4kg" Feb 13 19:49:43.453569 kubelet[2589]: I0213 19:49:43.453537 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkwpc\" (UniqueName: \"kubernetes.io/projected/4fa87a17-1590-439e-9f1d-b40c186e6e68-kube-api-access-rkwpc\") pod \"calico-typha-84f598b9b4-vj4kg\" (UID: \"4fa87a17-1590-439e-9f1d-b40c186e6e68\") " pod="calico-system/calico-typha-84f598b9b4-vj4kg" Feb 13 19:49:43.458144 kubelet[2589]: I0213 19:49:43.458099 2589 topology_manager.go:215] "Topology Admit Handler" podUID="ab33b586-09a3-46da-b673-914dcd67c1a0" podNamespace="calico-system" podName="calico-node-hcwrk" Feb 13 19:49:43.469071 systemd[1]: Created slice kubepods-besteffort-podab33b586_09a3_46da_b673_914dcd67c1a0.slice - libcontainer container kubepods-besteffort-podab33b586_09a3_46da_b673_914dcd67c1a0.slice. 
Feb 13 19:49:43.554266 kubelet[2589]: I0213 19:49:43.554221 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6v9c\" (UniqueName: \"kubernetes.io/projected/ab33b586-09a3-46da-b673-914dcd67c1a0-kube-api-access-l6v9c\") pod \"calico-node-hcwrk\" (UID: \"ab33b586-09a3-46da-b673-914dcd67c1a0\") " pod="calico-system/calico-node-hcwrk" Feb 13 19:49:43.554266 kubelet[2589]: I0213 19:49:43.554276 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab33b586-09a3-46da-b673-914dcd67c1a0-tigera-ca-bundle\") pod \"calico-node-hcwrk\" (UID: \"ab33b586-09a3-46da-b673-914dcd67c1a0\") " pod="calico-system/calico-node-hcwrk" Feb 13 19:49:43.554590 kubelet[2589]: I0213 19:49:43.554295 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ab33b586-09a3-46da-b673-914dcd67c1a0-node-certs\") pod \"calico-node-hcwrk\" (UID: \"ab33b586-09a3-46da-b673-914dcd67c1a0\") " pod="calico-system/calico-node-hcwrk" Feb 13 19:49:43.554590 kubelet[2589]: I0213 19:49:43.554309 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ab33b586-09a3-46da-b673-914dcd67c1a0-var-lib-calico\") pod \"calico-node-hcwrk\" (UID: \"ab33b586-09a3-46da-b673-914dcd67c1a0\") " pod="calico-system/calico-node-hcwrk" Feb 13 19:49:43.554590 kubelet[2589]: I0213 19:49:43.554322 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab33b586-09a3-46da-b673-914dcd67c1a0-xtables-lock\") pod \"calico-node-hcwrk\" (UID: \"ab33b586-09a3-46da-b673-914dcd67c1a0\") " pod="calico-system/calico-node-hcwrk" Feb 13 19:49:43.554590 kubelet[2589]: I0213 19:49:43.554336 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ab33b586-09a3-46da-b673-914dcd67c1a0-flexvol-driver-host\") pod \"calico-node-hcwrk\" (UID: \"ab33b586-09a3-46da-b673-914dcd67c1a0\") " pod="calico-system/calico-node-hcwrk" Feb 13 19:49:43.554590 kubelet[2589]: I0213 19:49:43.554351 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ab33b586-09a3-46da-b673-914dcd67c1a0-policysync\") pod \"calico-node-hcwrk\" (UID: \"ab33b586-09a3-46da-b673-914dcd67c1a0\") " pod="calico-system/calico-node-hcwrk" Feb 13 19:49:43.554747 kubelet[2589]: I0213 19:49:43.554379 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab33b586-09a3-46da-b673-914dcd67c1a0-lib-modules\") pod \"calico-node-hcwrk\" (UID: \"ab33b586-09a3-46da-b673-914dcd67c1a0\") " pod="calico-system/calico-node-hcwrk" Feb 13 19:49:43.554747 kubelet[2589]: I0213 19:49:43.554392 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ab33b586-09a3-46da-b673-914dcd67c1a0-var-run-calico\") pod \"calico-node-hcwrk\" (UID: \"ab33b586-09a3-46da-b673-914dcd67c1a0\") " pod="calico-system/calico-node-hcwrk" Feb 13 19:49:43.554747 kubelet[2589]: I0213 19:49:43.554406 2589 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ab33b586-09a3-46da-b673-914dcd67c1a0-cni-net-dir\") pod \"calico-node-hcwrk\" (UID: \"ab33b586-09a3-46da-b673-914dcd67c1a0\") " pod="calico-system/calico-node-hcwrk" Feb 13 19:49:43.554747 kubelet[2589]: I0213 19:49:43.554418 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ab33b586-09a3-46da-b673-914dcd67c1a0-cni-bin-dir\") pod \"calico-node-hcwrk\" (UID: \"ab33b586-09a3-46da-b673-914dcd67c1a0\") " pod="calico-system/calico-node-hcwrk" Feb 13 19:49:43.554747 kubelet[2589]: I0213 19:49:43.554431 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ab33b586-09a3-46da-b673-914dcd67c1a0-cni-log-dir\") pod \"calico-node-hcwrk\" (UID: \"ab33b586-09a3-46da-b673-914dcd67c1a0\") " pod="calico-system/calico-node-hcwrk" Feb 13 19:49:43.556809 kubelet[2589]: I0213 19:49:43.556542 2589 topology_manager.go:215] "Topology Admit Handler" podUID="e45413ed-22f7-42ee-a226-c017caa2ef3a" podNamespace="calico-system" podName="csi-node-driver-gx2wj" Feb 13 19:49:43.556887 kubelet[2589]: E0213 19:49:43.556814 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gx2wj" podUID="e45413ed-22f7-42ee-a226-c017caa2ef3a" Feb 13 19:49:43.655264 kubelet[2589]: I0213 19:49:43.655148 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e45413ed-22f7-42ee-a226-c017caa2ef3a-registration-dir\") pod \"csi-node-driver-gx2wj\" (UID: \"e45413ed-22f7-42ee-a226-c017caa2ef3a\") " pod="calico-system/csi-node-driver-gx2wj" Feb 13 19:49:43.656135 kubelet[2589]: I0213 19:49:43.655396 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27666\" (UniqueName: \"kubernetes.io/projected/e45413ed-22f7-42ee-a226-c017caa2ef3a-kube-api-access-27666\") pod \"csi-node-driver-gx2wj\" (UID: \"e45413ed-22f7-42ee-a226-c017caa2ef3a\") " pod="calico-system/csi-node-driver-gx2wj" Feb 13 19:49:43.656135 kubelet[2589]: I0213 19:49:43.655503 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e45413ed-22f7-42ee-a226-c017caa2ef3a-varrun\") pod \"csi-node-driver-gx2wj\" (UID: \"e45413ed-22f7-42ee-a226-c017caa2ef3a\") " pod="calico-system/csi-node-driver-gx2wj" Feb 13 19:49:43.656135 kubelet[2589]: I0213 19:49:43.655561 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e45413ed-22f7-42ee-a226-c017caa2ef3a-socket-dir\") pod \"csi-node-driver-gx2wj\" (UID: \"e45413ed-22f7-42ee-a226-c017caa2ef3a\") " pod="calico-system/csi-node-driver-gx2wj" Feb 13 19:49:43.656135 kubelet[2589]: I0213 19:49:43.655586 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e45413ed-22f7-42ee-a226-c017caa2ef3a-kubelet-dir\") pod \"csi-node-driver-gx2wj\" 
(UID: \"e45413ed-22f7-42ee-a226-c017caa2ef3a\") " pod="calico-system/csi-node-driver-gx2wj" Feb 13 19:49:43.659560 kubelet[2589]: E0213 19:49:43.658801 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.659560 kubelet[2589]: W0213 19:49:43.658823 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.659560 kubelet[2589]: E0213 19:49:43.658869 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:43.662042 kubelet[2589]: E0213 19:49:43.661224 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.662113 kubelet[2589]: W0213 19:49:43.662101 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.662177 kubelet[2589]: E0213 19:49:43.662164 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:43.667019 kubelet[2589]: E0213 19:49:43.666981 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.667019 kubelet[2589]: W0213 19:49:43.667006 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.667099 kubelet[2589]: E0213 19:49:43.667027 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:43.709671 kubelet[2589]: E0213 19:49:43.709635 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:43.710157 containerd[1464]: time="2025-02-13T19:49:43.710046874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84f598b9b4-vj4kg,Uid:4fa87a17-1590-439e-9f1d-b40c186e6e68,Namespace:calico-system,Attempt:0,}" Feb 13 19:49:43.733659 containerd[1464]: time="2025-02-13T19:49:43.733568377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:43.733659 containerd[1464]: time="2025-02-13T19:49:43.733624493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:43.733659 containerd[1464]: time="2025-02-13T19:49:43.733635734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:43.734473 containerd[1464]: time="2025-02-13T19:49:43.733733879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:43.749971 systemd[1]: Started cri-containerd-b46477c58cf215cad6e3363b1e89d53085f0984349476e2322c39832dda4df4a.scope - libcontainer container b46477c58cf215cad6e3363b1e89d53085f0984349476e2322c39832dda4df4a. Feb 13 19:49:43.757070 kubelet[2589]: E0213 19:49:43.757029 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.757070 kubelet[2589]: W0213 19:49:43.757061 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.757218 kubelet[2589]: E0213 19:49:43.757080 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:43.757346 kubelet[2589]: E0213 19:49:43.757323 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.757380 kubelet[2589]: W0213 19:49:43.757348 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.757380 kubelet[2589]: E0213 19:49:43.757363 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:43.757626 kubelet[2589]: E0213 19:49:43.757602 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.757626 kubelet[2589]: W0213 19:49:43.757622 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.757674 kubelet[2589]: E0213 19:49:43.757640 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:43.757881 kubelet[2589]: E0213 19:49:43.757866 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.757881 kubelet[2589]: W0213 19:49:43.757877 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.757941 kubelet[2589]: E0213 19:49:43.757890 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:49:43.758078 kubelet[2589]: E0213 19:49:43.758064 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.758078 kubelet[2589]: W0213 19:49:43.758074 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.758130 kubelet[2589]: E0213 19:49:43.758085 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:43.758295 kubelet[2589]: E0213 19:49:43.758280 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.758295 kubelet[2589]: W0213 19:49:43.758291 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.758358 kubelet[2589]: E0213 19:49:43.758303 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:43.758689 kubelet[2589]: E0213 19:49:43.758646 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.758689 kubelet[2589]: W0213 19:49:43.758659 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.758689 kubelet[2589]: E0213 19:49:43.758673 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:43.758986 kubelet[2589]: E0213 19:49:43.758974 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.758986 kubelet[2589]: W0213 19:49:43.758985 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.759083 kubelet[2589]: E0213 19:49:43.759059 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:43.759294 kubelet[2589]: E0213 19:49:43.759222 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.759294 kubelet[2589]: W0213 19:49:43.759235 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.759350 kubelet[2589]: E0213 19:49:43.759318 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:49:43.759488 kubelet[2589]: E0213 19:49:43.759473 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.759488 kubelet[2589]: W0213 19:49:43.759483 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.759843 kubelet[2589]: E0213 19:49:43.759509 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:43.759843 kubelet[2589]: E0213 19:49:43.759814 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.759843 kubelet[2589]: W0213 19:49:43.759823 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.759843 kubelet[2589]: E0213 19:49:43.759844 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:43.760115 kubelet[2589]: E0213 19:49:43.760083 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.760115 kubelet[2589]: W0213 19:49:43.760092 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.760115 kubelet[2589]: E0213 19:49:43.760103 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:43.760732 kubelet[2589]: E0213 19:49:43.760343 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.760732 kubelet[2589]: W0213 19:49:43.760352 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.760732 kubelet[2589]: E0213 19:49:43.760371 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:43.760825 kubelet[2589]: E0213 19:49:43.760763 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.760825 kubelet[2589]: W0213 19:49:43.760772 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.760869 kubelet[2589]: E0213 19:49:43.760824 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:49:43.761045 kubelet[2589]: E0213 19:49:43.761006 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.761045 kubelet[2589]: W0213 19:49:43.761016 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.761110 kubelet[2589]: E0213 19:49:43.761103 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:43.761247 kubelet[2589]: E0213 19:49:43.761235 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.761247 kubelet[2589]: W0213 19:49:43.761245 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.761423 kubelet[2589]: E0213 19:49:43.761340 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:43.761463 kubelet[2589]: E0213 19:49:43.761442 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.761463 kubelet[2589]: W0213 19:49:43.761448 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.761463 kubelet[2589]: E0213 19:49:43.761458 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:43.761806 kubelet[2589]: E0213 19:49:43.761765 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.761806 kubelet[2589]: W0213 19:49:43.761778 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.761867 kubelet[2589]: E0213 19:49:43.761815 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:43.762262 kubelet[2589]: E0213 19:49:43.762137 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.762262 kubelet[2589]: W0213 19:49:43.762163 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.762262 kubelet[2589]: E0213 19:49:43.762192 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:49:43.762702 kubelet[2589]: E0213 19:49:43.762685 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.762702 kubelet[2589]: W0213 19:49:43.762699 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.762931 kubelet[2589]: E0213 19:49:43.762912 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:43.763008 kubelet[2589]: E0213 19:49:43.762991 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.763008 kubelet[2589]: W0213 19:49:43.763002 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.763110 kubelet[2589]: E0213 19:49:43.763089 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:43.763796 kubelet[2589]: E0213 19:49:43.763783 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.763998 kubelet[2589]: W0213 19:49:43.763841 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.763998 kubelet[2589]: E0213 19:49:43.763860 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:43.764140 kubelet[2589]: E0213 19:49:43.764130 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.764199 kubelet[2589]: W0213 19:49:43.764188 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.764250 kubelet[2589]: E0213 19:49:43.764239 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:43.764479 kubelet[2589]: E0213 19:49:43.764460 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.764479 kubelet[2589]: W0213 19:49:43.764476 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.764544 kubelet[2589]: E0213 19:49:43.764486 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:49:43.764767 kubelet[2589]: E0213 19:49:43.764704 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.764767 kubelet[2589]: W0213 19:49:43.764759 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.764853 kubelet[2589]: E0213 19:49:43.764770 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:43.766061 kubelet[2589]: E0213 19:49:43.766041 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:43.766061 kubelet[2589]: W0213 19:49:43.766053 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:43.766130 kubelet[2589]: E0213 19:49:43.766064 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:43.772268 kubelet[2589]: E0213 19:49:43.772230 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:43.773221 containerd[1464]: time="2025-02-13T19:49:43.773188297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hcwrk,Uid:ab33b586-09a3-46da-b673-914dcd67c1a0,Namespace:calico-system,Attempt:0,}" Feb 13 19:49:43.788031 containerd[1464]: time="2025-02-13T19:49:43.787949842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84f598b9b4-vj4kg,Uid:4fa87a17-1590-439e-9f1d-b40c186e6e68,Namespace:calico-system,Attempt:0,} returns sandbox id \"b46477c58cf215cad6e3363b1e89d53085f0984349476e2322c39832dda4df4a\"" Feb 13 19:49:43.788619 kubelet[2589]: E0213 19:49:43.788594 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:43.789539 containerd[1464]: time="2025-02-13T19:49:43.789365221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 19:49:43.796295 containerd[1464]: time="2025-02-13T19:49:43.796218002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:43.796295 containerd[1464]: time="2025-02-13T19:49:43.796274950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:43.797180 containerd[1464]: time="2025-02-13T19:49:43.796807323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:43.797671 containerd[1464]: time="2025-02-13T19:49:43.797254066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:43.812843 systemd[1]: Started cri-containerd-1904594a43ccf16b003589322225f0c0e1ba8a920bef5ee11577f03c20af3625.scope - libcontainer container 1904594a43ccf16b003589322225f0c0e1ba8a920bef5ee11577f03c20af3625. Feb 13 19:49:43.834689 containerd[1464]: time="2025-02-13T19:49:43.834628391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hcwrk,Uid:ab33b586-09a3-46da-b673-914dcd67c1a0,Namespace:calico-system,Attempt:0,} returns sandbox id \"1904594a43ccf16b003589322225f0c0e1ba8a920bef5ee11577f03c20af3625\"" Feb 13 19:49:43.835246 kubelet[2589]: E0213 19:49:43.835226 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:45.340025 kubelet[2589]: E0213 19:49:45.339982 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gx2wj" podUID="e45413ed-22f7-42ee-a226-c017caa2ef3a" Feb 13 19:49:45.470865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3925522305.mount: Deactivated successfully. Feb 13 19:49:45.808811 containerd[1464]: time="2025-02-13T19:49:45.808751791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:45.809566 containerd[1464]: time="2025-02-13T19:49:45.809495142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Feb 13 19:49:45.810606 containerd[1464]: time="2025-02-13T19:49:45.810569637Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:45.812636 containerd[1464]: time="2025-02-13T19:49:45.812599441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:45.813241 containerd[1464]: time="2025-02-13T19:49:45.813202488Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.023810897s" Feb 13 19:49:45.813274 containerd[1464]: time="2025-02-13T19:49:45.813240660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 19:49:45.814231 containerd[1464]: time="2025-02-13T19:49:45.814198355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:49:45.821066 containerd[1464]: time="2025-02-13T19:49:45.821025670Z" level=info msg="CreateContainer within sandbox \"b46477c58cf215cad6e3363b1e89d53085f0984349476e2322c39832dda4df4a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 19:49:45.833912 containerd[1464]: time="2025-02-13T19:49:45.833880093Z" level=info msg="CreateContainer within sandbox 
\"b46477c58cf215cad6e3363b1e89d53085f0984349476e2322c39832dda4df4a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c1b37010a9bc16da587832c860516ff1f4ede3a102f73a5a9c328648ebd11bc1\"" Feb 13 19:49:45.834530 containerd[1464]: time="2025-02-13T19:49:45.834329359Z" level=info msg="StartContainer for \"c1b37010a9bc16da587832c860516ff1f4ede3a102f73a5a9c328648ebd11bc1\"" Feb 13 19:49:45.862847 systemd[1]: Started cri-containerd-c1b37010a9bc16da587832c860516ff1f4ede3a102f73a5a9c328648ebd11bc1.scope - libcontainer container c1b37010a9bc16da587832c860516ff1f4ede3a102f73a5a9c328648ebd11bc1. Feb 13 19:49:45.902386 containerd[1464]: time="2025-02-13T19:49:45.902342804Z" level=info msg="StartContainer for \"c1b37010a9bc16da587832c860516ff1f4ede3a102f73a5a9c328648ebd11bc1\" returns successfully" Feb 13 19:49:46.387149 kubelet[2589]: E0213 19:49:46.387114 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:46.396359 kubelet[2589]: I0213 19:49:46.396226 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-84f598b9b4-vj4kg" podStartSLOduration=1.371193115 podStartE2EDuration="3.396214333s" podCreationTimestamp="2025-02-13 19:49:43 +0000 UTC" firstStartedPulling="2025-02-13 19:49:43.789029117 +0000 UTC m=+21.528039166" lastFinishedPulling="2025-02-13 19:49:45.814050335 +0000 UTC m=+23.553060384" observedRunningTime="2025-02-13 19:49:46.396078547 +0000 UTC m=+24.135088596" watchObservedRunningTime="2025-02-13 19:49:46.396214333 +0000 UTC m=+24.135224382" Feb 13 19:49:46.469377 kubelet[2589]: E0213 19:49:46.469334 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.469377 kubelet[2589]: W0213 19:49:46.469358 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.469461 kubelet[2589]: E0213 19:49:46.469378 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:46.469659 kubelet[2589]: E0213 19:49:46.469642 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.469659 kubelet[2589]: W0213 19:49:46.469652 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.469659 kubelet[2589]: E0213 19:49:46.469660 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:49:46.469885 kubelet[2589]: E0213 19:49:46.469871 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.469885 kubelet[2589]: W0213 19:49:46.469882 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.469885 kubelet[2589]: E0213 19:49:46.469890 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:46.470094 kubelet[2589]: E0213 19:49:46.470080 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.470094 kubelet[2589]: W0213 19:49:46.470091 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.470160 kubelet[2589]: E0213 19:49:46.470099 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:46.470359 kubelet[2589]: E0213 19:49:46.470334 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.470359 kubelet[2589]: W0213 19:49:46.470352 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.470505 kubelet[2589]: E0213 19:49:46.470373 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:46.470632 kubelet[2589]: E0213 19:49:46.470609 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.470632 kubelet[2589]: W0213 19:49:46.470620 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.470632 kubelet[2589]: E0213 19:49:46.470629 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:46.470843 kubelet[2589]: E0213 19:49:46.470829 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.470843 kubelet[2589]: W0213 19:49:46.470836 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.470893 kubelet[2589]: E0213 19:49:46.470844 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:49:46.471069 kubelet[2589]: E0213 19:49:46.471055 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.471069 kubelet[2589]: W0213 19:49:46.471065 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.471124 kubelet[2589]: E0213 19:49:46.471073 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:46.471300 kubelet[2589]: E0213 19:49:46.471277 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.471300 kubelet[2589]: W0213 19:49:46.471289 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.471300 kubelet[2589]: E0213 19:49:46.471297 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:46.471532 kubelet[2589]: E0213 19:49:46.471514 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.471532 kubelet[2589]: W0213 19:49:46.471530 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.471595 kubelet[2589]: E0213 19:49:46.471546 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:46.471822 kubelet[2589]: E0213 19:49:46.471799 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.471822 kubelet[2589]: W0213 19:49:46.471811 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.471822 kubelet[2589]: E0213 19:49:46.471819 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:46.472026 kubelet[2589]: E0213 19:49:46.472009 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.472026 kubelet[2589]: W0213 19:49:46.472023 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.472075 kubelet[2589]: E0213 19:49:46.472033 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:49:46.472237 kubelet[2589]: E0213 19:49:46.472224 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.472237 kubelet[2589]: W0213 19:49:46.472233 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.472285 kubelet[2589]: E0213 19:49:46.472242 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:46.472448 kubelet[2589]: E0213 19:49:46.472433 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.472448 kubelet[2589]: W0213 19:49:46.472444 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.472497 kubelet[2589]: E0213 19:49:46.472452 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:46.472651 kubelet[2589]: E0213 19:49:46.472637 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.472682 kubelet[2589]: W0213 19:49:46.472657 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.472682 kubelet[2589]: E0213 19:49:46.472666 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:46.478013 kubelet[2589]: E0213 19:49:46.477986 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.478013 kubelet[2589]: W0213 19:49:46.478006 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.478080 kubelet[2589]: E0213 19:49:46.478025 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:46.478296 kubelet[2589]: E0213 19:49:46.478275 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.478296 kubelet[2589]: W0213 19:49:46.478285 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.478353 kubelet[2589]: E0213 19:49:46.478299 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:49:46.478538 kubelet[2589]: E0213 19:49:46.478514 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.478538 kubelet[2589]: W0213 19:49:46.478529 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.478593 kubelet[2589]: E0213 19:49:46.478545 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:46.478769 kubelet[2589]: E0213 19:49:46.478753 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.478769 kubelet[2589]: W0213 19:49:46.478767 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.478824 kubelet[2589]: E0213 19:49:46.478785 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:46.479022 kubelet[2589]: E0213 19:49:46.479009 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.479022 kubelet[2589]: W0213 19:49:46.479019 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.479076 kubelet[2589]: E0213 19:49:46.479032 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:46.479240 kubelet[2589]: E0213 19:49:46.479224 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.479240 kubelet[2589]: W0213 19:49:46.479236 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.479291 kubelet[2589]: E0213 19:49:46.479252 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:46.479463 kubelet[2589]: E0213 19:49:46.479450 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.479463 kubelet[2589]: W0213 19:49:46.479460 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.479508 kubelet[2589]: E0213 19:49:46.479473 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:49:46.479678 kubelet[2589]: E0213 19:49:46.479665 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.479678 kubelet[2589]: W0213 19:49:46.479675 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.479743 kubelet[2589]: E0213 19:49:46.479688 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:46.479925 kubelet[2589]: E0213 19:49:46.479911 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.479925 kubelet[2589]: W0213 19:49:46.479922 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.479976 kubelet[2589]: E0213 19:49:46.479959 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:46.480105 kubelet[2589]: E0213 19:49:46.480091 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.480105 kubelet[2589]: W0213 19:49:46.480102 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.480156 kubelet[2589]: E0213 19:49:46.480129 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:46.480285 kubelet[2589]: E0213 19:49:46.480272 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.480285 kubelet[2589]: W0213 19:49:46.480281 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.480333 kubelet[2589]: E0213 19:49:46.480295 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:46.480478 kubelet[2589]: E0213 19:49:46.480465 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.480478 kubelet[2589]: W0213 19:49:46.480475 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.480522 kubelet[2589]: E0213 19:49:46.480488 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:49:46.480705 kubelet[2589]: E0213 19:49:46.480690 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.480705 kubelet[2589]: W0213 19:49:46.480701 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.480778 kubelet[2589]: E0213 19:49:46.480742 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:46.480944 kubelet[2589]: E0213 19:49:46.480930 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.480944 kubelet[2589]: W0213 19:49:46.480940 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.480993 kubelet[2589]: E0213 19:49:46.480953 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:46.481197 kubelet[2589]: E0213 19:49:46.481176 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.481197 kubelet[2589]: W0213 19:49:46.481191 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.481275 kubelet[2589]: E0213 19:49:46.481203 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:46.481642 kubelet[2589]: E0213 19:49:46.481619 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.481642 kubelet[2589]: W0213 19:49:46.481631 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.481642 kubelet[2589]: E0213 19:49:46.481640 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:46.482516 kubelet[2589]: E0213 19:49:46.482493 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.482516 kubelet[2589]: W0213 19:49:46.482508 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.482516 kubelet[2589]: E0213 19:49:46.482518 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:49:46.482995 kubelet[2589]: E0213 19:49:46.482974 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:46.482995 kubelet[2589]: W0213 19:49:46.482986 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:46.482995 kubelet[2589]: E0213 19:49:46.482994 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.339263 kubelet[2589]: E0213 19:49:47.339220 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gx2wj" podUID="e45413ed-22f7-42ee-a226-c017caa2ef3a" Feb 13 19:49:47.387759 kubelet[2589]: I0213 19:49:47.387728 2589 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:49:47.388333 kubelet[2589]: E0213 19:49:47.388236 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:47.403268 containerd[1464]: time="2025-02-13T19:49:47.403221635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:47.403940 containerd[1464]: time="2025-02-13T19:49:47.403906495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Feb 13 19:49:47.405069 containerd[1464]: time="2025-02-13T19:49:47.405045370Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:47.407028 containerd[1464]: time="2025-02-13T19:49:47.406998178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:47.407509 containerd[1464]: time="2025-02-13T19:49:47.407482981Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.593248158s" Feb 13 19:49:47.407541 containerd[1464]: time="2025-02-13T19:49:47.407510312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 19:49:47.409461 containerd[1464]: time="2025-02-13T19:49:47.409431761Z" level=info msg="CreateContainer within sandbox \"1904594a43ccf16b003589322225f0c0e1ba8a920bef5ee11577f03c20af3625\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:49:47.421840 containerd[1464]: time="2025-02-13T19:49:47.421788823Z" level=info 
msg="CreateContainer within sandbox \"1904594a43ccf16b003589322225f0c0e1ba8a920bef5ee11577f03c20af3625\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"adc88b55657ed1af1b6a1f5c6bc54a111bec22fb976dd0216fd63f9c77de810b\"" Feb 13 19:49:47.422333 containerd[1464]: time="2025-02-13T19:49:47.422301378Z" level=info msg="StartContainer for \"adc88b55657ed1af1b6a1f5c6bc54a111bec22fb976dd0216fd63f9c77de810b\"" Feb 13 19:49:47.452851 systemd[1]: Started cri-containerd-adc88b55657ed1af1b6a1f5c6bc54a111bec22fb976dd0216fd63f9c77de810b.scope - libcontainer container adc88b55657ed1af1b6a1f5c6bc54a111bec22fb976dd0216fd63f9c77de810b. Feb 13 19:49:47.477242 kubelet[2589]: E0213 19:49:47.477206 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.477242 kubelet[2589]: W0213 19:49:47.477226 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.477242 kubelet[2589]: E0213 19:49:47.477243 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.477962 kubelet[2589]: E0213 19:49:47.477442 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.477962 kubelet[2589]: W0213 19:49:47.477452 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.477962 kubelet[2589]: E0213 19:49:47.477460 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.477962 kubelet[2589]: E0213 19:49:47.477640 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.477962 kubelet[2589]: W0213 19:49:47.477647 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.477962 kubelet[2589]: E0213 19:49:47.477664 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.477962 kubelet[2589]: E0213 19:49:47.477857 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.477962 kubelet[2589]: W0213 19:49:47.477864 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.477962 kubelet[2589]: E0213 19:49:47.477871 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:49:47.478490 kubelet[2589]: E0213 19:49:47.478299 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.478490 kubelet[2589]: W0213 19:49:47.478309 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.478490 kubelet[2589]: E0213 19:49:47.478317 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.478643 kubelet[2589]: E0213 19:49:47.478612 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.478699 kubelet[2589]: W0213 19:49:47.478687 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.478830 kubelet[2589]: E0213 19:49:47.478738 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.479116 kubelet[2589]: E0213 19:49:47.479050 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.479116 kubelet[2589]: W0213 19:49:47.479060 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.479116 kubelet[2589]: E0213 19:49:47.479069 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.479508 kubelet[2589]: E0213 19:49:47.479387 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.479508 kubelet[2589]: W0213 19:49:47.479396 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.479508 kubelet[2589]: E0213 19:49:47.479404 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.479704 kubelet[2589]: E0213 19:49:47.479693 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.479857 kubelet[2589]: W0213 19:49:47.479735 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.479857 kubelet[2589]: E0213 19:49:47.479745 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:49:47.480067 kubelet[2589]: E0213 19:49:47.479973 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.480067 kubelet[2589]: W0213 19:49:47.479983 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.480067 kubelet[2589]: E0213 19:49:47.479992 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.480230 kubelet[2589]: E0213 19:49:47.480219 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.480297 kubelet[2589]: W0213 19:49:47.480275 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.480297 kubelet[2589]: E0213 19:49:47.480291 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.480514 kubelet[2589]: E0213 19:49:47.480497 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.480514 kubelet[2589]: W0213 19:49:47.480508 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.480514 kubelet[2589]: E0213 19:49:47.480516 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.480765 kubelet[2589]: E0213 19:49:47.480747 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.480765 kubelet[2589]: W0213 19:49:47.480759 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.480765 kubelet[2589]: E0213 19:49:47.480768 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.480997 kubelet[2589]: E0213 19:49:47.480969 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.480997 kubelet[2589]: W0213 19:49:47.480980 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.480997 kubelet[2589]: E0213 19:49:47.480988 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:49:47.481239 kubelet[2589]: E0213 19:49:47.481212 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.481239 kubelet[2589]: W0213 19:49:47.481222 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.481239 kubelet[2589]: E0213 19:49:47.481233 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.483335 containerd[1464]: time="2025-02-13T19:49:47.483295379Z" level=info msg="StartContainer for \"adc88b55657ed1af1b6a1f5c6bc54a111bec22fb976dd0216fd63f9c77de810b\" returns successfully" Feb 13 19:49:47.485222 kubelet[2589]: E0213 19:49:47.485184 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.485222 kubelet[2589]: W0213 19:49:47.485197 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.485316 kubelet[2589]: E0213 19:49:47.485207 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.485705 kubelet[2589]: E0213 19:49:47.485489 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.485705 kubelet[2589]: W0213 19:49:47.485510 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.485705 kubelet[2589]: E0213 19:49:47.485534 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.485916 kubelet[2589]: E0213 19:49:47.485901 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.485979 kubelet[2589]: W0213 19:49:47.485914 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.485979 kubelet[2589]: E0213 19:49:47.485959 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:49:47.486399 kubelet[2589]: E0213 19:49:47.486377 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.486399 kubelet[2589]: W0213 19:49:47.486390 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.486493 kubelet[2589]: E0213 19:49:47.486402 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.486740 kubelet[2589]: E0213 19:49:47.486706 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.486772 kubelet[2589]: W0213 19:49:47.486740 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.486772 kubelet[2589]: E0213 19:49:47.486757 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.487072 kubelet[2589]: E0213 19:49:47.486984 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.487072 kubelet[2589]: W0213 19:49:47.486994 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.487072 kubelet[2589]: E0213 19:49:47.487041 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.487340 kubelet[2589]: E0213 19:49:47.487317 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.487416 kubelet[2589]: W0213 19:49:47.487388 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.487504 kubelet[2589]: E0213 19:49:47.487476 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.487675 kubelet[2589]: E0213 19:49:47.487649 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.487675 kubelet[2589]: W0213 19:49:47.487657 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.488016 kubelet[2589]: E0213 19:49:47.487786 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:49:47.488016 kubelet[2589]: E0213 19:49:47.487865 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.488016 kubelet[2589]: W0213 19:49:47.487873 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.488016 kubelet[2589]: E0213 19:49:47.487887 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.488433 kubelet[2589]: E0213 19:49:47.488281 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.488433 kubelet[2589]: W0213 19:49:47.488290 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.488433 kubelet[2589]: E0213 19:49:47.488311 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.488903 kubelet[2589]: E0213 19:49:47.488507 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.488903 kubelet[2589]: W0213 19:49:47.488515 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.488903 kubelet[2589]: E0213 19:49:47.488536 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.488903 kubelet[2589]: E0213 19:49:47.488775 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.488903 kubelet[2589]: W0213 19:49:47.488782 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.488903 kubelet[2589]: E0213 19:49:47.488792 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.489238 kubelet[2589]: E0213 19:49:47.489070 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.489238 kubelet[2589]: W0213 19:49:47.489078 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.489238 kubelet[2589]: E0213 19:49:47.489130 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:49:47.489905 kubelet[2589]: E0213 19:49:47.489556 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.489905 kubelet[2589]: W0213 19:49:47.489570 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.489905 kubelet[2589]: E0213 19:49:47.489579 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.489905 kubelet[2589]: E0213 19:49:47.489848 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.489905 kubelet[2589]: W0213 19:49:47.489856 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.489905 kubelet[2589]: E0213 19:49:47.489870 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.490478 kubelet[2589]: E0213 19:49:47.490045 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.490478 kubelet[2589]: W0213 19:49:47.490053 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.490478 kubelet[2589]: E0213 19:49:47.490064 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.490478 kubelet[2589]: E0213 19:49:47.490274 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.490478 kubelet[2589]: W0213 19:49:47.490282 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.490478 kubelet[2589]: E0213 19:49:47.490289 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:49:47.490478 kubelet[2589]: E0213 19:49:47.490455 2589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:49:47.490478 kubelet[2589]: W0213 19:49:47.490461 2589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:49:47.490478 kubelet[2589]: E0213 19:49:47.490468 2589 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:49:47.498433 systemd[1]: cri-containerd-adc88b55657ed1af1b6a1f5c6bc54a111bec22fb976dd0216fd63f9c77de810b.scope: Deactivated successfully. Feb 13 19:49:47.737889 containerd[1464]: time="2025-02-13T19:49:47.735501550Z" level=info msg="shim disconnected" id=adc88b55657ed1af1b6a1f5c6bc54a111bec22fb976dd0216fd63f9c77de810b namespace=k8s.io Feb 13 19:49:47.737889 containerd[1464]: time="2025-02-13T19:49:47.737881021Z" level=warning msg="cleaning up after shim disconnected" id=adc88b55657ed1af1b6a1f5c6bc54a111bec22fb976dd0216fd63f9c77de810b namespace=k8s.io Feb 13 19:49:47.737889 containerd[1464]: time="2025-02-13T19:49:47.737892222Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:49:47.818990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adc88b55657ed1af1b6a1f5c6bc54a111bec22fb976dd0216fd63f9c77de810b-rootfs.mount: Deactivated successfully. Feb 13 19:49:48.390889 kubelet[2589]: E0213 19:49:48.390861 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:48.391406 containerd[1464]: time="2025-02-13T19:49:48.391369135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:49:49.339595 kubelet[2589]: E0213 19:49:49.339545 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gx2wj" podUID="e45413ed-22f7-42ee-a226-c017caa2ef3a" Feb 13 19:49:51.339224 kubelet[2589]: E0213 19:49:51.339175 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gx2wj" podUID="e45413ed-22f7-42ee-a226-c017caa2ef3a" Feb 13 19:49:52.196395 systemd[1]: Started sshd@7-10.0.0.12:22-10.0.0.1:41000.service - OpenSSH per-connection server daemon (10.0.0.1:41000). Feb 13 19:49:52.465356 sshd[3295]: Accepted publickey for core from 10.0.0.1 port 41000 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:49:52.467006 sshd[3295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:52.471208 systemd-logind[1450]: New session 8 of user core. Feb 13 19:49:52.480876 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:49:52.600043 sshd[3295]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:52.605916 systemd[1]: sshd@7-10.0.0.12:22-10.0.0.1:41000.service: Deactivated successfully. Feb 13 19:49:52.608225 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:49:52.610491 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:49:52.611650 systemd-logind[1450]: Removed session 8. 
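The repeated kubelet failures above come in pairs that trace back to one missing binary: the FlexVolume prober cannot find the nodeagent~uds driver, so the call fails with "executable file not found in $PATH", and the empty output it captured then fails JSON decoding with "unexpected end of JSON input". A minimal Go sketch (not the kubelet's actual prober code) that reproduces both error strings under that assumption:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        // The nodeagent~uds driver is not installed on this node, so the lookup
        // fails the same way the kubelet logs it (assuming "uds" is also absent
        // from PATH in this environment).
        if _, err := exec.LookPath("uds"); err != nil {
            fmt.Println("driver call failed:", err) // ... executable file not found in $PATH
        }

        // With nothing executed, the captured output is ""; decoding an empty
        // byte slice is what yields "unexpected end of JSON input".
        var status map[string]interface{}
        if err := json.Unmarshal([]byte(""), &status); err != nil {
            fmt.Println("failed to unmarshal output:", err) // unexpected end of JSON input
        }
    }

The flexvol-driver container started above is Calico's pod2daemon-flexvol image, which is intended to place a driver binary into that nodeagent~uds directory, so the probe errors should stop once it has done its work.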
Feb 13 19:49:53.343086 kubelet[2589]: E0213 19:49:53.343040 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gx2wj" podUID="e45413ed-22f7-42ee-a226-c017caa2ef3a" Feb 13 19:49:53.516505 containerd[1464]: time="2025-02-13T19:49:53.516453452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:53.517345 containerd[1464]: time="2025-02-13T19:49:53.517277292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 19:49:53.518330 containerd[1464]: time="2025-02-13T19:49:53.518301458Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:53.520374 containerd[1464]: time="2025-02-13T19:49:53.520321688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:53.521025 containerd[1464]: time="2025-02-13T19:49:53.520986679Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.129439107s" Feb 13 19:49:53.521025 containerd[1464]: time="2025-02-13T19:49:53.521020292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 19:49:53.541746 containerd[1464]: time="2025-02-13T19:49:53.541683894Z" level=info msg="CreateContainer within sandbox \"1904594a43ccf16b003589322225f0c0e1ba8a920bef5ee11577f03c20af3625\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:49:53.554675 containerd[1464]: time="2025-02-13T19:49:53.554618761Z" level=info msg="CreateContainer within sandbox \"1904594a43ccf16b003589322225f0c0e1ba8a920bef5ee11577f03c20af3625\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8ab2794249c5b1bd4f18c4b82dfa1e37aede85a2f0a23770d7f303e096f51705\"" Feb 13 19:49:53.557117 containerd[1464]: time="2025-02-13T19:49:53.557088116Z" level=info msg="StartContainer for \"8ab2794249c5b1bd4f18c4b82dfa1e37aede85a2f0a23770d7f303e096f51705\"" Feb 13 19:49:53.587907 systemd[1]: Started cri-containerd-8ab2794249c5b1bd4f18c4b82dfa1e37aede85a2f0a23770d7f303e096f51705.scope - libcontainer container 8ab2794249c5b1bd4f18c4b82dfa1e37aede85a2f0a23770d7f303e096f51705. 
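The install-cni start above follows a pull that containerd reports as taking about 5.13 s; since these entries carry RFC 3339 timestamps with nanosecond precision, the figure can be sanity-checked by subtracting the PullImage and Pulled timestamps. A small Go sketch of that arithmetic, with the two timestamps copied from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // When "PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" was logged:
        started, err := time.Parse(time.RFC3339Nano, "2025-02-13T19:49:48.391369135Z")
        if err != nil {
            panic(err)
        }
        // When the "Pulled image ... in 5.129439107s" entry was logged:
        finished, err := time.Parse(time.RFC3339Nano, "2025-02-13T19:49:53.520986679Z")
        if err != nil {
            panic(err)
        }
        fmt.Println(finished.Sub(started)) // ~5.1296s, consistent with the reported pull duration
    }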
Feb 13 19:49:53.821794 containerd[1464]: time="2025-02-13T19:49:53.821731886Z" level=info msg="StartContainer for \"8ab2794249c5b1bd4f18c4b82dfa1e37aede85a2f0a23770d7f303e096f51705\" returns successfully" Feb 13 19:49:54.549834 kubelet[2589]: E0213 19:49:54.549803 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:54.707347 systemd[1]: cri-containerd-8ab2794249c5b1bd4f18c4b82dfa1e37aede85a2f0a23770d7f303e096f51705.scope: Deactivated successfully. Feb 13 19:49:54.726548 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ab2794249c5b1bd4f18c4b82dfa1e37aede85a2f0a23770d7f303e096f51705-rootfs.mount: Deactivated successfully. Feb 13 19:49:54.729211 containerd[1464]: time="2025-02-13T19:49:54.729150672Z" level=info msg="shim disconnected" id=8ab2794249c5b1bd4f18c4b82dfa1e37aede85a2f0a23770d7f303e096f51705 namespace=k8s.io Feb 13 19:49:54.729211 containerd[1464]: time="2025-02-13T19:49:54.729202298Z" level=warning msg="cleaning up after shim disconnected" id=8ab2794249c5b1bd4f18c4b82dfa1e37aede85a2f0a23770d7f303e096f51705 namespace=k8s.io Feb 13 19:49:54.729211 containerd[1464]: time="2025-02-13T19:49:54.729211487Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:49:54.748505 kubelet[2589]: I0213 19:49:54.748477 2589 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 19:49:54.766808 kubelet[2589]: I0213 19:49:54.766633 2589 topology_manager.go:215] "Topology Admit Handler" podUID="92ce11ad-0c53-4174-8895-91b95bbb2b8b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-27zq4" Feb 13 19:49:54.767635 kubelet[2589]: I0213 19:49:54.767527 2589 topology_manager.go:215] "Topology Admit Handler" podUID="e18d693f-e3ac-4db7-8c9c-6652e5baff8f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6wc8l" Feb 13 19:49:54.769179 kubelet[2589]: I0213 19:49:54.769140 2589 topology_manager.go:215] "Topology Admit Handler" podUID="63834bf5-a120-4a06-bb8c-91897696367c" podNamespace="calico-system" podName="calico-kube-controllers-7899dc9d6d-55s42" Feb 13 19:49:54.770754 kubelet[2589]: I0213 19:49:54.770701 2589 topology_manager.go:215] "Topology Admit Handler" podUID="e8cf9ce0-cc8d-4deb-a6e9-a97d42930d24" podNamespace="calico-apiserver" podName="calico-apiserver-57797d99cf-hqvzt" Feb 13 19:49:54.771053 kubelet[2589]: I0213 19:49:54.771027 2589 topology_manager.go:215] "Topology Admit Handler" podUID="607000e1-6cc7-4c34-945a-f49ad59d4c78" podNamespace="calico-apiserver" podName="calico-apiserver-57797d99cf-j56gq" Feb 13 19:49:54.777761 systemd[1]: Created slice kubepods-burstable-pod92ce11ad_0c53_4174_8895_91b95bbb2b8b.slice - libcontainer container kubepods-burstable-pod92ce11ad_0c53_4174_8895_91b95bbb2b8b.slice. Feb 13 19:49:54.780816 systemd[1]: Created slice kubepods-burstable-pode18d693f_e3ac_4db7_8c9c_6652e5baff8f.slice - libcontainer container kubepods-burstable-pode18d693f_e3ac_4db7_8c9c_6652e5baff8f.slice. Feb 13 19:49:54.785264 systemd[1]: Created slice kubepods-besteffort-pod63834bf5_a120_4a06_bb8c_91897696367c.slice - libcontainer container kubepods-besteffort-pod63834bf5_a120_4a06_bb8c_91897696367c.slice. Feb 13 19:49:54.791187 systemd[1]: Created slice kubepods-besteffort-pode8cf9ce0_cc8d_4deb_a6e9_a97d42930d24.slice - libcontainer container kubepods-besteffort-pode8cf9ce0_cc8d_4deb_a6e9_a97d42930d24.slice. 
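The "Created slice" entries above show the naming convention used with the systemd cgroup driver: the pod's QoS class (burstable or besteffort here) plus its UID with dashes replaced by underscores. A small Go sketch that mirrors the observed names; this is an illustration of the pattern, not kubelet code:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice rebuilds the slice name pattern seen in the log from a QoS class
    // and a pod UID.
    func podSlice(qosClass, podUID string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
        // coredns-7db6d8ff4d-27zq4 (burstable QoS):
        fmt.Println(podSlice("burstable", "92ce11ad-0c53-4174-8895-91b95bbb2b8b"))
        // calico-kube-controllers-7899dc9d6d-55s42 (besteffort QoS):
        fmt.Println(podSlice("besteffort", "63834bf5-a120-4a06-bb8c-91897696367c"))
    }

Both printed names match the slices systemd reports creating in the surrounding entries.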
Feb 13 19:49:54.796243 systemd[1]: Created slice kubepods-besteffort-pod607000e1_6cc7_4c34_945a_f49ad59d4c78.slice - libcontainer container kubepods-besteffort-pod607000e1_6cc7_4c34_945a_f49ad59d4c78.slice. Feb 13 19:49:54.940549 kubelet[2589]: I0213 19:49:54.940406 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e18d693f-e3ac-4db7-8c9c-6652e5baff8f-config-volume\") pod \"coredns-7db6d8ff4d-6wc8l\" (UID: \"e18d693f-e3ac-4db7-8c9c-6652e5baff8f\") " pod="kube-system/coredns-7db6d8ff4d-6wc8l" Feb 13 19:49:54.940549 kubelet[2589]: I0213 19:49:54.940456 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6x9h\" (UniqueName: \"kubernetes.io/projected/e18d693f-e3ac-4db7-8c9c-6652e5baff8f-kube-api-access-p6x9h\") pod \"coredns-7db6d8ff4d-6wc8l\" (UID: \"e18d693f-e3ac-4db7-8c9c-6652e5baff8f\") " pod="kube-system/coredns-7db6d8ff4d-6wc8l" Feb 13 19:49:54.940549 kubelet[2589]: I0213 19:49:54.940474 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sjrs\" (UniqueName: \"kubernetes.io/projected/63834bf5-a120-4a06-bb8c-91897696367c-kube-api-access-5sjrs\") pod \"calico-kube-controllers-7899dc9d6d-55s42\" (UID: \"63834bf5-a120-4a06-bb8c-91897696367c\") " pod="calico-system/calico-kube-controllers-7899dc9d6d-55s42" Feb 13 19:49:54.940549 kubelet[2589]: I0213 19:49:54.940490 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwhdn\" (UniqueName: \"kubernetes.io/projected/e8cf9ce0-cc8d-4deb-a6e9-a97d42930d24-kube-api-access-kwhdn\") pod \"calico-apiserver-57797d99cf-hqvzt\" (UID: \"e8cf9ce0-cc8d-4deb-a6e9-a97d42930d24\") " pod="calico-apiserver/calico-apiserver-57797d99cf-hqvzt" Feb 13 19:49:54.940549 kubelet[2589]: I0213 19:49:54.940509 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e8cf9ce0-cc8d-4deb-a6e9-a97d42930d24-calico-apiserver-certs\") pod \"calico-apiserver-57797d99cf-hqvzt\" (UID: \"e8cf9ce0-cc8d-4deb-a6e9-a97d42930d24\") " pod="calico-apiserver/calico-apiserver-57797d99cf-hqvzt" Feb 13 19:49:54.940833 kubelet[2589]: I0213 19:49:54.940563 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq9l4\" (UniqueName: \"kubernetes.io/projected/607000e1-6cc7-4c34-945a-f49ad59d4c78-kube-api-access-sq9l4\") pod \"calico-apiserver-57797d99cf-j56gq\" (UID: \"607000e1-6cc7-4c34-945a-f49ad59d4c78\") " pod="calico-apiserver/calico-apiserver-57797d99cf-j56gq" Feb 13 19:49:54.940833 kubelet[2589]: I0213 19:49:54.940587 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/607000e1-6cc7-4c34-945a-f49ad59d4c78-calico-apiserver-certs\") pod \"calico-apiserver-57797d99cf-j56gq\" (UID: \"607000e1-6cc7-4c34-945a-f49ad59d4c78\") " pod="calico-apiserver/calico-apiserver-57797d99cf-j56gq" Feb 13 19:49:54.940833 kubelet[2589]: I0213 19:49:54.940614 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x265p\" (UniqueName: \"kubernetes.io/projected/92ce11ad-0c53-4174-8895-91b95bbb2b8b-kube-api-access-x265p\") pod \"coredns-7db6d8ff4d-27zq4\" (UID: 
\"92ce11ad-0c53-4174-8895-91b95bbb2b8b\") " pod="kube-system/coredns-7db6d8ff4d-27zq4" Feb 13 19:49:54.940833 kubelet[2589]: I0213 19:49:54.940646 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92ce11ad-0c53-4174-8895-91b95bbb2b8b-config-volume\") pod \"coredns-7db6d8ff4d-27zq4\" (UID: \"92ce11ad-0c53-4174-8895-91b95bbb2b8b\") " pod="kube-system/coredns-7db6d8ff4d-27zq4" Feb 13 19:49:54.940833 kubelet[2589]: I0213 19:49:54.940671 2589 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63834bf5-a120-4a06-bb8c-91897696367c-tigera-ca-bundle\") pod \"calico-kube-controllers-7899dc9d6d-55s42\" (UID: \"63834bf5-a120-4a06-bb8c-91897696367c\") " pod="calico-system/calico-kube-controllers-7899dc9d6d-55s42" Feb 13 19:49:55.083220 kubelet[2589]: E0213 19:49:55.083177 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:55.083892 containerd[1464]: time="2025-02-13T19:49:55.083813850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-27zq4,Uid:92ce11ad-0c53-4174-8895-91b95bbb2b8b,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:55.085175 kubelet[2589]: E0213 19:49:55.085148 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:55.085693 containerd[1464]: time="2025-02-13T19:49:55.085655834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6wc8l,Uid:e18d693f-e3ac-4db7-8c9c-6652e5baff8f,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:55.088601 containerd[1464]: time="2025-02-13T19:49:55.088565655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7899dc9d6d-55s42,Uid:63834bf5-a120-4a06-bb8c-91897696367c,Namespace:calico-system,Attempt:0,}" Feb 13 19:49:55.094239 containerd[1464]: time="2025-02-13T19:49:55.094193896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57797d99cf-hqvzt,Uid:e8cf9ce0-cc8d-4deb-a6e9-a97d42930d24,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:49:55.099551 containerd[1464]: time="2025-02-13T19:49:55.099471529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57797d99cf-j56gq,Uid:607000e1-6cc7-4c34-945a-f49ad59d4c78,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:49:55.204044 containerd[1464]: time="2025-02-13T19:49:55.203982099Z" level=error msg="Failed to destroy network for sandbox \"c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.204386 containerd[1464]: time="2025-02-13T19:49:55.203984623Z" level=error msg="Failed to destroy network for sandbox \"2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.204835 containerd[1464]: time="2025-02-13T19:49:55.204811378Z" level=error msg="encountered an error cleaning up failed 
sandbox \"2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.204940 containerd[1464]: time="2025-02-13T19:49:55.204920363Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-27zq4,Uid:92ce11ad-0c53-4174-8895-91b95bbb2b8b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.205688 kubelet[2589]: E0213 19:49:55.205332 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.205688 kubelet[2589]: E0213 19:49:55.205403 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-27zq4" Feb 13 19:49:55.205688 kubelet[2589]: E0213 19:49:55.205426 2589 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-27zq4" Feb 13 19:49:55.205850 kubelet[2589]: E0213 19:49:55.205469 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-27zq4_kube-system(92ce11ad-0c53-4174-8895-91b95bbb2b8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-27zq4_kube-system(92ce11ad-0c53-4174-8895-91b95bbb2b8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-27zq4" podUID="92ce11ad-0c53-4174-8895-91b95bbb2b8b" Feb 13 19:49:55.206340 containerd[1464]: time="2025-02-13T19:49:55.206301419Z" level=error msg="encountered an error cleaning up failed sandbox \"c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.206459 
containerd[1464]: time="2025-02-13T19:49:55.206435572Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6wc8l,Uid:e18d693f-e3ac-4db7-8c9c-6652e5baff8f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.206679 kubelet[2589]: E0213 19:49:55.206643 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.206896 kubelet[2589]: E0213 19:49:55.206817 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6wc8l" Feb 13 19:49:55.206896 kubelet[2589]: E0213 19:49:55.206836 2589 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6wc8l" Feb 13 19:49:55.207020 kubelet[2589]: E0213 19:49:55.206866 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-6wc8l_kube-system(e18d693f-e3ac-4db7-8c9c-6652e5baff8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-6wc8l_kube-system(e18d693f-e3ac-4db7-8c9c-6652e5baff8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6wc8l" podUID="e18d693f-e3ac-4db7-8c9c-6652e5baff8f" Feb 13 19:49:55.209568 containerd[1464]: time="2025-02-13T19:49:55.209519018Z" level=error msg="Failed to destroy network for sandbox \"a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.210246 containerd[1464]: time="2025-02-13T19:49:55.210038746Z" level=error msg="encountered an error cleaning up failed sandbox \"a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Feb 13 19:49:55.210246 containerd[1464]: time="2025-02-13T19:49:55.210107735Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7899dc9d6d-55s42,Uid:63834bf5-a120-4a06-bb8c-91897696367c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.210381 kubelet[2589]: E0213 19:49:55.210347 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.210458 kubelet[2589]: E0213 19:49:55.210403 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7899dc9d6d-55s42" Feb 13 19:49:55.210458 kubelet[2589]: E0213 19:49:55.210426 2589 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7899dc9d6d-55s42" Feb 13 19:49:55.210793 kubelet[2589]: E0213 19:49:55.210461 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7899dc9d6d-55s42_calico-system(63834bf5-a120-4a06-bb8c-91897696367c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7899dc9d6d-55s42_calico-system(63834bf5-a120-4a06-bb8c-91897696367c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7899dc9d6d-55s42" podUID="63834bf5-a120-4a06-bb8c-91897696367c" Feb 13 19:49:55.217435 containerd[1464]: time="2025-02-13T19:49:55.217385539Z" level=error msg="Failed to destroy network for sandbox \"9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.217787 containerd[1464]: time="2025-02-13T19:49:55.217762528Z" level=error msg="encountered an error cleaning up failed sandbox \"9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.217843 containerd[1464]: time="2025-02-13T19:49:55.217806731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57797d99cf-j56gq,Uid:607000e1-6cc7-4c34-945a-f49ad59d4c78,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.218000 kubelet[2589]: E0213 19:49:55.217969 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.218038 kubelet[2589]: E0213 19:49:55.218017 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57797d99cf-j56gq" Feb 13 19:49:55.218065 kubelet[2589]: E0213 19:49:55.218040 2589 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57797d99cf-j56gq" Feb 13 19:49:55.218102 kubelet[2589]: E0213 19:49:55.218081 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57797d99cf-j56gq_calico-apiserver(607000e1-6cc7-4c34-945a-f49ad59d4c78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57797d99cf-j56gq_calico-apiserver(607000e1-6cc7-4c34-945a-f49ad59d4c78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57797d99cf-j56gq" podUID="607000e1-6cc7-4c34-945a-f49ad59d4c78" Feb 13 19:49:55.224822 containerd[1464]: time="2025-02-13T19:49:55.224633175Z" level=error msg="Failed to destroy network for sandbox \"edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.225105 containerd[1464]: time="2025-02-13T19:49:55.225073172Z" level=error 
msg="encountered an error cleaning up failed sandbox \"edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.225139 containerd[1464]: time="2025-02-13T19:49:55.225120722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57797d99cf-hqvzt,Uid:e8cf9ce0-cc8d-4deb-a6e9-a97d42930d24,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.225319 kubelet[2589]: E0213 19:49:55.225294 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.225376 kubelet[2589]: E0213 19:49:55.225325 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57797d99cf-hqvzt" Feb 13 19:49:55.225376 kubelet[2589]: E0213 19:49:55.225340 2589 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57797d99cf-hqvzt" Feb 13 19:49:55.225453 kubelet[2589]: E0213 19:49:55.225369 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57797d99cf-hqvzt_calico-apiserver(e8cf9ce0-cc8d-4deb-a6e9-a97d42930d24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57797d99cf-hqvzt_calico-apiserver(e8cf9ce0-cc8d-4deb-a6e9-a97d42930d24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57797d99cf-hqvzt" podUID="e8cf9ce0-cc8d-4deb-a6e9-a97d42930d24" Feb 13 19:49:55.344197 systemd[1]: Created slice kubepods-besteffort-pode45413ed_22f7_42ee_a226_c017caa2ef3a.slice - libcontainer container kubepods-besteffort-pode45413ed_22f7_42ee_a226_c017caa2ef3a.slice. 
Feb 13 19:49:55.346401 containerd[1464]: time="2025-02-13T19:49:55.346344655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gx2wj,Uid:e45413ed-22f7-42ee-a226-c017caa2ef3a,Namespace:calico-system,Attempt:0,}" Feb 13 19:49:55.408280 containerd[1464]: time="2025-02-13T19:49:55.408224656Z" level=error msg="Failed to destroy network for sandbox \"bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.408661 containerd[1464]: time="2025-02-13T19:49:55.408636109Z" level=error msg="encountered an error cleaning up failed sandbox \"bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.408706 containerd[1464]: time="2025-02-13T19:49:55.408691323Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gx2wj,Uid:e45413ed-22f7-42ee-a226-c017caa2ef3a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.408954 kubelet[2589]: E0213 19:49:55.408898 2589 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.408954 kubelet[2589]: E0213 19:49:55.408960 2589 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gx2wj" Feb 13 19:49:55.409231 kubelet[2589]: E0213 19:49:55.408980 2589 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gx2wj" Feb 13 19:49:55.409231 kubelet[2589]: E0213 19:49:55.409043 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gx2wj_calico-system(e45413ed-22f7-42ee-a226-c017caa2ef3a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gx2wj_calico-system(e45413ed-22f7-42ee-a226-c017caa2ef3a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gx2wj" podUID="e45413ed-22f7-42ee-a226-c017caa2ef3a" Feb 13 19:49:55.551807 kubelet[2589]: I0213 19:49:55.551664 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" Feb 13 19:49:55.553214 containerd[1464]: time="2025-02-13T19:49:55.552503283Z" level=info msg="StopPodSandbox for \"2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9\"" Feb 13 19:49:55.553214 containerd[1464]: time="2025-02-13T19:49:55.552674044Z" level=info msg="Ensure that sandbox 2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9 in task-service has been cleanup successfully" Feb 13 19:49:55.554319 kubelet[2589]: E0213 19:49:55.554292 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:49:55.559739 containerd[1464]: time="2025-02-13T19:49:55.557076852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 19:49:55.559739 containerd[1464]: time="2025-02-13T19:49:55.558427090Z" level=info msg="StopPodSandbox for \"bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42\"" Feb 13 19:49:55.559739 containerd[1464]: time="2025-02-13T19:49:55.558619342Z" level=info msg="Ensure that sandbox bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42 in task-service has been cleanup successfully" Feb 13 19:49:55.559894 kubelet[2589]: I0213 19:49:55.557754 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" Feb 13 19:49:55.561735 kubelet[2589]: I0213 19:49:55.561681 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" Feb 13 19:49:55.562243 containerd[1464]: time="2025-02-13T19:49:55.562142807Z" level=info msg="StopPodSandbox for \"a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c\"" Feb 13 19:49:55.562330 containerd[1464]: time="2025-02-13T19:49:55.562304641Z" level=info msg="Ensure that sandbox a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c in task-service has been cleanup successfully" Feb 13 19:49:55.563413 kubelet[2589]: I0213 19:49:55.563182 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" Feb 13 19:49:55.564600 containerd[1464]: time="2025-02-13T19:49:55.564569189Z" level=info msg="StopPodSandbox for \"9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305\"" Feb 13 19:49:55.564832 containerd[1464]: time="2025-02-13T19:49:55.564753414Z" level=info msg="Ensure that sandbox 9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305 in task-service has been cleanup successfully" Feb 13 19:49:55.565279 kubelet[2589]: I0213 19:49:55.565252 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" Feb 13 19:49:55.566017 containerd[1464]: time="2025-02-13T19:49:55.565639140Z" level=info 
msg="StopPodSandbox for \"edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c\"" Feb 13 19:49:55.566017 containerd[1464]: time="2025-02-13T19:49:55.565773713Z" level=info msg="Ensure that sandbox edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c in task-service has been cleanup successfully" Feb 13 19:49:55.571819 kubelet[2589]: I0213 19:49:55.571785 2589 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" Feb 13 19:49:55.572270 containerd[1464]: time="2025-02-13T19:49:55.572236754Z" level=info msg="StopPodSandbox for \"c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8\"" Feb 13 19:49:55.572526 containerd[1464]: time="2025-02-13T19:49:55.572503576Z" level=info msg="Ensure that sandbox c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8 in task-service has been cleanup successfully" Feb 13 19:49:55.606877 containerd[1464]: time="2025-02-13T19:49:55.606817309Z" level=error msg="StopPodSandbox for \"2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9\" failed" error="failed to destroy network for sandbox \"2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.607368 kubelet[2589]: E0213 19:49:55.607330 2589 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" Feb 13 19:49:55.607446 kubelet[2589]: E0213 19:49:55.607395 2589 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9"} Feb 13 19:49:55.607484 kubelet[2589]: E0213 19:49:55.607450 2589 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92ce11ad-0c53-4174-8895-91b95bbb2b8b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:49:55.607484 kubelet[2589]: E0213 19:49:55.607472 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92ce11ad-0c53-4174-8895-91b95bbb2b8b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-27zq4" podUID="92ce11ad-0c53-4174-8895-91b95bbb2b8b" Feb 13 19:49:55.612335 containerd[1464]: time="2025-02-13T19:49:55.612288375Z" level=error msg="StopPodSandbox for 
\"9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305\" failed" error="failed to destroy network for sandbox \"9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.612531 kubelet[2589]: E0213 19:49:55.612500 2589 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" Feb 13 19:49:55.612585 kubelet[2589]: E0213 19:49:55.612541 2589 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305"} Feb 13 19:49:55.612620 kubelet[2589]: E0213 19:49:55.612603 2589 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"607000e1-6cc7-4c34-945a-f49ad59d4c78\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:49:55.612687 kubelet[2589]: E0213 19:49:55.612627 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"607000e1-6cc7-4c34-945a-f49ad59d4c78\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57797d99cf-j56gq" podUID="607000e1-6cc7-4c34-945a-f49ad59d4c78" Feb 13 19:49:55.613951 containerd[1464]: time="2025-02-13T19:49:55.613817270Z" level=error msg="StopPodSandbox for \"bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42\" failed" error="failed to destroy network for sandbox \"bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.614188 kubelet[2589]: E0213 19:49:55.614167 2589 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" Feb 13 19:49:55.614372 kubelet[2589]: E0213 19:49:55.614284 2589 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42"} Feb 13 19:49:55.614372 kubelet[2589]: E0213 19:49:55.614310 2589 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e45413ed-22f7-42ee-a226-c017caa2ef3a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:49:55.614372 kubelet[2589]: E0213 19:49:55.614328 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e45413ed-22f7-42ee-a226-c017caa2ef3a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gx2wj" podUID="e45413ed-22f7-42ee-a226-c017caa2ef3a" Feb 13 19:49:55.622409 containerd[1464]: time="2025-02-13T19:49:55.622354721Z" level=error msg="StopPodSandbox for \"edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c\" failed" error="failed to destroy network for sandbox \"edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.622931 kubelet[2589]: E0213 19:49:55.622788 2589 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" Feb 13 19:49:55.622931 kubelet[2589]: E0213 19:49:55.622836 2589 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c"} Feb 13 19:49:55.622931 kubelet[2589]: E0213 19:49:55.622871 2589 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e8cf9ce0-cc8d-4deb-a6e9-a97d42930d24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:49:55.622931 kubelet[2589]: E0213 19:49:55.622900 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e8cf9ce0-cc8d-4deb-a6e9-a97d42930d24\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57797d99cf-hqvzt" podUID="e8cf9ce0-cc8d-4deb-a6e9-a97d42930d24" Feb 13 19:49:55.624585 containerd[1464]: time="2025-02-13T19:49:55.624555068Z" level=error msg="StopPodSandbox for \"a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c\" failed" error="failed to destroy network for sandbox \"a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.624693 kubelet[2589]: E0213 19:49:55.624672 2589 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" Feb 13 19:49:55.624794 kubelet[2589]: E0213 19:49:55.624698 2589 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c"} Feb 13 19:49:55.624794 kubelet[2589]: E0213 19:49:55.624738 2589 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"63834bf5-a120-4a06-bb8c-91897696367c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:49:55.624794 kubelet[2589]: E0213 19:49:55.624755 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"63834bf5-a120-4a06-bb8c-91897696367c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7899dc9d6d-55s42" podUID="63834bf5-a120-4a06-bb8c-91897696367c" Feb 13 19:49:55.627128 containerd[1464]: time="2025-02-13T19:49:55.627083492Z" level=error msg="StopPodSandbox for \"c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8\" failed" error="failed to destroy network for sandbox \"c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:49:55.627243 kubelet[2589]: E0213 19:49:55.627216 2589 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" Feb 13 19:49:55.627288 kubelet[2589]: E0213 19:49:55.627242 2589 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8"} Feb 13 19:49:55.627336 kubelet[2589]: E0213 19:49:55.627321 2589 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e18d693f-e3ac-4db7-8c9c-6652e5baff8f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 19:49:55.627376 kubelet[2589]: E0213 19:49:55.627341 2589 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e18d693f-e3ac-4db7-8c9c-6652e5baff8f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6wc8l" podUID="e18d693f-e3ac-4db7-8c9c-6652e5baff8f" Feb 13 19:49:55.728862 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9-shm.mount: Deactivated successfully. Feb 13 19:49:57.611255 systemd[1]: Started sshd@8-10.0.0.12:22-10.0.0.1:60902.service - OpenSSH per-connection server daemon (10.0.0.1:60902). Feb 13 19:49:57.653290 sshd[3747]: Accepted publickey for core from 10.0.0.1 port 60902 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:49:57.654889 sshd[3747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:57.659044 systemd-logind[1450]: New session 9 of user core. Feb 13 19:49:57.663848 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:49:57.771373 sshd[3747]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:57.775734 systemd[1]: sshd@8-10.0.0.12:22-10.0.0.1:60902.service: Deactivated successfully. Feb 13 19:49:57.777808 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:49:57.778405 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:49:57.779296 systemd-logind[1450]: Removed session 9. Feb 13 19:50:00.980683 kubelet[2589]: I0213 19:50:00.980604 2589 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:50:00.981484 kubelet[2589]: E0213 19:50:00.981464 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:01.170271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3879146945.mount: Deactivated successfully. 
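The recurring dns.go:153 "Nameserver limits exceeded" messages indicate the kubelet kept only the first three resolv.conf nameservers (1.1.1.1, 1.0.0.1, 8.8.8.8) and dropped the rest. The following Go sketch shows that trimming behaviour in the simplest possible form; the limit of three is inferred from the three-entry "applied nameserver line" in the log, not from kubelet source, so treat it as an assumption.

```go
// Hedged sketch of nameserver trimming: keep the first maxNameservers entries
// and warn about the rest, mirroring the "Nameserver limits exceeded" message.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // assumed limit, matching the three servers kept in the log

func applyNameserverLimit(servers []string) []string {
	if len(servers) <= maxNameservers {
		return servers
	}
	fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, "+
		"the applied nameserver line is: %s\n",
		strings.Join(servers[:maxNameservers], " "))
	return servers[:maxNameservers]
}

func main() {
	applied := applyNameserverLimit([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
	fmt.Println("using nameservers:", applied)
}
```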
Feb 13 19:50:01.612989 kubelet[2589]: E0213 19:50:01.612963 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:02.008731 containerd[1464]: time="2025-02-13T19:50:02.008661918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:02.009363 containerd[1464]: time="2025-02-13T19:50:02.009329903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 19:50:02.010452 containerd[1464]: time="2025-02-13T19:50:02.010426072Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:02.012397 containerd[1464]: time="2025-02-13T19:50:02.012370504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:02.012960 containerd[1464]: time="2025-02-13T19:50:02.012927070Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.455800024s" Feb 13 19:50:02.012996 containerd[1464]: time="2025-02-13T19:50:02.012966313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 19:50:02.020943 containerd[1464]: time="2025-02-13T19:50:02.020864910Z" level=info msg="CreateContainer within sandbox \"1904594a43ccf16b003589322225f0c0e1ba8a920bef5ee11577f03c20af3625\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:50:02.047508 containerd[1464]: time="2025-02-13T19:50:02.047445538Z" level=info msg="CreateContainer within sandbox \"1904594a43ccf16b003589322225f0c0e1ba8a920bef5ee11577f03c20af3625\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a91e242f5800ca29895fd03d92e72f53643cf571c889d6f53fe71593e2bd5143\"" Feb 13 19:50:02.048305 containerd[1464]: time="2025-02-13T19:50:02.048063309Z" level=info msg="StartContainer for \"a91e242f5800ca29895fd03d92e72f53643cf571c889d6f53fe71593e2bd5143\"" Feb 13 19:50:02.135870 systemd[1]: Started cri-containerd-a91e242f5800ca29895fd03d92e72f53643cf571c889d6f53fe71593e2bd5143.scope - libcontainer container a91e242f5800ca29895fd03d92e72f53643cf571c889d6f53fe71593e2bd5143. Feb 13 19:50:02.180179 containerd[1464]: time="2025-02-13T19:50:02.180125622Z" level=info msg="StartContainer for \"a91e242f5800ca29895fd03d92e72f53643cf571c889d6f53fe71593e2bd5143\" returns successfully" Feb 13 19:50:02.239003 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:50:02.239146 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
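The PullImage/ImageCreate/Pulled events above record containerd fetching ghcr.io/flatcar/calico/node:v3.29.1 (142741872 bytes) in roughly 6.46 s before creating and starting the calico-node container. A rough sketch of an equivalent pull through the containerd Go client follows; the socket path, the "k8s.io" namespace, and the github.com/containerd/containerd module path are assumptions about this host rather than facts from the log.

```go
// Sketch of pulling the calico/node image via the containerd Go client.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumed default containerd socket for this host.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images conventionally live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.29.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, _ := img.Size(ctx)
	fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
}
```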
Feb 13 19:50:02.616524 kubelet[2589]: E0213 19:50:02.616483 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:02.782537 systemd[1]: Started sshd@9-10.0.0.12:22-10.0.0.1:60910.service - OpenSSH per-connection server daemon (10.0.0.1:60910). Feb 13 19:50:02.824323 sshd[3839]: Accepted publickey for core from 10.0.0.1 port 60910 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:50:02.826080 sshd[3839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:02.829685 systemd-logind[1450]: New session 10 of user core. Feb 13 19:50:02.842844 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:50:02.952953 sshd[3839]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:02.962868 systemd[1]: sshd@9-10.0.0.12:22-10.0.0.1:60910.service: Deactivated successfully. Feb 13 19:50:02.964679 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:50:02.966291 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:50:02.971994 systemd[1]: Started sshd@10-10.0.0.12:22-10.0.0.1:60918.service - OpenSSH per-connection server daemon (10.0.0.1:60918). Feb 13 19:50:02.972954 systemd-logind[1450]: Removed session 10. Feb 13 19:50:03.006170 sshd[3855]: Accepted publickey for core from 10.0.0.1 port 60918 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:50:03.007618 sshd[3855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:03.011467 systemd-logind[1450]: New session 11 of user core. Feb 13 19:50:03.025837 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:50:03.161366 sshd[3855]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:03.171904 systemd[1]: sshd@10-10.0.0.12:22-10.0.0.1:60918.service: Deactivated successfully. Feb 13 19:50:03.174846 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:50:03.178334 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:50:03.185968 systemd[1]: Started sshd@11-10.0.0.12:22-10.0.0.1:60934.service - OpenSSH per-connection server daemon (10.0.0.1:60934). Feb 13 19:50:03.187025 systemd-logind[1450]: Removed session 11. Feb 13 19:50:03.220897 sshd[3867]: Accepted publickey for core from 10.0.0.1 port 60934 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:50:03.222112 sshd[3867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:03.225953 systemd-logind[1450]: New session 12 of user core. Feb 13 19:50:03.245873 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:50:03.347172 sshd[3867]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:03.351448 systemd[1]: sshd@11-10.0.0.12:22-10.0.0.1:60934.service: Deactivated successfully. Feb 13 19:50:03.353170 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:50:03.353924 systemd-logind[1450]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:50:03.354783 systemd-logind[1450]: Removed session 12. 
Feb 13 19:50:03.617663 kubelet[2589]: I0213 19:50:03.617543 2589 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:50:03.618235 kubelet[2589]: E0213 19:50:03.618214 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:03.958745 kernel: bpftool[4005]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:50:04.178351 systemd-networkd[1399]: vxlan.calico: Link UP Feb 13 19:50:04.178362 systemd-networkd[1399]: vxlan.calico: Gained carrier Feb 13 19:50:05.018068 kubelet[2589]: I0213 19:50:05.018017 2589 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:50:05.018861 kubelet[2589]: E0213 19:50:05.018840 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:05.444870 systemd-networkd[1399]: vxlan.calico: Gained IPv6LL Feb 13 19:50:07.340587 containerd[1464]: time="2025-02-13T19:50:07.340209925Z" level=info msg="StopPodSandbox for \"2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9\"" Feb 13 19:50:07.340587 containerd[1464]: time="2025-02-13T19:50:07.340265680Z" level=info msg="StopPodSandbox for \"bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42\"" Feb 13 19:50:07.340587 containerd[1464]: time="2025-02-13T19:50:07.340372410Z" level=info msg="StopPodSandbox for \"c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8\"" Feb 13 19:50:07.784957 kubelet[2589]: I0213 19:50:07.784881 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hcwrk" podStartSLOduration=6.607146885 podStartE2EDuration="24.7848626s" podCreationTimestamp="2025-02-13 19:49:43 +0000 UTC" firstStartedPulling="2025-02-13 19:49:43.835834275 +0000 UTC m=+21.574844324" lastFinishedPulling="2025-02-13 19:50:02.01354999 +0000 UTC m=+39.752560039" observedRunningTime="2025-02-13 19:50:02.670055034 +0000 UTC m=+40.409065083" watchObservedRunningTime="2025-02-13 19:50:07.7848626 +0000 UTC m=+45.523872649" Feb 13 19:50:07.850987 containerd[1464]: 2025-02-13 19:50:07.788 [INFO][4173] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" Feb 13 19:50:07.850987 containerd[1464]: 2025-02-13 19:50:07.788 [INFO][4173] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" iface="eth0" netns="/var/run/netns/cni-7138ae51-35de-9208-af49-e22f18799828" Feb 13 19:50:07.850987 containerd[1464]: 2025-02-13 19:50:07.788 [INFO][4173] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" iface="eth0" netns="/var/run/netns/cni-7138ae51-35de-9208-af49-e22f18799828" Feb 13 19:50:07.850987 containerd[1464]: 2025-02-13 19:50:07.788 [INFO][4173] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" iface="eth0" netns="/var/run/netns/cni-7138ae51-35de-9208-af49-e22f18799828" Feb 13 19:50:07.850987 containerd[1464]: 2025-02-13 19:50:07.788 [INFO][4173] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" Feb 13 19:50:07.850987 containerd[1464]: 2025-02-13 19:50:07.788 [INFO][4173] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" Feb 13 19:50:07.850987 containerd[1464]: 2025-02-13 19:50:07.835 [INFO][4195] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" HandleID="k8s-pod-network.c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" Workload="localhost-k8s-coredns--7db6d8ff4d--6wc8l-eth0" Feb 13 19:50:07.850987 containerd[1464]: 2025-02-13 19:50:07.836 [INFO][4195] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:07.850987 containerd[1464]: 2025-02-13 19:50:07.836 [INFO][4195] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:50:07.850987 containerd[1464]: 2025-02-13 19:50:07.844 [WARNING][4195] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" HandleID="k8s-pod-network.c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" Workload="localhost-k8s-coredns--7db6d8ff4d--6wc8l-eth0" Feb 13 19:50:07.850987 containerd[1464]: 2025-02-13 19:50:07.844 [INFO][4195] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" HandleID="k8s-pod-network.c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" Workload="localhost-k8s-coredns--7db6d8ff4d--6wc8l-eth0" Feb 13 19:50:07.850987 containerd[1464]: 2025-02-13 19:50:07.846 [INFO][4195] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:50:07.850987 containerd[1464]: 2025-02-13 19:50:07.848 [INFO][4173] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" Feb 13 19:50:07.853223 containerd[1464]: time="2025-02-13T19:50:07.851773118Z" level=info msg="TearDown network for sandbox \"c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8\" successfully" Feb 13 19:50:07.853223 containerd[1464]: time="2025-02-13T19:50:07.851810718Z" level=info msg="StopPodSandbox for \"c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8\" returns successfully" Feb 13 19:50:07.853305 kubelet[2589]: E0213 19:50:07.852217 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:07.854058 containerd[1464]: time="2025-02-13T19:50:07.853992615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6wc8l,Uid:e18d693f-e3ac-4db7-8c9c-6652e5baff8f,Namespace:kube-system,Attempt:1,}" Feb 13 19:50:07.854827 systemd[1]: run-netns-cni\x2d7138ae51\x2d35de\x2d9208\x2daf49\x2de22f18799828.mount: Deactivated successfully. 
Feb 13 19:50:07.858613 containerd[1464]: 2025-02-13 19:50:07.787 [INFO][4172] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" Feb 13 19:50:07.858613 containerd[1464]: 2025-02-13 19:50:07.787 [INFO][4172] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" iface="eth0" netns="/var/run/netns/cni-b589212d-8b01-1e33-24c4-22867e1a2247" Feb 13 19:50:07.858613 containerd[1464]: 2025-02-13 19:50:07.788 [INFO][4172] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" iface="eth0" netns="/var/run/netns/cni-b589212d-8b01-1e33-24c4-22867e1a2247" Feb 13 19:50:07.858613 containerd[1464]: 2025-02-13 19:50:07.788 [INFO][4172] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" iface="eth0" netns="/var/run/netns/cni-b589212d-8b01-1e33-24c4-22867e1a2247" Feb 13 19:50:07.858613 containerd[1464]: 2025-02-13 19:50:07.788 [INFO][4172] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" Feb 13 19:50:07.858613 containerd[1464]: 2025-02-13 19:50:07.788 [INFO][4172] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" Feb 13 19:50:07.858613 containerd[1464]: 2025-02-13 19:50:07.835 [INFO][4196] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" HandleID="k8s-pod-network.bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" Workload="localhost-k8s-csi--node--driver--gx2wj-eth0" Feb 13 19:50:07.858613 containerd[1464]: 2025-02-13 19:50:07.837 [INFO][4196] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:07.858613 containerd[1464]: 2025-02-13 19:50:07.845 [INFO][4196] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:50:07.858613 containerd[1464]: 2025-02-13 19:50:07.850 [WARNING][4196] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" HandleID="k8s-pod-network.bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" Workload="localhost-k8s-csi--node--driver--gx2wj-eth0" Feb 13 19:50:07.858613 containerd[1464]: 2025-02-13 19:50:07.850 [INFO][4196] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" HandleID="k8s-pod-network.bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" Workload="localhost-k8s-csi--node--driver--gx2wj-eth0" Feb 13 19:50:07.858613 containerd[1464]: 2025-02-13 19:50:07.852 [INFO][4196] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:50:07.858613 containerd[1464]: 2025-02-13 19:50:07.855 [INFO][4172] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" Feb 13 19:50:07.859116 containerd[1464]: time="2025-02-13T19:50:07.858872487Z" level=info msg="TearDown network for sandbox \"bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42\" successfully" Feb 13 19:50:07.859116 containerd[1464]: time="2025-02-13T19:50:07.858900269Z" level=info msg="StopPodSandbox for \"bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42\" returns successfully" Feb 13 19:50:07.859905 containerd[1464]: time="2025-02-13T19:50:07.859700151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gx2wj,Uid:e45413ed-22f7-42ee-a226-c017caa2ef3a,Namespace:calico-system,Attempt:1,}" Feb 13 19:50:07.861504 systemd[1]: run-netns-cni\x2db589212d\x2d8b01\x2d1e33\x2d24c4\x2d22867e1a2247.mount: Deactivated successfully. Feb 13 19:50:07.865295 containerd[1464]: 2025-02-13 19:50:07.785 [INFO][4166] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" Feb 13 19:50:07.865295 containerd[1464]: 2025-02-13 19:50:07.787 [INFO][4166] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" iface="eth0" netns="/var/run/netns/cni-169c1c8a-38e9-ef00-f1ac-029fc238f1db" Feb 13 19:50:07.865295 containerd[1464]: 2025-02-13 19:50:07.787 [INFO][4166] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" iface="eth0" netns="/var/run/netns/cni-169c1c8a-38e9-ef00-f1ac-029fc238f1db" Feb 13 19:50:07.865295 containerd[1464]: 2025-02-13 19:50:07.788 [INFO][4166] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" iface="eth0" netns="/var/run/netns/cni-169c1c8a-38e9-ef00-f1ac-029fc238f1db" Feb 13 19:50:07.865295 containerd[1464]: 2025-02-13 19:50:07.788 [INFO][4166] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" Feb 13 19:50:07.865295 containerd[1464]: 2025-02-13 19:50:07.788 [INFO][4166] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" Feb 13 19:50:07.865295 containerd[1464]: 2025-02-13 19:50:07.836 [INFO][4197] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" HandleID="k8s-pod-network.2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" Workload="localhost-k8s-coredns--7db6d8ff4d--27zq4-eth0" Feb 13 19:50:07.865295 containerd[1464]: 2025-02-13 19:50:07.837 [INFO][4197] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:07.865295 containerd[1464]: 2025-02-13 19:50:07.852 [INFO][4197] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:50:07.865295 containerd[1464]: 2025-02-13 19:50:07.858 [WARNING][4197] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" HandleID="k8s-pod-network.2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" Workload="localhost-k8s-coredns--7db6d8ff4d--27zq4-eth0" Feb 13 19:50:07.865295 containerd[1464]: 2025-02-13 19:50:07.858 [INFO][4197] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" HandleID="k8s-pod-network.2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" Workload="localhost-k8s-coredns--7db6d8ff4d--27zq4-eth0" Feb 13 19:50:07.865295 containerd[1464]: 2025-02-13 19:50:07.860 [INFO][4197] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:50:07.865295 containerd[1464]: 2025-02-13 19:50:07.863 [INFO][4166] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" Feb 13 19:50:07.865622 containerd[1464]: time="2025-02-13T19:50:07.865448494Z" level=info msg="TearDown network for sandbox \"2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9\" successfully" Feb 13 19:50:07.865622 containerd[1464]: time="2025-02-13T19:50:07.865473581Z" level=info msg="StopPodSandbox for \"2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9\" returns successfully" Feb 13 19:50:07.865781 kubelet[2589]: E0213 19:50:07.865750 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:07.866375 containerd[1464]: time="2025-02-13T19:50:07.866162694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-27zq4,Uid:92ce11ad-0c53-4174-8895-91b95bbb2b8b,Namespace:kube-system,Attempt:1,}" Feb 13 19:50:07.867647 systemd[1]: run-netns-cni\x2d169c1c8a\x2d38e9\x2def00\x2df1ac\x2d029fc238f1db.mount: Deactivated successfully. 
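All three sandbox teardowns above follow the same shape: the CNI DEL takes the host-wide IPAM lock, asks to release the address for the sandbox's handle, finds that the allocation is already gone, logs the WARNING "Asked to release address but it doesn't exist. Ignoring", and still reports the teardown as successful before systemd deactivates the per-sandbox netns mount. Release is treated as idempotent, so a repeated DEL cannot fail the teardown. A minimal sketch of that pattern, assuming a toy in-memory store rather than Calico's datastore (the handle string is truncated for illustration):

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("allocation not found")

// ipamStore is a stand-in for the real allocation datastore: handleID -> address.
type ipamStore map[string]string

// release removes the allocation for a handle; a missing handle is reported,
// not treated as a hard failure.
func (s ipamStore) release(handle string) error {
	if _, ok := s[handle]; !ok {
		return errNotFound
	}
	delete(s, handle)
	return nil
}

func main() {
	store := ipamStore{}
	// A DEL for a handle whose allocation was already released earlier.
	if err := store.release("k8s-pod-network.c5cad5415e89"); errors.Is(err, errNotFound) {
		fmt.Println("asked to release address but it doesn't exist; ignoring")
	}
}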
Feb 13 19:50:07.999652 systemd-networkd[1399]: cali4a1e488792a: Link UP Feb 13 19:50:08.000156 systemd-networkd[1399]: cali4a1e488792a: Gained carrier Feb 13 19:50:08.016199 containerd[1464]: 2025-02-13 19:50:07.917 [INFO][4218] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--6wc8l-eth0 coredns-7db6d8ff4d- kube-system e18d693f-e3ac-4db7-8c9c-6652e5baff8f 924 0 2025-02-13 19:49:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-6wc8l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4a1e488792a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6wc8l" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6wc8l-" Feb 13 19:50:08.016199 containerd[1464]: 2025-02-13 19:50:07.917 [INFO][4218] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6wc8l" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6wc8l-eth0" Feb 13 19:50:08.016199 containerd[1464]: 2025-02-13 19:50:07.954 [INFO][4258] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162" HandleID="k8s-pod-network.fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162" Workload="localhost-k8s-coredns--7db6d8ff4d--6wc8l-eth0" Feb 13 19:50:08.016199 containerd[1464]: 2025-02-13 19:50:07.967 [INFO][4258] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162" HandleID="k8s-pod-network.fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162" Workload="localhost-k8s-coredns--7db6d8ff4d--6wc8l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f53e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-6wc8l", "timestamp":"2025-02-13 19:50:07.95403 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:50:08.016199 containerd[1464]: 2025-02-13 19:50:07.967 [INFO][4258] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:08.016199 containerd[1464]: 2025-02-13 19:50:07.968 [INFO][4258] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:50:08.016199 containerd[1464]: 2025-02-13 19:50:07.968 [INFO][4258] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:50:08.016199 containerd[1464]: 2025-02-13 19:50:07.970 [INFO][4258] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162" host="localhost" Feb 13 19:50:08.016199 containerd[1464]: 2025-02-13 19:50:07.976 [INFO][4258] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:50:08.016199 containerd[1464]: 2025-02-13 19:50:07.980 [INFO][4258] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:50:08.016199 containerd[1464]: 2025-02-13 19:50:07.981 [INFO][4258] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:50:08.016199 containerd[1464]: 2025-02-13 19:50:07.982 [INFO][4258] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:50:08.016199 containerd[1464]: 2025-02-13 19:50:07.982 [INFO][4258] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162" host="localhost" Feb 13 19:50:08.016199 containerd[1464]: 2025-02-13 19:50:07.984 [INFO][4258] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162 Feb 13 19:50:08.016199 containerd[1464]: 2025-02-13 19:50:07.987 [INFO][4258] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162" host="localhost" Feb 13 19:50:08.016199 containerd[1464]: 2025-02-13 19:50:07.991 [INFO][4258] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162" host="localhost" Feb 13 19:50:08.016199 containerd[1464]: 2025-02-13 19:50:07.991 [INFO][4258] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162" host="localhost" Feb 13 19:50:08.016199 containerd[1464]: 2025-02-13 19:50:07.991 [INFO][4258] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
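The claim above hands coredns-7db6d8ff4d-6wc8l the first free address, 192.168.88.129, out of the node's affine block 192.168.88.128/26. As a quick sanity check (illustrative only, not Calico's implementation), the standard library's net/netip confirms the address sits inside that block and how many addresses a /26 holds:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // node-affine IPAM block from the log
	addr := netip.MustParseAddr("192.168.88.129")       // address claimed for the coredns pod

	fmt.Println("address in block:", block.Contains(addr)) // true
	fmt.Println("block size:", 1<<(32-block.Bits()))       // 64 addresses in a /26
}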
Feb 13 19:50:08.016199 containerd[1464]: 2025-02-13 19:50:07.991 [INFO][4258] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162" HandleID="k8s-pod-network.fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162" Workload="localhost-k8s-coredns--7db6d8ff4d--6wc8l-eth0" Feb 13 19:50:08.017001 containerd[1464]: 2025-02-13 19:50:07.994 [INFO][4218] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6wc8l" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6wc8l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--6wc8l-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e18d693f-e3ac-4db7-8c9c-6652e5baff8f", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-6wc8l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a1e488792a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:08.017001 containerd[1464]: 2025-02-13 19:50:07.995 [INFO][4218] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6wc8l" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6wc8l-eth0" Feb 13 19:50:08.017001 containerd[1464]: 2025-02-13 19:50:07.995 [INFO][4218] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a1e488792a ContainerID="fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6wc8l" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6wc8l-eth0" Feb 13 19:50:08.017001 containerd[1464]: 2025-02-13 19:50:07.999 [INFO][4218] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6wc8l" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6wc8l-eth0" Feb 13 19:50:08.017001 containerd[1464]: 2025-02-13 19:50:08.000 
[INFO][4218] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6wc8l" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6wc8l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--6wc8l-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e18d693f-e3ac-4db7-8c9c-6652e5baff8f", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162", Pod:"coredns-7db6d8ff4d-6wc8l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a1e488792a", MAC:"66:af:04:ea:74:50", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:08.017261 containerd[1464]: 2025-02-13 19:50:08.009 [INFO][4218] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6wc8l" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--6wc8l-eth0" Feb 13 19:50:08.034464 systemd-networkd[1399]: calibf6592833b1: Link UP Feb 13 19:50:08.035096 systemd-networkd[1399]: calibf6592833b1: Gained carrier Feb 13 19:50:08.050838 containerd[1464]: time="2025-02-13T19:50:08.050657355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:08.050838 containerd[1464]: time="2025-02-13T19:50:08.050731995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:08.050838 containerd[1464]: time="2025-02-13T19:50:08.050744518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:08.050996 containerd[1464]: time="2025-02-13T19:50:08.050825300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:08.059841 containerd[1464]: 2025-02-13 19:50:07.917 [INFO][4229] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--gx2wj-eth0 csi-node-driver- calico-system e45413ed-22f7-42ee-a226-c017caa2ef3a 923 0 2025-02-13 19:49:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-gx2wj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibf6592833b1 [] []}} ContainerID="0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0" Namespace="calico-system" Pod="csi-node-driver-gx2wj" WorkloadEndpoint="localhost-k8s-csi--node--driver--gx2wj-" Feb 13 19:50:08.059841 containerd[1464]: 2025-02-13 19:50:07.917 [INFO][4229] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0" Namespace="calico-system" Pod="csi-node-driver-gx2wj" WorkloadEndpoint="localhost-k8s-csi--node--driver--gx2wj-eth0" Feb 13 19:50:08.059841 containerd[1464]: 2025-02-13 19:50:07.970 [INFO][4259] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0" HandleID="k8s-pod-network.0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0" Workload="localhost-k8s-csi--node--driver--gx2wj-eth0" Feb 13 19:50:08.059841 containerd[1464]: 2025-02-13 19:50:07.976 [INFO][4259] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0" HandleID="k8s-pod-network.0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0" Workload="localhost-k8s-csi--node--driver--gx2wj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002908c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-gx2wj", "timestamp":"2025-02-13 19:50:07.9700403 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:50:08.059841 containerd[1464]: 2025-02-13 19:50:07.977 [INFO][4259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:08.059841 containerd[1464]: 2025-02-13 19:50:07.991 [INFO][4259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:50:08.059841 containerd[1464]: 2025-02-13 19:50:07.992 [INFO][4259] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:50:08.059841 containerd[1464]: 2025-02-13 19:50:07.993 [INFO][4259] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0" host="localhost" Feb 13 19:50:08.059841 containerd[1464]: 2025-02-13 19:50:07.999 [INFO][4259] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:50:08.059841 containerd[1464]: 2025-02-13 19:50:08.006 [INFO][4259] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:50:08.059841 containerd[1464]: 2025-02-13 19:50:08.010 [INFO][4259] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:50:08.059841 containerd[1464]: 2025-02-13 19:50:08.013 [INFO][4259] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:50:08.059841 containerd[1464]: 2025-02-13 19:50:08.013 [INFO][4259] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0" host="localhost" Feb 13 19:50:08.059841 containerd[1464]: 2025-02-13 19:50:08.017 [INFO][4259] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0 Feb 13 19:50:08.059841 containerd[1464]: 2025-02-13 19:50:08.022 [INFO][4259] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0" host="localhost" Feb 13 19:50:08.059841 containerd[1464]: 2025-02-13 19:50:08.027 [INFO][4259] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0" host="localhost" Feb 13 19:50:08.059841 containerd[1464]: 2025-02-13 19:50:08.027 [INFO][4259] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0" host="localhost" Feb 13 19:50:08.059841 containerd[1464]: 2025-02-13 19:50:08.027 [INFO][4259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:50:08.059841 containerd[1464]: 2025-02-13 19:50:08.027 [INFO][4259] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0" HandleID="k8s-pod-network.0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0" Workload="localhost-k8s-csi--node--driver--gx2wj-eth0" Feb 13 19:50:08.061080 containerd[1464]: 2025-02-13 19:50:08.030 [INFO][4229] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0" Namespace="calico-system" Pod="csi-node-driver-gx2wj" WorkloadEndpoint="localhost-k8s-csi--node--driver--gx2wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gx2wj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e45413ed-22f7-42ee-a226-c017caa2ef3a", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-gx2wj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibf6592833b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:08.061080 containerd[1464]: 2025-02-13 19:50:08.030 [INFO][4229] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0" Namespace="calico-system" Pod="csi-node-driver-gx2wj" WorkloadEndpoint="localhost-k8s-csi--node--driver--gx2wj-eth0" Feb 13 19:50:08.061080 containerd[1464]: 2025-02-13 19:50:08.030 [INFO][4229] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf6592833b1 ContainerID="0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0" Namespace="calico-system" Pod="csi-node-driver-gx2wj" WorkloadEndpoint="localhost-k8s-csi--node--driver--gx2wj-eth0" Feb 13 19:50:08.061080 containerd[1464]: 2025-02-13 19:50:08.034 [INFO][4229] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0" Namespace="calico-system" Pod="csi-node-driver-gx2wj" WorkloadEndpoint="localhost-k8s-csi--node--driver--gx2wj-eth0" Feb 13 19:50:08.061080 containerd[1464]: 2025-02-13 19:50:08.035 [INFO][4229] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0" Namespace="calico-system" Pod="csi-node-driver-gx2wj" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--gx2wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gx2wj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e45413ed-22f7-42ee-a226-c017caa2ef3a", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0", Pod:"csi-node-driver-gx2wj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibf6592833b1", MAC:"3a:38:d0:74:a8:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:08.061080 containerd[1464]: 2025-02-13 19:50:08.057 [INFO][4229] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0" Namespace="calico-system" Pod="csi-node-driver-gx2wj" WorkloadEndpoint="localhost-k8s-csi--node--driver--gx2wj-eth0" Feb 13 19:50:08.082236 systemd-networkd[1399]: cali18cfb30d2c2: Link UP Feb 13 19:50:08.083036 systemd-networkd[1399]: cali18cfb30d2c2: Gained carrier Feb 13 19:50:08.087175 systemd[1]: Started cri-containerd-fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162.scope - libcontainer container fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162. Feb 13 19:50:08.098994 containerd[1464]: time="2025-02-13T19:50:08.098859259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:08.098994 containerd[1464]: time="2025-02-13T19:50:08.098914724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:08.098994 containerd[1464]: time="2025-02-13T19:50:08.098936294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:08.099159 containerd[1464]: 2025-02-13 19:50:07.924 [INFO][4246] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--27zq4-eth0 coredns-7db6d8ff4d- kube-system 92ce11ad-0c53-4174-8895-91b95bbb2b8b 922 0 2025-02-13 19:49:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-27zq4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali18cfb30d2c2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-27zq4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--27zq4-" Feb 13 19:50:08.099159 containerd[1464]: 2025-02-13 19:50:07.924 [INFO][4246] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-27zq4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--27zq4-eth0" Feb 13 19:50:08.099159 containerd[1464]: 2025-02-13 19:50:07.978 [INFO][4268] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d" HandleID="k8s-pod-network.925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d" Workload="localhost-k8s-coredns--7db6d8ff4d--27zq4-eth0" Feb 13 19:50:08.099159 containerd[1464]: 2025-02-13 19:50:07.984 [INFO][4268] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d" HandleID="k8s-pod-network.925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d" Workload="localhost-k8s-coredns--7db6d8ff4d--27zq4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002deaa0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-27zq4", "timestamp":"2025-02-13 19:50:07.978785138 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:50:08.099159 containerd[1464]: 2025-02-13 19:50:07.985 [INFO][4268] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:08.099159 containerd[1464]: 2025-02-13 19:50:08.028 [INFO][4268] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:50:08.099159 containerd[1464]: 2025-02-13 19:50:08.028 [INFO][4268] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:50:08.099159 containerd[1464]: 2025-02-13 19:50:08.033 [INFO][4268] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d" host="localhost" Feb 13 19:50:08.099159 containerd[1464]: 2025-02-13 19:50:08.040 [INFO][4268] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:50:08.099159 containerd[1464]: 2025-02-13 19:50:08.049 [INFO][4268] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:50:08.099159 containerd[1464]: 2025-02-13 19:50:08.053 [INFO][4268] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:50:08.099159 containerd[1464]: 2025-02-13 19:50:08.056 [INFO][4268] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:50:08.099159 containerd[1464]: 2025-02-13 19:50:08.056 [INFO][4268] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d" host="localhost" Feb 13 19:50:08.099159 containerd[1464]: 2025-02-13 19:50:08.058 [INFO][4268] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d Feb 13 19:50:08.099159 containerd[1464]: 2025-02-13 19:50:08.063 [INFO][4268] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d" host="localhost" Feb 13 19:50:08.099159 containerd[1464]: 2025-02-13 19:50:08.070 [INFO][4268] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d" host="localhost" Feb 13 19:50:08.099159 containerd[1464]: 2025-02-13 19:50:08.071 [INFO][4268] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d" host="localhost" Feb 13 19:50:08.099159 containerd[1464]: 2025-02-13 19:50:08.071 [INFO][4268] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
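With this third request the same /26 block has now yielded 192.168.88.129, .130 and .131 in order, one per sandbox. Conceptually an allocation block is a small bitmap scanned for the first free slot; the sketch below shows only that idea under that simplifying assumption (it is not Calico's ipam.go, which also records handles and attributes in the datastore):

package main

import (
	"errors"
	"fmt"
	"net/netip"
)

// block is a toy /26 allocation block: 64 slots, each true once the
// corresponding address has been handed out.
type block struct {
	base netip.Addr
	used [64]bool
}

// assign returns the first unused address in the block.
func (b *block) assign() (netip.Addr, error) {
	for i := range b.used {
		if !b.used[i] {
			b.used[i] = true
			addr := b.base
			for j := 0; j < i; j++ {
				addr = addr.Next()
			}
			return addr, nil
		}
	}
	return netip.Addr{}, errors.New("block exhausted")
}

func main() {
	b := &block{base: netip.MustParseAddr("192.168.88.128")}
	b.used[0] = true // treat the block base .128 as taken, matching the log's first grant of .129
	for i := 0; i < 3; i++ {
		a, _ := b.assign()
		fmt.Println("assigned:", a) // .129, .130, .131 in sequence
	}
}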
Feb 13 19:50:08.099159 containerd[1464]: 2025-02-13 19:50:08.071 [INFO][4268] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d" HandleID="k8s-pod-network.925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d" Workload="localhost-k8s-coredns--7db6d8ff4d--27zq4-eth0" Feb 13 19:50:08.099586 containerd[1464]: 2025-02-13 19:50:08.078 [INFO][4246] cni-plugin/k8s.go 386: Populated endpoint ContainerID="925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-27zq4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--27zq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--27zq4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"92ce11ad-0c53-4174-8895-91b95bbb2b8b", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-27zq4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18cfb30d2c2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:08.099586 containerd[1464]: 2025-02-13 19:50:08.078 [INFO][4246] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-27zq4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--27zq4-eth0" Feb 13 19:50:08.099586 containerd[1464]: 2025-02-13 19:50:08.078 [INFO][4246] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18cfb30d2c2 ContainerID="925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-27zq4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--27zq4-eth0" Feb 13 19:50:08.099586 containerd[1464]: 2025-02-13 19:50:08.082 [INFO][4246] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-27zq4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--27zq4-eth0" Feb 13 19:50:08.099586 containerd[1464]: 2025-02-13 19:50:08.083 
[INFO][4246] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-27zq4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--27zq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--27zq4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"92ce11ad-0c53-4174-8895-91b95bbb2b8b", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d", Pod:"coredns-7db6d8ff4d-27zq4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18cfb30d2c2", MAC:"32:2d:2c:ba:71:b2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:08.099825 containerd[1464]: 2025-02-13 19:50:08.094 [INFO][4246] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-27zq4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--27zq4-eth0" Feb 13 19:50:08.099825 containerd[1464]: time="2025-02-13T19:50:08.099019570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:08.106724 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:50:08.127981 systemd[1]: Started cri-containerd-0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0.scope - libcontainer container 0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0. Feb 13 19:50:08.135012 containerd[1464]: time="2025-02-13T19:50:08.132909775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:08.135012 containerd[1464]: time="2025-02-13T19:50:08.134485194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:08.135012 containerd[1464]: time="2025-02-13T19:50:08.134499390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:08.135012 containerd[1464]: time="2025-02-13T19:50:08.134704345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:08.140832 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:50:08.155854 containerd[1464]: time="2025-02-13T19:50:08.155515686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6wc8l,Uid:e18d693f-e3ac-4db7-8c9c-6652e5baff8f,Namespace:kube-system,Attempt:1,} returns sandbox id \"fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162\"" Feb 13 19:50:08.157138 kubelet[2589]: E0213 19:50:08.157113 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:08.160042 containerd[1464]: time="2025-02-13T19:50:08.160006317Z" level=info msg="CreateContainer within sandbox \"fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:50:08.162391 systemd[1]: Started cri-containerd-925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d.scope - libcontainer container 925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d. Feb 13 19:50:08.167538 containerd[1464]: time="2025-02-13T19:50:08.167504644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gx2wj,Uid:e45413ed-22f7-42ee-a226-c017caa2ef3a,Namespace:calico-system,Attempt:1,} returns sandbox id \"0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0\"" Feb 13 19:50:08.169575 containerd[1464]: time="2025-02-13T19:50:08.169531981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:50:08.177349 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:50:08.179232 containerd[1464]: time="2025-02-13T19:50:08.179189121Z" level=info msg="CreateContainer within sandbox \"fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ab6fba3fdf010935d8147139b76fe8acd2f941a6910c0855d7a3fdd1eb612d0b\"" Feb 13 19:50:08.180063 containerd[1464]: time="2025-02-13T19:50:08.180015983Z" level=info msg="StartContainer for \"ab6fba3fdf010935d8147139b76fe8acd2f941a6910c0855d7a3fdd1eb612d0b\"" Feb 13 19:50:08.202305 containerd[1464]: time="2025-02-13T19:50:08.202248753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-27zq4,Uid:92ce11ad-0c53-4174-8895-91b95bbb2b8b,Namespace:kube-system,Attempt:1,} returns sandbox id \"925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d\"" Feb 13 19:50:08.203200 kubelet[2589]: E0213 19:50:08.203044 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:08.205681 containerd[1464]: time="2025-02-13T19:50:08.205653155Z" level=info msg="CreateContainer within sandbox \"925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d\" for 
container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:50:08.209988 systemd[1]: Started cri-containerd-ab6fba3fdf010935d8147139b76fe8acd2f941a6910c0855d7a3fdd1eb612d0b.scope - libcontainer container ab6fba3fdf010935d8147139b76fe8acd2f941a6910c0855d7a3fdd1eb612d0b. Feb 13 19:50:08.223708 containerd[1464]: time="2025-02-13T19:50:08.223649652Z" level=info msg="CreateContainer within sandbox \"925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3ca12e7bb2dcecb34035126940deafed1bcae2059471bf7adb728947ba51493d\"" Feb 13 19:50:08.224367 containerd[1464]: time="2025-02-13T19:50:08.224220523Z" level=info msg="StartContainer for \"3ca12e7bb2dcecb34035126940deafed1bcae2059471bf7adb728947ba51493d\"" Feb 13 19:50:08.242229 containerd[1464]: time="2025-02-13T19:50:08.242171054Z" level=info msg="StartContainer for \"ab6fba3fdf010935d8147139b76fe8acd2f941a6910c0855d7a3fdd1eb612d0b\" returns successfully" Feb 13 19:50:08.255067 systemd[1]: Started cri-containerd-3ca12e7bb2dcecb34035126940deafed1bcae2059471bf7adb728947ba51493d.scope - libcontainer container 3ca12e7bb2dcecb34035126940deafed1bcae2059471bf7adb728947ba51493d. Feb 13 19:50:08.286120 containerd[1464]: time="2025-02-13T19:50:08.286005198Z" level=info msg="StartContainer for \"3ca12e7bb2dcecb34035126940deafed1bcae2059471bf7adb728947ba51493d\" returns successfully" Feb 13 19:50:08.339924 containerd[1464]: time="2025-02-13T19:50:08.339818167Z" level=info msg="StopPodSandbox for \"edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c\"" Feb 13 19:50:08.358567 systemd[1]: Started sshd@12-10.0.0.12:22-10.0.0.1:41048.service - OpenSSH per-connection server daemon (10.0.0.1:41048). Feb 13 19:50:08.438197 sshd[4553]: Accepted publickey for core from 10.0.0.1 port 41048 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:50:08.440217 sshd[4553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:08.444763 systemd-logind[1450]: New session 13 of user core. Feb 13 19:50:08.445677 containerd[1464]: 2025-02-13 19:50:08.383 [INFO][4546] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" Feb 13 19:50:08.445677 containerd[1464]: 2025-02-13 19:50:08.383 [INFO][4546] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" iface="eth0" netns="/var/run/netns/cni-c1d968e3-a089-bf80-4d44-d076e24f81c4" Feb 13 19:50:08.445677 containerd[1464]: 2025-02-13 19:50:08.384 [INFO][4546] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" iface="eth0" netns="/var/run/netns/cni-c1d968e3-a089-bf80-4d44-d076e24f81c4" Feb 13 19:50:08.445677 containerd[1464]: 2025-02-13 19:50:08.385 [INFO][4546] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" iface="eth0" netns="/var/run/netns/cni-c1d968e3-a089-bf80-4d44-d076e24f81c4" Feb 13 19:50:08.445677 containerd[1464]: 2025-02-13 19:50:08.385 [INFO][4546] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" Feb 13 19:50:08.445677 containerd[1464]: 2025-02-13 19:50:08.385 [INFO][4546] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" Feb 13 19:50:08.445677 containerd[1464]: 2025-02-13 19:50:08.434 [INFO][4557] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" HandleID="k8s-pod-network.edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" Workload="localhost-k8s-calico--apiserver--57797d99cf--hqvzt-eth0" Feb 13 19:50:08.445677 containerd[1464]: 2025-02-13 19:50:08.434 [INFO][4557] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:08.445677 containerd[1464]: 2025-02-13 19:50:08.434 [INFO][4557] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:50:08.445677 containerd[1464]: 2025-02-13 19:50:08.439 [WARNING][4557] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" HandleID="k8s-pod-network.edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" Workload="localhost-k8s-calico--apiserver--57797d99cf--hqvzt-eth0" Feb 13 19:50:08.445677 containerd[1464]: 2025-02-13 19:50:08.439 [INFO][4557] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" HandleID="k8s-pod-network.edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" Workload="localhost-k8s-calico--apiserver--57797d99cf--hqvzt-eth0" Feb 13 19:50:08.445677 containerd[1464]: 2025-02-13 19:50:08.440 [INFO][4557] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:50:08.445677 containerd[1464]: 2025-02-13 19:50:08.443 [INFO][4546] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" Feb 13 19:50:08.446372 containerd[1464]: time="2025-02-13T19:50:08.445793376Z" level=info msg="TearDown network for sandbox \"edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c\" successfully" Feb 13 19:50:08.446372 containerd[1464]: time="2025-02-13T19:50:08.445819094Z" level=info msg="StopPodSandbox for \"edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c\" returns successfully" Feb 13 19:50:08.446552 containerd[1464]: time="2025-02-13T19:50:08.446521743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57797d99cf-hqvzt,Uid:e8cf9ce0-cc8d-4deb-a6e9-a97d42930d24,Namespace:calico-apiserver,Attempt:1,}" Feb 13 19:50:08.451897 systemd[1]: Started session-13.scope - Session 13 of User core. 
Feb 13 19:50:08.570074 systemd-networkd[1399]: cali9e0ae89a9d6: Link UP Feb 13 19:50:08.572423 systemd-networkd[1399]: cali9e0ae89a9d6: Gained carrier Feb 13 19:50:08.588956 containerd[1464]: 2025-02-13 19:50:08.491 [INFO][4566] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57797d99cf--hqvzt-eth0 calico-apiserver-57797d99cf- calico-apiserver e8cf9ce0-cc8d-4deb-a6e9-a97d42930d24 946 0 2025-02-13 19:49:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57797d99cf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57797d99cf-hqvzt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9e0ae89a9d6 [] []}} ContainerID="74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666" Namespace="calico-apiserver" Pod="calico-apiserver-57797d99cf-hqvzt" WorkloadEndpoint="localhost-k8s-calico--apiserver--57797d99cf--hqvzt-" Feb 13 19:50:08.588956 containerd[1464]: 2025-02-13 19:50:08.492 [INFO][4566] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666" Namespace="calico-apiserver" Pod="calico-apiserver-57797d99cf-hqvzt" WorkloadEndpoint="localhost-k8s-calico--apiserver--57797d99cf--hqvzt-eth0" Feb 13 19:50:08.588956 containerd[1464]: 2025-02-13 19:50:08.524 [INFO][4581] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666" HandleID="k8s-pod-network.74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666" Workload="localhost-k8s-calico--apiserver--57797d99cf--hqvzt-eth0" Feb 13 19:50:08.588956 containerd[1464]: 2025-02-13 19:50:08.533 [INFO][4581] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666" HandleID="k8s-pod-network.74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666" Workload="localhost-k8s-calico--apiserver--57797d99cf--hqvzt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000361480), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-57797d99cf-hqvzt", "timestamp":"2025-02-13 19:50:08.524013126 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:50:08.588956 containerd[1464]: 2025-02-13 19:50:08.533 [INFO][4581] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:08.588956 containerd[1464]: 2025-02-13 19:50:08.533 [INFO][4581] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:50:08.588956 containerd[1464]: 2025-02-13 19:50:08.533 [INFO][4581] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:50:08.588956 containerd[1464]: 2025-02-13 19:50:08.535 [INFO][4581] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666" host="localhost" Feb 13 19:50:08.588956 containerd[1464]: 2025-02-13 19:50:08.539 [INFO][4581] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:50:08.588956 containerd[1464]: 2025-02-13 19:50:08.544 [INFO][4581] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:50:08.588956 containerd[1464]: 2025-02-13 19:50:08.545 [INFO][4581] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:50:08.588956 containerd[1464]: 2025-02-13 19:50:08.548 [INFO][4581] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:50:08.588956 containerd[1464]: 2025-02-13 19:50:08.548 [INFO][4581] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666" host="localhost" Feb 13 19:50:08.588956 containerd[1464]: 2025-02-13 19:50:08.549 [INFO][4581] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666 Feb 13 19:50:08.588956 containerd[1464]: 2025-02-13 19:50:08.554 [INFO][4581] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666" host="localhost" Feb 13 19:50:08.588956 containerd[1464]: 2025-02-13 19:50:08.562 [INFO][4581] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666" host="localhost" Feb 13 19:50:08.588956 containerd[1464]: 2025-02-13 19:50:08.562 [INFO][4581] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666" host="localhost" Feb 13 19:50:08.588956 containerd[1464]: 2025-02-13 19:50:08.562 [INFO][4581] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
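As with the earlier requests, this ADD for the calico-apiserver pod only proceeds after "Acquired host-wide IPAM lock": the CNI invocations running in parallel here ([4258], [4259], [4268] and now [4581]) serialize around the node-wide lock, which is why each one logs its acquire, assign and release as an uninterrupted run. A minimal sketch of that serialization, using a plain sync.Mutex as a stand-in for the real host-wide lock and a toy counter for the addresses:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var hostWideIPAMLock sync.Mutex // stand-in for the node-wide IPAM lock
	next := 129                     // next host part to hand out in 192.168.88.0/24 (toy counter)
	var wg sync.WaitGroup

	pods := []string{"coredns-6wc8l", "csi-node-driver-gx2wj", "coredns-27zq4", "calico-apiserver-hqvzt"}
	for _, pod := range pods {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			hostWideIPAMLock.Lock() // "Acquired host-wide IPAM lock."
			defer hostWideIPAMLock.Unlock()
			fmt.Printf("%s -> 192.168.88.%d\n", pod, next)
			next++ // only ever touched while holding the lock
		}(pod)
	}
	wg.Wait()
}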
Feb 13 19:50:08.588956 containerd[1464]: 2025-02-13 19:50:08.562 [INFO][4581] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666" HandleID="k8s-pod-network.74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666" Workload="localhost-k8s-calico--apiserver--57797d99cf--hqvzt-eth0" Feb 13 19:50:08.589661 containerd[1464]: 2025-02-13 19:50:08.566 [INFO][4566] cni-plugin/k8s.go 386: Populated endpoint ContainerID="74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666" Namespace="calico-apiserver" Pod="calico-apiserver-57797d99cf-hqvzt" WorkloadEndpoint="localhost-k8s-calico--apiserver--57797d99cf--hqvzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57797d99cf--hqvzt-eth0", GenerateName:"calico-apiserver-57797d99cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8cf9ce0-cc8d-4deb-a6e9-a97d42930d24", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57797d99cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57797d99cf-hqvzt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e0ae89a9d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:08.589661 containerd[1464]: 2025-02-13 19:50:08.567 [INFO][4566] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666" Namespace="calico-apiserver" Pod="calico-apiserver-57797d99cf-hqvzt" WorkloadEndpoint="localhost-k8s-calico--apiserver--57797d99cf--hqvzt-eth0" Feb 13 19:50:08.589661 containerd[1464]: 2025-02-13 19:50:08.567 [INFO][4566] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e0ae89a9d6 ContainerID="74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666" Namespace="calico-apiserver" Pod="calico-apiserver-57797d99cf-hqvzt" WorkloadEndpoint="localhost-k8s-calico--apiserver--57797d99cf--hqvzt-eth0" Feb 13 19:50:08.589661 containerd[1464]: 2025-02-13 19:50:08.570 [INFO][4566] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666" Namespace="calico-apiserver" Pod="calico-apiserver-57797d99cf-hqvzt" WorkloadEndpoint="localhost-k8s-calico--apiserver--57797d99cf--hqvzt-eth0" Feb 13 19:50:08.589661 containerd[1464]: 2025-02-13 19:50:08.571 [INFO][4566] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666" Namespace="calico-apiserver" Pod="calico-apiserver-57797d99cf-hqvzt" WorkloadEndpoint="localhost-k8s-calico--apiserver--57797d99cf--hqvzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57797d99cf--hqvzt-eth0", GenerateName:"calico-apiserver-57797d99cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8cf9ce0-cc8d-4deb-a6e9-a97d42930d24", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57797d99cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666", Pod:"calico-apiserver-57797d99cf-hqvzt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e0ae89a9d6", MAC:"6e:4c:c3:e8:36:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:08.589661 containerd[1464]: 2025-02-13 19:50:08.584 [INFO][4566] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666" Namespace="calico-apiserver" Pod="calico-apiserver-57797d99cf-hqvzt" WorkloadEndpoint="localhost-k8s-calico--apiserver--57797d99cf--hqvzt-eth0" Feb 13 19:50:08.597093 sshd[4553]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:08.601270 systemd[1]: sshd@12-10.0.0.12:22-10.0.0.1:41048.service: Deactivated successfully. Feb 13 19:50:08.603890 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:50:08.606221 systemd-logind[1450]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:50:08.607552 systemd-logind[1450]: Removed session 13. Feb 13 19:50:08.612140 containerd[1464]: time="2025-02-13T19:50:08.611963073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:08.612140 containerd[1464]: time="2025-02-13T19:50:08.612029909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:08.612140 containerd[1464]: time="2025-02-13T19:50:08.612040228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:08.612397 containerd[1464]: time="2025-02-13T19:50:08.612119166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:08.632907 systemd[1]: Started cri-containerd-74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666.scope - libcontainer container 74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666. Feb 13 19:50:08.646410 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:50:08.655775 kubelet[2589]: E0213 19:50:08.655710 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:08.656380 kubelet[2589]: E0213 19:50:08.656352 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:08.665690 kubelet[2589]: I0213 19:50:08.665532 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6wc8l" podStartSLOduration=31.665510854 podStartE2EDuration="31.665510854s" podCreationTimestamp="2025-02-13 19:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:50:08.664463188 +0000 UTC m=+46.403473247" watchObservedRunningTime="2025-02-13 19:50:08.665510854 +0000 UTC m=+46.404520904" Feb 13 19:50:08.678463 containerd[1464]: time="2025-02-13T19:50:08.678394262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57797d99cf-hqvzt,Uid:e8cf9ce0-cc8d-4deb-a6e9-a97d42930d24,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666\"" Feb 13 19:50:08.682918 kubelet[2589]: I0213 19:50:08.682781 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-27zq4" podStartSLOduration=31.682758475 podStartE2EDuration="31.682758475s" podCreationTimestamp="2025-02-13 19:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:50:08.681826266 +0000 UTC m=+46.420836315" watchObservedRunningTime="2025-02-13 19:50:08.682758475 +0000 UTC m=+46.421768524" Feb 13 19:50:08.860748 systemd[1]: run-netns-cni\x2dc1d968e3\x2da089\x2dbf80\x2d4d44\x2dd076e24f81c4.mount: Deactivated successfully. Feb 13 19:50:09.284930 systemd-networkd[1399]: calibf6592833b1: Gained IPv6LL Feb 13 19:50:09.285741 systemd-networkd[1399]: cali4a1e488792a: Gained IPv6LL Feb 13 19:50:09.340602 containerd[1464]: time="2025-02-13T19:50:09.340533042Z" level=info msg="StopPodSandbox for \"9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305\"" Feb 13 19:50:09.349816 systemd-networkd[1399]: cali18cfb30d2c2: Gained IPv6LL Feb 13 19:50:09.532520 containerd[1464]: 2025-02-13 19:50:09.500 [INFO][4675] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" Feb 13 19:50:09.532520 containerd[1464]: 2025-02-13 19:50:09.500 [INFO][4675] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" iface="eth0" netns="/var/run/netns/cni-8ea49be7-e4e1-9f35-a632-fa1df4be617e" Feb 13 19:50:09.532520 containerd[1464]: 2025-02-13 19:50:09.500 [INFO][4675] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" iface="eth0" netns="/var/run/netns/cni-8ea49be7-e4e1-9f35-a632-fa1df4be617e" Feb 13 19:50:09.532520 containerd[1464]: 2025-02-13 19:50:09.500 [INFO][4675] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" iface="eth0" netns="/var/run/netns/cni-8ea49be7-e4e1-9f35-a632-fa1df4be617e" Feb 13 19:50:09.532520 containerd[1464]: 2025-02-13 19:50:09.500 [INFO][4675] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" Feb 13 19:50:09.532520 containerd[1464]: 2025-02-13 19:50:09.500 [INFO][4675] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" Feb 13 19:50:09.532520 containerd[1464]: 2025-02-13 19:50:09.521 [INFO][4688] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" HandleID="k8s-pod-network.9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" Workload="localhost-k8s-calico--apiserver--57797d99cf--j56gq-eth0" Feb 13 19:50:09.532520 containerd[1464]: 2025-02-13 19:50:09.521 [INFO][4688] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:09.532520 containerd[1464]: 2025-02-13 19:50:09.522 [INFO][4688] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:50:09.532520 containerd[1464]: 2025-02-13 19:50:09.526 [WARNING][4688] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" HandleID="k8s-pod-network.9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" Workload="localhost-k8s-calico--apiserver--57797d99cf--j56gq-eth0" Feb 13 19:50:09.532520 containerd[1464]: 2025-02-13 19:50:09.526 [INFO][4688] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" HandleID="k8s-pod-network.9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" Workload="localhost-k8s-calico--apiserver--57797d99cf--j56gq-eth0" Feb 13 19:50:09.532520 containerd[1464]: 2025-02-13 19:50:09.527 [INFO][4688] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:50:09.532520 containerd[1464]: 2025-02-13 19:50:09.530 [INFO][4675] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" Feb 13 19:50:09.533272 containerd[1464]: time="2025-02-13T19:50:09.533239747Z" level=info msg="TearDown network for sandbox \"9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305\" successfully" Feb 13 19:50:09.533304 containerd[1464]: time="2025-02-13T19:50:09.533273971Z" level=info msg="StopPodSandbox for \"9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305\" returns successfully" Feb 13 19:50:09.533884 containerd[1464]: time="2025-02-13T19:50:09.533861804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57797d99cf-j56gq,Uid:607000e1-6cc7-4c34-945a-f49ad59d4c78,Namespace:calico-apiserver,Attempt:1,}" Feb 13 19:50:09.535876 systemd[1]: run-netns-cni\x2d8ea49be7\x2de4e1\x2d9f35\x2da632\x2dfa1df4be617e.mount: Deactivated successfully. Feb 13 19:50:09.659653 kubelet[2589]: E0213 19:50:09.659585 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:09.659653 kubelet[2589]: E0213 19:50:09.659640 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:09.668897 systemd-networkd[1399]: cali9e0ae89a9d6: Gained IPv6LL Feb 13 19:50:10.077764 systemd-networkd[1399]: cali1aff0a6ec82: Link UP Feb 13 19:50:10.078952 systemd-networkd[1399]: cali1aff0a6ec82: Gained carrier Feb 13 19:50:10.090564 containerd[1464]: 2025-02-13 19:50:10.014 [INFO][4699] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57797d99cf--j56gq-eth0 calico-apiserver-57797d99cf- calico-apiserver 607000e1-6cc7-4c34-945a-f49ad59d4c78 968 0 2025-02-13 19:49:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57797d99cf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57797d99cf-j56gq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1aff0a6ec82 [] []}} ContainerID="091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2" Namespace="calico-apiserver" Pod="calico-apiserver-57797d99cf-j56gq" WorkloadEndpoint="localhost-k8s-calico--apiserver--57797d99cf--j56gq-" Feb 13 19:50:10.090564 containerd[1464]: 2025-02-13 19:50:10.014 [INFO][4699] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2" Namespace="calico-apiserver" Pod="calico-apiserver-57797d99cf-j56gq" WorkloadEndpoint="localhost-k8s-calico--apiserver--57797d99cf--j56gq-eth0" Feb 13 19:50:10.090564 containerd[1464]: 2025-02-13 19:50:10.041 [INFO][4712] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2" HandleID="k8s-pod-network.091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2" Workload="localhost-k8s-calico--apiserver--57797d99cf--j56gq-eth0" Feb 13 19:50:10.090564 containerd[1464]: 2025-02-13 19:50:10.048 [INFO][4712] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2" 
HandleID="k8s-pod-network.091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2" Workload="localhost-k8s-calico--apiserver--57797d99cf--j56gq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038ae80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-57797d99cf-j56gq", "timestamp":"2025-02-13 19:50:10.041030797 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:50:10.090564 containerd[1464]: 2025-02-13 19:50:10.049 [INFO][4712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:10.090564 containerd[1464]: 2025-02-13 19:50:10.049 [INFO][4712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:50:10.090564 containerd[1464]: 2025-02-13 19:50:10.049 [INFO][4712] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:50:10.090564 containerd[1464]: 2025-02-13 19:50:10.050 [INFO][4712] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2" host="localhost" Feb 13 19:50:10.090564 containerd[1464]: 2025-02-13 19:50:10.055 [INFO][4712] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:50:10.090564 containerd[1464]: 2025-02-13 19:50:10.058 [INFO][4712] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:50:10.090564 containerd[1464]: 2025-02-13 19:50:10.060 [INFO][4712] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:50:10.090564 containerd[1464]: 2025-02-13 19:50:10.062 [INFO][4712] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:50:10.090564 containerd[1464]: 2025-02-13 19:50:10.062 [INFO][4712] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2" host="localhost" Feb 13 19:50:10.090564 containerd[1464]: 2025-02-13 19:50:10.063 [INFO][4712] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2 Feb 13 19:50:10.090564 containerd[1464]: 2025-02-13 19:50:10.067 [INFO][4712] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2" host="localhost" Feb 13 19:50:10.090564 containerd[1464]: 2025-02-13 19:50:10.072 [INFO][4712] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2" host="localhost" Feb 13 19:50:10.090564 containerd[1464]: 2025-02-13 19:50:10.072 [INFO][4712] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2" host="localhost" Feb 13 19:50:10.090564 containerd[1464]: 2025-02-13 19:50:10.072 [INFO][4712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
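The pod_startup_latency_tracker entries a little earlier report podStartSLOduration=31.665510854s for coredns-7db6d8ff4d-6wc8l, with firstStartedPulling/lastFinishedPulling left at the zero time because no image pull was needed. Under that assumption the SLO duration reduces to observedRunningTime minus podCreationTimestamp; a small sketch of that arithmetic, using the timestamps copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken from the pod_startup_latency_tracker line above.
	created := time.Date(2025, time.February, 13, 19, 49, 37, 0, time.UTC)
	observedRunning := time.Date(2025, time.February, 13, 19, 50, 8, 665510854, time.UTC)
	pulling := time.Duration(0) // the pull window is zero-valued in the log

	// Simplified reading of the metric: time from pod creation to observed
	// running, excluding any image-pull window.
	fmt.Println(observedRunning.Sub(created) - pulling) // 31.665510854s
}
```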
Feb 13 19:50:10.090564 containerd[1464]: 2025-02-13 19:50:10.073 [INFO][4712] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2" HandleID="k8s-pod-network.091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2" Workload="localhost-k8s-calico--apiserver--57797d99cf--j56gq-eth0" Feb 13 19:50:10.091185 containerd[1464]: 2025-02-13 19:50:10.075 [INFO][4699] cni-plugin/k8s.go 386: Populated endpoint ContainerID="091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2" Namespace="calico-apiserver" Pod="calico-apiserver-57797d99cf-j56gq" WorkloadEndpoint="localhost-k8s-calico--apiserver--57797d99cf--j56gq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57797d99cf--j56gq-eth0", GenerateName:"calico-apiserver-57797d99cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"607000e1-6cc7-4c34-945a-f49ad59d4c78", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57797d99cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57797d99cf-j56gq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1aff0a6ec82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:10.091185 containerd[1464]: 2025-02-13 19:50:10.075 [INFO][4699] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2" Namespace="calico-apiserver" Pod="calico-apiserver-57797d99cf-j56gq" WorkloadEndpoint="localhost-k8s-calico--apiserver--57797d99cf--j56gq-eth0" Feb 13 19:50:10.091185 containerd[1464]: 2025-02-13 19:50:10.075 [INFO][4699] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1aff0a6ec82 ContainerID="091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2" Namespace="calico-apiserver" Pod="calico-apiserver-57797d99cf-j56gq" WorkloadEndpoint="localhost-k8s-calico--apiserver--57797d99cf--j56gq-eth0" Feb 13 19:50:10.091185 containerd[1464]: 2025-02-13 19:50:10.078 [INFO][4699] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2" Namespace="calico-apiserver" Pod="calico-apiserver-57797d99cf-j56gq" WorkloadEndpoint="localhost-k8s-calico--apiserver--57797d99cf--j56gq-eth0" Feb 13 19:50:10.091185 containerd[1464]: 2025-02-13 19:50:10.079 [INFO][4699] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2" Namespace="calico-apiserver" Pod="calico-apiserver-57797d99cf-j56gq" WorkloadEndpoint="localhost-k8s-calico--apiserver--57797d99cf--j56gq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57797d99cf--j56gq-eth0", GenerateName:"calico-apiserver-57797d99cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"607000e1-6cc7-4c34-945a-f49ad59d4c78", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57797d99cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2", Pod:"calico-apiserver-57797d99cf-j56gq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1aff0a6ec82", MAC:"76:e7:5b:43:71:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:10.091185 containerd[1464]: 2025-02-13 19:50:10.087 [INFO][4699] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2" Namespace="calico-apiserver" Pod="calico-apiserver-57797d99cf-j56gq" WorkloadEndpoint="localhost-k8s-calico--apiserver--57797d99cf--j56gq-eth0" Feb 13 19:50:10.112960 containerd[1464]: time="2025-02-13T19:50:10.112820999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:10.112960 containerd[1464]: time="2025-02-13T19:50:10.112892533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:10.113133 containerd[1464]: time="2025-02-13T19:50:10.112907922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:10.113821 containerd[1464]: time="2025-02-13T19:50:10.113744071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:10.140904 systemd[1]: Started cri-containerd-091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2.scope - libcontainer container 091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2. 
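Each sandbox started above appears twice in the log: containerd's runc v2 shim loads its ttrpc plugins, and systemd then reports a transient unit named cri-containerd-&lt;container-id&gt;.scope (which suggests, though the log does not state it, that the systemd cgroup driver is in use on this host). Recovering the container ID from such a unit name is plain string trimming; a hypothetical helper:

```go
package main

import (
	"fmt"
	"strings"
)

// containerIDFromScope extracts the 64-character container ID from a
// transient unit name of the form "cri-containerd-<id>.scope".
// Illustrative only; naming conventions can differ between setups.
func containerIDFromScope(unit string) (string, bool) {
	if !strings.HasPrefix(unit, "cri-containerd-") || !strings.HasSuffix(unit, ".scope") {
		return "", false
	}
	id := strings.TrimSuffix(strings.TrimPrefix(unit, "cri-containerd-"), ".scope")
	if len(id) != 64 {
		return "", false
	}
	return id, true
}

func main() {
	id, ok := containerIDFromScope("cri-containerd-091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2.scope")
	fmt.Println(ok, id)
}
```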
Feb 13 19:50:10.153829 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:50:10.189171 containerd[1464]: time="2025-02-13T19:50:10.189130283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57797d99cf-j56gq,Uid:607000e1-6cc7-4c34-945a-f49ad59d4c78,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2\"" Feb 13 19:50:10.641651 containerd[1464]: time="2025-02-13T19:50:10.641594250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:10.642361 containerd[1464]: time="2025-02-13T19:50:10.642297480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 19:50:10.643789 containerd[1464]: time="2025-02-13T19:50:10.643758002Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:10.660692 containerd[1464]: time="2025-02-13T19:50:10.660670078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:10.661188 containerd[1464]: time="2025-02-13T19:50:10.661163925Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.491588543s" Feb 13 19:50:10.661237 containerd[1464]: time="2025-02-13T19:50:10.661191427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 19:50:10.662377 containerd[1464]: time="2025-02-13T19:50:10.662081438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 19:50:10.662415 kubelet[2589]: E0213 19:50:10.662264 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:50:10.663115 containerd[1464]: time="2025-02-13T19:50:10.663091554Z" level=info msg="CreateContainer within sandbox \"0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:50:10.695982 containerd[1464]: time="2025-02-13T19:50:10.695933413Z" level=info msg="CreateContainer within sandbox \"0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f0a1b9ed82d46d72cb561487d08d330cc1c8eba83e961730e9bd4fabc49af1db\"" Feb 13 19:50:10.696723 containerd[1464]: time="2025-02-13T19:50:10.696680766Z" level=info msg="StartContainer for \"f0a1b9ed82d46d72cb561487d08d330cc1c8eba83e961730e9bd4fabc49af1db\"" Feb 13 19:50:10.725852 systemd[1]: Started cri-containerd-f0a1b9ed82d46d72cb561487d08d330cc1c8eba83e961730e9bd4fabc49af1db.scope - libcontainer container f0a1b9ed82d46d72cb561487d08d330cc1c8eba83e961730e9bd4fabc49af1db. 
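The repeated kubelet dns.go:153 errors above ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") indicate the node's resolv.conf lists more nameservers than the kubelet will pass through to pods; Kubernetes caps pod DNS configuration at three nameservers, so the extra entries are dropped. A minimal sketch of that clamping behaviour (illustrative, not the kubelet's actual dns.go code; the fourth entry below is a guess, since the log only shows the three that were applied):

```go
package main

import "fmt"

// applyNameserverLimit keeps only the first `limit` nameservers, mirroring
// the truncation reported by the kubelet warning above (limit assumed to be 3).
func applyNameserverLimit(servers []string, limit int) []string {
	if len(servers) <= limit {
		return servers
	}
	return servers[:limit]
}

func main() {
	nodeResolvConf := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"} // last entry hypothetical
	fmt.Println(applyNameserverLimit(nodeResolvConf, 3))                   // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```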
Feb 13 19:50:10.753847 containerd[1464]: time="2025-02-13T19:50:10.753807652Z" level=info msg="StartContainer for \"f0a1b9ed82d46d72cb561487d08d330cc1c8eba83e961730e9bd4fabc49af1db\" returns successfully" Feb 13 19:50:11.340527 containerd[1464]: time="2025-02-13T19:50:11.340482856Z" level=info msg="StopPodSandbox for \"a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c\"" Feb 13 19:50:11.409546 containerd[1464]: 2025-02-13 19:50:11.379 [INFO][4831] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" Feb 13 19:50:11.409546 containerd[1464]: 2025-02-13 19:50:11.380 [INFO][4831] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" iface="eth0" netns="/var/run/netns/cni-ae9c9a78-97cb-1f02-90c6-b1922786638e" Feb 13 19:50:11.409546 containerd[1464]: 2025-02-13 19:50:11.380 [INFO][4831] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" iface="eth0" netns="/var/run/netns/cni-ae9c9a78-97cb-1f02-90c6-b1922786638e" Feb 13 19:50:11.409546 containerd[1464]: 2025-02-13 19:50:11.380 [INFO][4831] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" iface="eth0" netns="/var/run/netns/cni-ae9c9a78-97cb-1f02-90c6-b1922786638e" Feb 13 19:50:11.409546 containerd[1464]: 2025-02-13 19:50:11.380 [INFO][4831] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" Feb 13 19:50:11.409546 containerd[1464]: 2025-02-13 19:50:11.381 [INFO][4831] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" Feb 13 19:50:11.409546 containerd[1464]: 2025-02-13 19:50:11.399 [INFO][4839] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" HandleID="k8s-pod-network.a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" Workload="localhost-k8s-calico--kube--controllers--7899dc9d6d--55s42-eth0" Feb 13 19:50:11.409546 containerd[1464]: 2025-02-13 19:50:11.399 [INFO][4839] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:11.409546 containerd[1464]: 2025-02-13 19:50:11.399 [INFO][4839] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:50:11.409546 containerd[1464]: 2025-02-13 19:50:11.403 [WARNING][4839] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" HandleID="k8s-pod-network.a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" Workload="localhost-k8s-calico--kube--controllers--7899dc9d6d--55s42-eth0" Feb 13 19:50:11.409546 containerd[1464]: 2025-02-13 19:50:11.404 [INFO][4839] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" HandleID="k8s-pod-network.a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" Workload="localhost-k8s-calico--kube--controllers--7899dc9d6d--55s42-eth0" Feb 13 19:50:11.409546 containerd[1464]: 2025-02-13 19:50:11.405 [INFO][4839] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:50:11.409546 containerd[1464]: 2025-02-13 19:50:11.407 [INFO][4831] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" Feb 13 19:50:11.409962 containerd[1464]: time="2025-02-13T19:50:11.409752049Z" level=info msg="TearDown network for sandbox \"a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c\" successfully" Feb 13 19:50:11.409962 containerd[1464]: time="2025-02-13T19:50:11.409777817Z" level=info msg="StopPodSandbox for \"a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c\" returns successfully" Feb 13 19:50:11.410482 containerd[1464]: time="2025-02-13T19:50:11.410447263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7899dc9d6d-55s42,Uid:63834bf5-a120-4a06-bb8c-91897696367c,Namespace:calico-system,Attempt:1,}" Feb 13 19:50:11.413134 systemd[1]: run-netns-cni\x2dae9c9a78\x2d97cb\x2d1f02\x2d90c6\x2db1922786638e.mount: Deactivated successfully. Feb 13 19:50:11.517381 systemd-networkd[1399]: cali79d7835298f: Link UP Feb 13 19:50:11.517620 systemd-networkd[1399]: cali79d7835298f: Gained carrier Feb 13 19:50:11.525891 systemd-networkd[1399]: cali1aff0a6ec82: Gained IPv6LL Feb 13 19:50:11.528795 containerd[1464]: 2025-02-13 19:50:11.457 [INFO][4846] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7899dc9d6d--55s42-eth0 calico-kube-controllers-7899dc9d6d- calico-system 63834bf5-a120-4a06-bb8c-91897696367c 1003 0 2025-02-13 19:49:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7899dc9d6d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7899dc9d6d-55s42 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali79d7835298f [] []}} ContainerID="65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd" Namespace="calico-system" Pod="calico-kube-controllers-7899dc9d6d-55s42" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7899dc9d6d--55s42-" Feb 13 19:50:11.528795 containerd[1464]: 2025-02-13 19:50:11.457 [INFO][4846] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd" Namespace="calico-system" Pod="calico-kube-controllers-7899dc9d6d-55s42" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7899dc9d6d--55s42-eth0" Feb 13 19:50:11.528795 containerd[1464]: 2025-02-13 19:50:11.480 [INFO][4860] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd" HandleID="k8s-pod-network.65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd" Workload="localhost-k8s-calico--kube--controllers--7899dc9d6d--55s42-eth0" Feb 13 19:50:11.528795 containerd[1464]: 2025-02-13 19:50:11.489 [INFO][4860] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd" HandleID="k8s-pod-network.65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd" Workload="localhost-k8s-calico--kube--controllers--7899dc9d6d--55s42-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000502350), Attrs:map[string]string{"namespace":"calico-system", 
"node":"localhost", "pod":"calico-kube-controllers-7899dc9d6d-55s42", "timestamp":"2025-02-13 19:50:11.480809837 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:50:11.528795 containerd[1464]: 2025-02-13 19:50:11.489 [INFO][4860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:11.528795 containerd[1464]: 2025-02-13 19:50:11.489 [INFO][4860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:50:11.528795 containerd[1464]: 2025-02-13 19:50:11.489 [INFO][4860] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:50:11.528795 containerd[1464]: 2025-02-13 19:50:11.491 [INFO][4860] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd" host="localhost" Feb 13 19:50:11.528795 containerd[1464]: 2025-02-13 19:50:11.494 [INFO][4860] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:50:11.528795 containerd[1464]: 2025-02-13 19:50:11.497 [INFO][4860] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:50:11.528795 containerd[1464]: 2025-02-13 19:50:11.499 [INFO][4860] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:50:11.528795 containerd[1464]: 2025-02-13 19:50:11.501 [INFO][4860] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:50:11.528795 containerd[1464]: 2025-02-13 19:50:11.501 [INFO][4860] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd" host="localhost" Feb 13 19:50:11.528795 containerd[1464]: 2025-02-13 19:50:11.502 [INFO][4860] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd Feb 13 19:50:11.528795 containerd[1464]: 2025-02-13 19:50:11.506 [INFO][4860] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd" host="localhost" Feb 13 19:50:11.528795 containerd[1464]: 2025-02-13 19:50:11.511 [INFO][4860] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd" host="localhost" Feb 13 19:50:11.528795 containerd[1464]: 2025-02-13 19:50:11.511 [INFO][4860] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd" host="localhost" Feb 13 19:50:11.528795 containerd[1464]: 2025-02-13 19:50:11.511 [INFO][4860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:50:11.528795 containerd[1464]: 2025-02-13 19:50:11.511 [INFO][4860] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd" HandleID="k8s-pod-network.65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd" Workload="localhost-k8s-calico--kube--controllers--7899dc9d6d--55s42-eth0" Feb 13 19:50:11.529273 containerd[1464]: 2025-02-13 19:50:11.514 [INFO][4846] cni-plugin/k8s.go 386: Populated endpoint ContainerID="65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd" Namespace="calico-system" Pod="calico-kube-controllers-7899dc9d6d-55s42" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7899dc9d6d--55s42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7899dc9d6d--55s42-eth0", GenerateName:"calico-kube-controllers-7899dc9d6d-", Namespace:"calico-system", SelfLink:"", UID:"63834bf5-a120-4a06-bb8c-91897696367c", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7899dc9d6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7899dc9d6d-55s42", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali79d7835298f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:11.529273 containerd[1464]: 2025-02-13 19:50:11.514 [INFO][4846] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd" Namespace="calico-system" Pod="calico-kube-controllers-7899dc9d6d-55s42" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7899dc9d6d--55s42-eth0" Feb 13 19:50:11.529273 containerd[1464]: 2025-02-13 19:50:11.514 [INFO][4846] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79d7835298f ContainerID="65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd" Namespace="calico-system" Pod="calico-kube-controllers-7899dc9d6d-55s42" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7899dc9d6d--55s42-eth0" Feb 13 19:50:11.529273 containerd[1464]: 2025-02-13 19:50:11.516 [INFO][4846] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd" Namespace="calico-system" Pod="calico-kube-controllers-7899dc9d6d-55s42" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7899dc9d6d--55s42-eth0" Feb 13 19:50:11.529273 containerd[1464]: 2025-02-13 19:50:11.516 [INFO][4846] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd" Namespace="calico-system" Pod="calico-kube-controllers-7899dc9d6d-55s42" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7899dc9d6d--55s42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7899dc9d6d--55s42-eth0", GenerateName:"calico-kube-controllers-7899dc9d6d-", Namespace:"calico-system", SelfLink:"", UID:"63834bf5-a120-4a06-bb8c-91897696367c", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7899dc9d6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd", Pod:"calico-kube-controllers-7899dc9d6d-55s42", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali79d7835298f", MAC:"56:7a:6d:5e:67:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:11.529273 containerd[1464]: 2025-02-13 19:50:11.523 [INFO][4846] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd" Namespace="calico-system" Pod="calico-kube-controllers-7899dc9d6d-55s42" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7899dc9d6d--55s42-eth0" Feb 13 19:50:11.551642 containerd[1464]: time="2025-02-13T19:50:11.551351417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:11.551642 containerd[1464]: time="2025-02-13T19:50:11.551434933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:11.551642 containerd[1464]: time="2025-02-13T19:50:11.551496749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:11.551642 containerd[1464]: time="2025-02-13T19:50:11.551639117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:11.585853 systemd[1]: Started cri-containerd-65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd.scope - libcontainer container 65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd. 
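The endpoint above is given MAC 56:7a:6d:5e:67:9a on host-side veth cali79d7835298f, and systemd-networkd later reports the interface "Gained IPv6LL". One common way such a link-local address is formed is the EUI-64 derivation from the MAC, sketched below; note this is only an assumption for illustration, since networkd may instead use stable-privacy interface identifiers, in which case the real fe80:: address will not match this calculation.

```go
package main

import (
	"fmt"
	"net"
)

// linkLocalFromMAC derives the EUI-64 IPv6 link-local address for a MAC:
// fe80:: prefix, MAC split around ff:fe, universal/local bit flipped.
func linkLocalFromMAC(mac net.HardwareAddr) net.IP {
	ip := make(net.IP, net.IPv6len)
	ip[0], ip[1] = 0xfe, 0x80
	ip[8] = mac[0] ^ 0x02 // flip the universal/local bit
	ip[9], ip[10] = mac[1], mac[2]
	ip[11], ip[12] = 0xff, 0xfe
	ip[13], ip[14], ip[15] = mac[3], mac[4], mac[5]
	return ip
}

func main() {
	mac, _ := net.ParseMAC("56:7a:6d:5e:67:9a") // MAC assigned to the endpoint above
	fmt.Println(linkLocalFromMAC(mac))          // fe80::547a:6dff:fe5e:679a
}
```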
Feb 13 19:50:11.597660 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:50:11.621390 containerd[1464]: time="2025-02-13T19:50:11.621333867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7899dc9d6d-55s42,Uid:63834bf5-a120-4a06-bb8c-91897696367c,Namespace:calico-system,Attempt:1,} returns sandbox id \"65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd\"" Feb 13 19:50:13.316968 systemd-networkd[1399]: cali79d7835298f: Gained IPv6LL Feb 13 19:50:13.611120 systemd[1]: Started sshd@13-10.0.0.12:22-10.0.0.1:41062.service - OpenSSH per-connection server daemon (10.0.0.1:41062). Feb 13 19:50:13.659776 sshd[4927]: Accepted publickey for core from 10.0.0.1 port 41062 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:50:13.661660 sshd[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:13.665809 systemd-logind[1450]: New session 14 of user core. Feb 13 19:50:13.672058 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:50:13.805462 sshd[4927]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:13.812089 systemd[1]: sshd@13-10.0.0.12:22-10.0.0.1:41062.service: Deactivated successfully. Feb 13 19:50:13.817434 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:50:13.818491 systemd-logind[1450]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:50:13.820141 systemd-logind[1450]: Removed session 14. Feb 13 19:50:14.120874 containerd[1464]: time="2025-02-13T19:50:14.120810644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:14.121648 containerd[1464]: time="2025-02-13T19:50:14.121564659Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 19:50:14.122809 containerd[1464]: time="2025-02-13T19:50:14.122764551Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:14.125088 containerd[1464]: time="2025-02-13T19:50:14.125044058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:14.125707 containerd[1464]: time="2025-02-13T19:50:14.125676495Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.463567386s" Feb 13 19:50:14.125907 containerd[1464]: time="2025-02-13T19:50:14.125708606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 19:50:14.127224 containerd[1464]: time="2025-02-13T19:50:14.127198091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 19:50:14.128726 containerd[1464]: time="2025-02-13T19:50:14.128675664Z" level=info msg="CreateContainer within sandbox 
\"74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:50:14.146495 containerd[1464]: time="2025-02-13T19:50:14.146362037Z" level=info msg="CreateContainer within sandbox \"74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d91ae92a38617403e91aac8258b84a219180a9a04fce0f251648ccc29f16624a\"" Feb 13 19:50:14.146932 containerd[1464]: time="2025-02-13T19:50:14.146905107Z" level=info msg="StartContainer for \"d91ae92a38617403e91aac8258b84a219180a9a04fce0f251648ccc29f16624a\"" Feb 13 19:50:14.173572 systemd[1]: run-containerd-runc-k8s.io-d91ae92a38617403e91aac8258b84a219180a9a04fce0f251648ccc29f16624a-runc.8fUdJ3.mount: Deactivated successfully. Feb 13 19:50:14.183915 systemd[1]: Started cri-containerd-d91ae92a38617403e91aac8258b84a219180a9a04fce0f251648ccc29f16624a.scope - libcontainer container d91ae92a38617403e91aac8258b84a219180a9a04fce0f251648ccc29f16624a. Feb 13 19:50:14.225096 containerd[1464]: time="2025-02-13T19:50:14.225054944Z" level=info msg="StartContainer for \"d91ae92a38617403e91aac8258b84a219180a9a04fce0f251648ccc29f16624a\" returns successfully" Feb 13 19:50:14.722050 containerd[1464]: time="2025-02-13T19:50:14.721995998Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:14.723021 containerd[1464]: time="2025-02-13T19:50:14.722944948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 19:50:14.725100 containerd[1464]: time="2025-02-13T19:50:14.725067372Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 597.830749ms" Feb 13 19:50:14.725168 containerd[1464]: time="2025-02-13T19:50:14.725106645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 19:50:14.727615 containerd[1464]: time="2025-02-13T19:50:14.727576461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:50:14.728476 containerd[1464]: time="2025-02-13T19:50:14.728445452Z" level=info msg="CreateContainer within sandbox \"091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:50:14.744440 containerd[1464]: time="2025-02-13T19:50:14.744391769Z" level=info msg="CreateContainer within sandbox \"091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"563b6841746181aa7a3b7656693b05db57af54780da187ba4987fcc84255a6f6\"" Feb 13 19:50:14.745101 containerd[1464]: time="2025-02-13T19:50:14.745081343Z" level=info msg="StartContainer for \"563b6841746181aa7a3b7656693b05db57af54780da187ba4987fcc84255a6f6\"" Feb 13 19:50:14.782008 systemd[1]: Started cri-containerd-563b6841746181aa7a3b7656693b05db57af54780da187ba4987fcc84255a6f6.scope - libcontainer container 563b6841746181aa7a3b7656693b05db57af54780da187ba4987fcc84255a6f6. 
Feb 13 19:50:14.826224 containerd[1464]: time="2025-02-13T19:50:14.826182169Z" level=info msg="StartContainer for \"563b6841746181aa7a3b7656693b05db57af54780da187ba4987fcc84255a6f6\" returns successfully" Feb 13 19:50:15.674951 kubelet[2589]: I0213 19:50:15.674867 2589 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:50:15.787295 kubelet[2589]: I0213 19:50:15.786582 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57797d99cf-hqvzt" podStartSLOduration=27.343578158 podStartE2EDuration="32.786562522s" podCreationTimestamp="2025-02-13 19:49:43 +0000 UTC" firstStartedPulling="2025-02-13 19:50:08.683882596 +0000 UTC m=+46.422892645" lastFinishedPulling="2025-02-13 19:50:14.12686696 +0000 UTC m=+51.865877009" observedRunningTime="2025-02-13 19:50:14.70768005 +0000 UTC m=+52.446690119" watchObservedRunningTime="2025-02-13 19:50:15.786562522 +0000 UTC m=+53.525572571" Feb 13 19:50:15.978649 kubelet[2589]: I0213 19:50:15.978450 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57797d99cf-j56gq" podStartSLOduration=28.44312394 podStartE2EDuration="32.978430454s" podCreationTimestamp="2025-02-13 19:49:43 +0000 UTC" firstStartedPulling="2025-02-13 19:50:10.190559637 +0000 UTC m=+47.929569686" lastFinishedPulling="2025-02-13 19:50:14.725866151 +0000 UTC m=+52.464876200" observedRunningTime="2025-02-13 19:50:15.78673199 +0000 UTC m=+53.525742039" watchObservedRunningTime="2025-02-13 19:50:15.978430454 +0000 UTC m=+53.717440503" Feb 13 19:50:16.767999 containerd[1464]: time="2025-02-13T19:50:16.767929403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:16.768729 containerd[1464]: time="2025-02-13T19:50:16.768673470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 19:50:16.769991 containerd[1464]: time="2025-02-13T19:50:16.769946800Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:16.772159 containerd[1464]: time="2025-02-13T19:50:16.772133333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:16.772768 containerd[1464]: time="2025-02-13T19:50:16.772726496Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.045107255s" Feb 13 19:50:16.772842 containerd[1464]: time="2025-02-13T19:50:16.772771610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 19:50:16.774243 containerd[1464]: time="2025-02-13T19:50:16.774069797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 19:50:16.775321 containerd[1464]: 
time="2025-02-13T19:50:16.775282462Z" level=info msg="CreateContainer within sandbox \"0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:50:16.790884 containerd[1464]: time="2025-02-13T19:50:16.790824519Z" level=info msg="CreateContainer within sandbox \"0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"51633f2a37678ff90ae5efd5011c2dd29f152c69223287c0e4a64bd7d1e91aa5\"" Feb 13 19:50:16.791321 containerd[1464]: time="2025-02-13T19:50:16.791275294Z" level=info msg="StartContainer for \"51633f2a37678ff90ae5efd5011c2dd29f152c69223287c0e4a64bd7d1e91aa5\"" Feb 13 19:50:16.826847 systemd[1]: Started cri-containerd-51633f2a37678ff90ae5efd5011c2dd29f152c69223287c0e4a64bd7d1e91aa5.scope - libcontainer container 51633f2a37678ff90ae5efd5011c2dd29f152c69223287c0e4a64bd7d1e91aa5. Feb 13 19:50:16.855677 containerd[1464]: time="2025-02-13T19:50:16.855621074Z" level=info msg="StartContainer for \"51633f2a37678ff90ae5efd5011c2dd29f152c69223287c0e4a64bd7d1e91aa5\" returns successfully" Feb 13 19:50:17.407122 kubelet[2589]: I0213 19:50:17.407094 2589 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:50:17.407122 kubelet[2589]: I0213 19:50:17.407124 2589 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:50:17.690469 kubelet[2589]: I0213 19:50:17.690082 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gx2wj" podStartSLOduration=26.085290481 podStartE2EDuration="34.69006414s" podCreationTimestamp="2025-02-13 19:49:43 +0000 UTC" firstStartedPulling="2025-02-13 19:50:08.169118845 +0000 UTC m=+45.908128894" lastFinishedPulling="2025-02-13 19:50:16.773892504 +0000 UTC m=+54.512902553" observedRunningTime="2025-02-13 19:50:17.689609558 +0000 UTC m=+55.428619627" watchObservedRunningTime="2025-02-13 19:50:17.69006414 +0000 UTC m=+55.429074199" Feb 13 19:50:18.817785 systemd[1]: Started sshd@14-10.0.0.12:22-10.0.0.1:60384.service - OpenSSH per-connection server daemon (10.0.0.1:60384). Feb 13 19:50:18.885536 sshd[5084]: Accepted publickey for core from 10.0.0.1 port 60384 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:50:18.887217 sshd[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:18.891406 systemd-logind[1450]: New session 15 of user core. Feb 13 19:50:18.899901 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:50:19.035389 sshd[5084]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:19.040330 systemd[1]: sshd@14-10.0.0.12:22-10.0.0.1:60384.service: Deactivated successfully. Feb 13 19:50:19.042411 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:50:19.043400 systemd-logind[1450]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:50:19.044315 systemd-logind[1450]: Removed session 15. 
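The SSH sessions above (13, 14, 15) each log "Accepted publickey for core ... ssh2: RSA SHA256:w6wK...", where the SHA256:... string is the base64-encoded digest of the presented public key. The sketch below only demonstrates how that fingerprint format is produced with golang.org/x/crypto/ssh; it generates a throwaway ed25519 key because the real RSA key from this host is of course not available here.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Throwaway key, used only to show the fingerprint format.
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	sshPub, err := ssh.NewPublicKey(pub)
	if err != nil {
		log.Fatal(err)
	}
	// Same "SHA256:<base64 digest>" form as in the sshd "Accepted publickey" lines.
	fmt.Println(ssh.FingerprintSHA256(sshPub))
}
```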
Feb 13 19:50:20.345734 containerd[1464]: time="2025-02-13T19:50:20.345656629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:20.347390 containerd[1464]: time="2025-02-13T19:50:20.347352271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 19:50:20.348812 containerd[1464]: time="2025-02-13T19:50:20.348774078Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:20.352234 containerd[1464]: time="2025-02-13T19:50:20.352187964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:20.352738 containerd[1464]: time="2025-02-13T19:50:20.352688082Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.57858369s" Feb 13 19:50:20.352771 containerd[1464]: time="2025-02-13T19:50:20.352740100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 19:50:20.363567 containerd[1464]: time="2025-02-13T19:50:20.363485391Z" level=info msg="CreateContainer within sandbox \"65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 19:50:20.378558 containerd[1464]: time="2025-02-13T19:50:20.378499973Z" level=info msg="CreateContainer within sandbox \"65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"816cad23455d7a85731518be3146e4c077fd6f961d2911d54274cf6a96bace4b\"" Feb 13 19:50:20.379011 containerd[1464]: time="2025-02-13T19:50:20.378978281Z" level=info msg="StartContainer for \"816cad23455d7a85731518be3146e4c077fd6f961d2911d54274cf6a96bace4b\"" Feb 13 19:50:20.418889 systemd[1]: Started cri-containerd-816cad23455d7a85731518be3146e4c077fd6f961d2911d54274cf6a96bace4b.scope - libcontainer container 816cad23455d7a85731518be3146e4c077fd6f961d2911d54274cf6a96bace4b. 
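[Editorial sketch, not part of the original log.] The sequence above (Pulled image ghcr.io/flatcar/calico/kube-controllers:v3.29.1 in 3.57858369s, then CreateContainer within sandbox, StartContainer, and a cri-containerd systemd scope) is driven by kubelet through containerd's CRI service, which is not shown here. As a rough standalone illustration only, the following Go program performs an equivalent pull/create/start against the same containerd instance using the public containerd Go client; the socket path and the "k8s.io" namespace are the usual defaults for a CRI-managed node and are assumptions, as is the demo container ID.

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the node's containerd socket (assumed default path).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images and containers live in the "k8s.io" namespace (assumed).
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull the same image the log reports pulling in 3.57858369s.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/kube-controllers:v3.29.1",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create a container from it, roughly analogous to the "CreateContainer" event.
	container, err := client.NewContainer(ctx, "kube-controllers-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("kube-controllers-demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// Start a task for it, roughly analogous to the "StartContainer" event.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Printf("started container %s with task pid %d", container.ID(), task.Pid())
}
```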
Feb 13 19:50:20.486674 containerd[1464]: time="2025-02-13T19:50:20.486612475Z" level=info msg="StartContainer for \"816cad23455d7a85731518be3146e4c077fd6f961d2911d54274cf6a96bace4b\" returns successfully" Feb 13 19:50:20.994643 kubelet[2589]: I0213 19:50:20.994570 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7899dc9d6d-55s42" podStartSLOduration=29.26359322 podStartE2EDuration="37.994544018s" podCreationTimestamp="2025-02-13 19:49:43 +0000 UTC" firstStartedPulling="2025-02-13 19:50:11.622301223 +0000 UTC m=+49.361311262" lastFinishedPulling="2025-02-13 19:50:20.353252011 +0000 UTC m=+58.092262060" observedRunningTime="2025-02-13 19:50:20.76167771 +0000 UTC m=+58.500687749" watchObservedRunningTime="2025-02-13 19:50:20.994544018 +0000 UTC m=+58.733554067" Feb 13 19:50:22.324676 containerd[1464]: time="2025-02-13T19:50:22.324626999Z" level=info msg="StopPodSandbox for \"2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9\"" Feb 13 19:50:22.393154 containerd[1464]: 2025-02-13 19:50:22.360 [WARNING][5176] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--27zq4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"92ce11ad-0c53-4174-8895-91b95bbb2b8b", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d", Pod:"coredns-7db6d8ff4d-27zq4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18cfb30d2c2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:22.393154 containerd[1464]: 2025-02-13 19:50:22.360 [INFO][5176] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" Feb 13 19:50:22.393154 containerd[1464]: 2025-02-13 19:50:22.360 [INFO][5176] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" iface="eth0" netns="" Feb 13 19:50:22.393154 containerd[1464]: 2025-02-13 19:50:22.360 [INFO][5176] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" Feb 13 19:50:22.393154 containerd[1464]: 2025-02-13 19:50:22.360 [INFO][5176] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" Feb 13 19:50:22.393154 containerd[1464]: 2025-02-13 19:50:22.381 [INFO][5185] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" HandleID="k8s-pod-network.2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" Workload="localhost-k8s-coredns--7db6d8ff4d--27zq4-eth0" Feb 13 19:50:22.393154 containerd[1464]: 2025-02-13 19:50:22.381 [INFO][5185] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:22.393154 containerd[1464]: 2025-02-13 19:50:22.382 [INFO][5185] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:50:22.393154 containerd[1464]: 2025-02-13 19:50:22.386 [WARNING][5185] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" HandleID="k8s-pod-network.2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" Workload="localhost-k8s-coredns--7db6d8ff4d--27zq4-eth0" Feb 13 19:50:22.393154 containerd[1464]: 2025-02-13 19:50:22.386 [INFO][5185] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" HandleID="k8s-pod-network.2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" Workload="localhost-k8s-coredns--7db6d8ff4d--27zq4-eth0" Feb 13 19:50:22.393154 containerd[1464]: 2025-02-13 19:50:22.388 [INFO][5185] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:50:22.393154 containerd[1464]: 2025-02-13 19:50:22.390 [INFO][5176] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" Feb 13 19:50:22.394016 containerd[1464]: time="2025-02-13T19:50:22.393198051Z" level=info msg="TearDown network for sandbox \"2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9\" successfully" Feb 13 19:50:22.394016 containerd[1464]: time="2025-02-13T19:50:22.393228989Z" level=info msg="StopPodSandbox for \"2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9\" returns successfully" Feb 13 19:50:22.394016 containerd[1464]: time="2025-02-13T19:50:22.393690836Z" level=info msg="RemovePodSandbox for \"2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9\"" Feb 13 19:50:22.395811 containerd[1464]: time="2025-02-13T19:50:22.395783382Z" level=info msg="Forcibly stopping sandbox \"2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9\"" Feb 13 19:50:22.457462 containerd[1464]: 2025-02-13 19:50:22.427 [WARNING][5209] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--27zq4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"92ce11ad-0c53-4174-8895-91b95bbb2b8b", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"925728ce16ed8ae3889b1a0bc1c3e68b334e0a0b0ce119f1cdd44f8190a01a4d", Pod:"coredns-7db6d8ff4d-27zq4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18cfb30d2c2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:22.457462 containerd[1464]: 2025-02-13 19:50:22.427 [INFO][5209] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" Feb 13 19:50:22.457462 containerd[1464]: 2025-02-13 19:50:22.427 [INFO][5209] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" iface="eth0" netns="" Feb 13 19:50:22.457462 containerd[1464]: 2025-02-13 19:50:22.427 [INFO][5209] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" Feb 13 19:50:22.457462 containerd[1464]: 2025-02-13 19:50:22.427 [INFO][5209] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" Feb 13 19:50:22.457462 containerd[1464]: 2025-02-13 19:50:22.447 [INFO][5216] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" HandleID="k8s-pod-network.2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" Workload="localhost-k8s-coredns--7db6d8ff4d--27zq4-eth0" Feb 13 19:50:22.457462 containerd[1464]: 2025-02-13 19:50:22.447 [INFO][5216] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:22.457462 containerd[1464]: 2025-02-13 19:50:22.447 [INFO][5216] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:50:22.457462 containerd[1464]: 2025-02-13 19:50:22.452 [WARNING][5216] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" HandleID="k8s-pod-network.2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" Workload="localhost-k8s-coredns--7db6d8ff4d--27zq4-eth0" Feb 13 19:50:22.457462 containerd[1464]: 2025-02-13 19:50:22.452 [INFO][5216] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" HandleID="k8s-pod-network.2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" Workload="localhost-k8s-coredns--7db6d8ff4d--27zq4-eth0" Feb 13 19:50:22.457462 containerd[1464]: 2025-02-13 19:50:22.453 [INFO][5216] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:50:22.457462 containerd[1464]: 2025-02-13 19:50:22.455 [INFO][5209] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9" Feb 13 19:50:22.457898 containerd[1464]: time="2025-02-13T19:50:22.457505595Z" level=info msg="TearDown network for sandbox \"2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9\" successfully" Feb 13 19:50:22.467805 containerd[1464]: time="2025-02-13T19:50:22.467765875Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:50:22.467873 containerd[1464]: time="2025-02-13T19:50:22.467854421Z" level=info msg="RemovePodSandbox \"2d9dab500388d7666e26d4c081836733e130348d8feed6720b2baf05cf0021d9\" returns successfully" Feb 13 19:50:22.468495 containerd[1464]: time="2025-02-13T19:50:22.468447704Z" level=info msg="StopPodSandbox for \"edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c\"" Feb 13 19:50:22.539832 containerd[1464]: 2025-02-13 19:50:22.507 [WARNING][5238] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57797d99cf--hqvzt-eth0", GenerateName:"calico-apiserver-57797d99cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8cf9ce0-cc8d-4deb-a6e9-a97d42930d24", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57797d99cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666", Pod:"calico-apiserver-57797d99cf-hqvzt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e0ae89a9d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:22.539832 containerd[1464]: 2025-02-13 19:50:22.507 [INFO][5238] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" Feb 13 19:50:22.539832 containerd[1464]: 2025-02-13 19:50:22.507 [INFO][5238] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" iface="eth0" netns="" Feb 13 19:50:22.539832 containerd[1464]: 2025-02-13 19:50:22.507 [INFO][5238] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" Feb 13 19:50:22.539832 containerd[1464]: 2025-02-13 19:50:22.507 [INFO][5238] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" Feb 13 19:50:22.539832 containerd[1464]: 2025-02-13 19:50:22.527 [INFO][5246] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" HandleID="k8s-pod-network.edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" Workload="localhost-k8s-calico--apiserver--57797d99cf--hqvzt-eth0" Feb 13 19:50:22.539832 containerd[1464]: 2025-02-13 19:50:22.527 [INFO][5246] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:22.539832 containerd[1464]: 2025-02-13 19:50:22.527 [INFO][5246] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:50:22.539832 containerd[1464]: 2025-02-13 19:50:22.533 [WARNING][5246] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" HandleID="k8s-pod-network.edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" Workload="localhost-k8s-calico--apiserver--57797d99cf--hqvzt-eth0" Feb 13 19:50:22.539832 containerd[1464]: 2025-02-13 19:50:22.533 [INFO][5246] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" HandleID="k8s-pod-network.edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" Workload="localhost-k8s-calico--apiserver--57797d99cf--hqvzt-eth0" Feb 13 19:50:22.539832 containerd[1464]: 2025-02-13 19:50:22.535 [INFO][5246] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:50:22.539832 containerd[1464]: 2025-02-13 19:50:22.537 [INFO][5238] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" Feb 13 19:50:22.540235 containerd[1464]: time="2025-02-13T19:50:22.539857853Z" level=info msg="TearDown network for sandbox \"edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c\" successfully" Feb 13 19:50:22.540235 containerd[1464]: time="2025-02-13T19:50:22.539884883Z" level=info msg="StopPodSandbox for \"edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c\" returns successfully" Feb 13 19:50:22.540451 containerd[1464]: time="2025-02-13T19:50:22.540407434Z" level=info msg="RemovePodSandbox for \"edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c\"" Feb 13 19:50:22.540483 containerd[1464]: time="2025-02-13T19:50:22.540453090Z" level=info msg="Forcibly stopping sandbox \"edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c\"" Feb 13 19:50:22.603378 containerd[1464]: 2025-02-13 19:50:22.572 [WARNING][5270] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57797d99cf--hqvzt-eth0", GenerateName:"calico-apiserver-57797d99cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8cf9ce0-cc8d-4deb-a6e9-a97d42930d24", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57797d99cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74b4efdec38f4cfdc5f7391a41978272352a2200b8ffe8a19a10fccd1dec1666", Pod:"calico-apiserver-57797d99cf-hqvzt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e0ae89a9d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:22.603378 containerd[1464]: 2025-02-13 19:50:22.573 [INFO][5270] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" Feb 13 19:50:22.603378 containerd[1464]: 2025-02-13 19:50:22.573 [INFO][5270] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" iface="eth0" netns="" Feb 13 19:50:22.603378 containerd[1464]: 2025-02-13 19:50:22.573 [INFO][5270] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" Feb 13 19:50:22.603378 containerd[1464]: 2025-02-13 19:50:22.573 [INFO][5270] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" Feb 13 19:50:22.603378 containerd[1464]: 2025-02-13 19:50:22.591 [INFO][5277] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" HandleID="k8s-pod-network.edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" Workload="localhost-k8s-calico--apiserver--57797d99cf--hqvzt-eth0" Feb 13 19:50:22.603378 containerd[1464]: 2025-02-13 19:50:22.591 [INFO][5277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:22.603378 containerd[1464]: 2025-02-13 19:50:22.591 [INFO][5277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:50:22.603378 containerd[1464]: 2025-02-13 19:50:22.597 [WARNING][5277] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" HandleID="k8s-pod-network.edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" Workload="localhost-k8s-calico--apiserver--57797d99cf--hqvzt-eth0" Feb 13 19:50:22.603378 containerd[1464]: 2025-02-13 19:50:22.597 [INFO][5277] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" HandleID="k8s-pod-network.edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" Workload="localhost-k8s-calico--apiserver--57797d99cf--hqvzt-eth0" Feb 13 19:50:22.603378 containerd[1464]: 2025-02-13 19:50:22.598 [INFO][5277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:50:22.603378 containerd[1464]: 2025-02-13 19:50:22.600 [INFO][5270] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c" Feb 13 19:50:22.603378 containerd[1464]: time="2025-02-13T19:50:22.603356558Z" level=info msg="TearDown network for sandbox \"edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c\" successfully" Feb 13 19:50:22.608224 containerd[1464]: time="2025-02-13T19:50:22.608173846Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:50:22.608273 containerd[1464]: time="2025-02-13T19:50:22.608238778Z" level=info msg="RemovePodSandbox \"edf7ef7bccd44ac2be7da32673b4ac5ea31ece943738eaede1fe20d1f43f5e2c\" returns successfully" Feb 13 19:50:22.608738 containerd[1464]: time="2025-02-13T19:50:22.608683984Z" level=info msg="StopPodSandbox for \"a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c\"" Feb 13 19:50:22.669413 containerd[1464]: 2025-02-13 19:50:22.640 [WARNING][5300] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7899dc9d6d--55s42-eth0", GenerateName:"calico-kube-controllers-7899dc9d6d-", Namespace:"calico-system", SelfLink:"", UID:"63834bf5-a120-4a06-bb8c-91897696367c", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7899dc9d6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd", Pod:"calico-kube-controllers-7899dc9d6d-55s42", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali79d7835298f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:22.669413 containerd[1464]: 2025-02-13 19:50:22.640 [INFO][5300] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" Feb 13 19:50:22.669413 containerd[1464]: 2025-02-13 19:50:22.640 [INFO][5300] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" iface="eth0" netns="" Feb 13 19:50:22.669413 containerd[1464]: 2025-02-13 19:50:22.640 [INFO][5300] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" Feb 13 19:50:22.669413 containerd[1464]: 2025-02-13 19:50:22.640 [INFO][5300] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" Feb 13 19:50:22.669413 containerd[1464]: 2025-02-13 19:50:22.659 [INFO][5307] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" HandleID="k8s-pod-network.a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" Workload="localhost-k8s-calico--kube--controllers--7899dc9d6d--55s42-eth0" Feb 13 19:50:22.669413 containerd[1464]: 2025-02-13 19:50:22.659 [INFO][5307] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:22.669413 containerd[1464]: 2025-02-13 19:50:22.659 [INFO][5307] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:50:22.669413 containerd[1464]: 2025-02-13 19:50:22.663 [WARNING][5307] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" HandleID="k8s-pod-network.a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" Workload="localhost-k8s-calico--kube--controllers--7899dc9d6d--55s42-eth0" Feb 13 19:50:22.669413 containerd[1464]: 2025-02-13 19:50:22.663 [INFO][5307] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" HandleID="k8s-pod-network.a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" Workload="localhost-k8s-calico--kube--controllers--7899dc9d6d--55s42-eth0" Feb 13 19:50:22.669413 containerd[1464]: 2025-02-13 19:50:22.665 [INFO][5307] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:50:22.669413 containerd[1464]: 2025-02-13 19:50:22.667 [INFO][5300] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" Feb 13 19:50:22.669828 containerd[1464]: time="2025-02-13T19:50:22.669443029Z" level=info msg="TearDown network for sandbox \"a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c\" successfully" Feb 13 19:50:22.669828 containerd[1464]: time="2025-02-13T19:50:22.669466563Z" level=info msg="StopPodSandbox for \"a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c\" returns successfully" Feb 13 19:50:22.670113 containerd[1464]: time="2025-02-13T19:50:22.670068563Z" level=info msg="RemovePodSandbox for \"a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c\"" Feb 13 19:50:22.670113 containerd[1464]: time="2025-02-13T19:50:22.670107065Z" level=info msg="Forcibly stopping sandbox \"a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c\"" Feb 13 19:50:22.859330 containerd[1464]: 2025-02-13 19:50:22.706 [WARNING][5330] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7899dc9d6d--55s42-eth0", GenerateName:"calico-kube-controllers-7899dc9d6d-", Namespace:"calico-system", SelfLink:"", UID:"63834bf5-a120-4a06-bb8c-91897696367c", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7899dc9d6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65d2aa2d5e12ba703fe52695e93fde0476ff8c176f54ccfe22d4ef55b3a300dd", Pod:"calico-kube-controllers-7899dc9d6d-55s42", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali79d7835298f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:22.859330 containerd[1464]: 2025-02-13 19:50:22.706 [INFO][5330] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" Feb 13 19:50:22.859330 containerd[1464]: 2025-02-13 19:50:22.706 [INFO][5330] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" iface="eth0" netns="" Feb 13 19:50:22.859330 containerd[1464]: 2025-02-13 19:50:22.706 [INFO][5330] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" Feb 13 19:50:22.859330 containerd[1464]: 2025-02-13 19:50:22.706 [INFO][5330] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" Feb 13 19:50:22.859330 containerd[1464]: 2025-02-13 19:50:22.724 [INFO][5337] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" HandleID="k8s-pod-network.a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" Workload="localhost-k8s-calico--kube--controllers--7899dc9d6d--55s42-eth0" Feb 13 19:50:22.859330 containerd[1464]: 2025-02-13 19:50:22.724 [INFO][5337] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:22.859330 containerd[1464]: 2025-02-13 19:50:22.724 [INFO][5337] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:50:22.859330 containerd[1464]: 2025-02-13 19:50:22.853 [WARNING][5337] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" HandleID="k8s-pod-network.a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" Workload="localhost-k8s-calico--kube--controllers--7899dc9d6d--55s42-eth0" Feb 13 19:50:22.859330 containerd[1464]: 2025-02-13 19:50:22.853 [INFO][5337] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" HandleID="k8s-pod-network.a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" Workload="localhost-k8s-calico--kube--controllers--7899dc9d6d--55s42-eth0" Feb 13 19:50:22.859330 containerd[1464]: 2025-02-13 19:50:22.855 [INFO][5337] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:50:22.859330 containerd[1464]: 2025-02-13 19:50:22.857 [INFO][5330] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c" Feb 13 19:50:22.859330 containerd[1464]: time="2025-02-13T19:50:22.859296690Z" level=info msg="TearDown network for sandbox \"a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c\" successfully" Feb 13 19:50:22.949360 containerd[1464]: time="2025-02-13T19:50:22.949332775Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:50:22.949447 containerd[1464]: time="2025-02-13T19:50:22.949406302Z" level=info msg="RemovePodSandbox \"a7d13edceea77c5e784000bf413e9724d1faa5236f454f4e6463c9e741e2121c\" returns successfully" Feb 13 19:50:22.949924 containerd[1464]: time="2025-02-13T19:50:22.949873058Z" level=info msg="StopPodSandbox for \"bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42\"" Feb 13 19:50:23.017330 containerd[1464]: 2025-02-13 19:50:22.981 [WARNING][5360] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gx2wj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e45413ed-22f7-42ee-a226-c017caa2ef3a", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0", Pod:"csi-node-driver-gx2wj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibf6592833b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:23.017330 containerd[1464]: 2025-02-13 19:50:22.981 [INFO][5360] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" Feb 13 19:50:23.017330 containerd[1464]: 2025-02-13 19:50:22.981 [INFO][5360] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" iface="eth0" netns="" Feb 13 19:50:23.017330 containerd[1464]: 2025-02-13 19:50:22.982 [INFO][5360] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" Feb 13 19:50:23.017330 containerd[1464]: 2025-02-13 19:50:22.982 [INFO][5360] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" Feb 13 19:50:23.017330 containerd[1464]: 2025-02-13 19:50:22.999 [INFO][5368] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" HandleID="k8s-pod-network.bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" Workload="localhost-k8s-csi--node--driver--gx2wj-eth0" Feb 13 19:50:23.017330 containerd[1464]: 2025-02-13 19:50:23.000 [INFO][5368] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:23.017330 containerd[1464]: 2025-02-13 19:50:23.000 [INFO][5368] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:50:23.017330 containerd[1464]: 2025-02-13 19:50:23.010 [WARNING][5368] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" HandleID="k8s-pod-network.bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" Workload="localhost-k8s-csi--node--driver--gx2wj-eth0" Feb 13 19:50:23.017330 containerd[1464]: 2025-02-13 19:50:23.010 [INFO][5368] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" HandleID="k8s-pod-network.bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" Workload="localhost-k8s-csi--node--driver--gx2wj-eth0" Feb 13 19:50:23.017330 containerd[1464]: 2025-02-13 19:50:23.012 [INFO][5368] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:50:23.017330 containerd[1464]: 2025-02-13 19:50:23.015 [INFO][5360] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" Feb 13 19:50:23.017884 containerd[1464]: time="2025-02-13T19:50:23.017370986Z" level=info msg="TearDown network for sandbox \"bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42\" successfully" Feb 13 19:50:23.017884 containerd[1464]: time="2025-02-13T19:50:23.017395522Z" level=info msg="StopPodSandbox for \"bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42\" returns successfully" Feb 13 19:50:23.018123 containerd[1464]: time="2025-02-13T19:50:23.018080337Z" level=info msg="RemovePodSandbox for \"bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42\"" Feb 13 19:50:23.018178 containerd[1464]: time="2025-02-13T19:50:23.018126353Z" level=info msg="Forcibly stopping sandbox \"bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42\"" Feb 13 19:50:23.078977 containerd[1464]: 2025-02-13 19:50:23.049 [WARNING][5391] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gx2wj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e45413ed-22f7-42ee-a226-c017caa2ef3a", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0944d99da3b439eb2eaccb8bc7a0d6b2006fa11392a56180a9cef4c99afcfae0", Pod:"csi-node-driver-gx2wj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibf6592833b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:23.078977 containerd[1464]: 2025-02-13 19:50:23.049 [INFO][5391] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" Feb 13 19:50:23.078977 containerd[1464]: 2025-02-13 19:50:23.049 [INFO][5391] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" iface="eth0" netns="" Feb 13 19:50:23.078977 containerd[1464]: 2025-02-13 19:50:23.049 [INFO][5391] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" Feb 13 19:50:23.078977 containerd[1464]: 2025-02-13 19:50:23.049 [INFO][5391] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" Feb 13 19:50:23.078977 containerd[1464]: 2025-02-13 19:50:23.067 [INFO][5399] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" HandleID="k8s-pod-network.bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" Workload="localhost-k8s-csi--node--driver--gx2wj-eth0" Feb 13 19:50:23.078977 containerd[1464]: 2025-02-13 19:50:23.068 [INFO][5399] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:23.078977 containerd[1464]: 2025-02-13 19:50:23.068 [INFO][5399] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:50:23.078977 containerd[1464]: 2025-02-13 19:50:23.073 [WARNING][5399] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" HandleID="k8s-pod-network.bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" Workload="localhost-k8s-csi--node--driver--gx2wj-eth0" Feb 13 19:50:23.078977 containerd[1464]: 2025-02-13 19:50:23.073 [INFO][5399] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" HandleID="k8s-pod-network.bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" Workload="localhost-k8s-csi--node--driver--gx2wj-eth0" Feb 13 19:50:23.078977 containerd[1464]: 2025-02-13 19:50:23.074 [INFO][5399] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:50:23.078977 containerd[1464]: 2025-02-13 19:50:23.076 [INFO][5391] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42" Feb 13 19:50:23.079400 containerd[1464]: time="2025-02-13T19:50:23.079035267Z" level=info msg="TearDown network for sandbox \"bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42\" successfully" Feb 13 19:50:23.173784 containerd[1464]: time="2025-02-13T19:50:23.173627495Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:50:23.173784 containerd[1464]: time="2025-02-13T19:50:23.173690834Z" level=info msg="RemovePodSandbox \"bc29b7f9ae21f90c223d6c4a84ed211d6bf9cff04227fc2bae47ca7658ca3a42\" returns successfully" Feb 13 19:50:23.174160 containerd[1464]: time="2025-02-13T19:50:23.174135167Z" level=info msg="StopPodSandbox for \"9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305\"" Feb 13 19:50:23.237477 containerd[1464]: 2025-02-13 19:50:23.206 [WARNING][5421] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57797d99cf--j56gq-eth0", GenerateName:"calico-apiserver-57797d99cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"607000e1-6cc7-4c34-945a-f49ad59d4c78", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57797d99cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2", Pod:"calico-apiserver-57797d99cf-j56gq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1aff0a6ec82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:23.237477 containerd[1464]: 2025-02-13 19:50:23.206 [INFO][5421] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" Feb 13 19:50:23.237477 containerd[1464]: 2025-02-13 19:50:23.206 [INFO][5421] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" iface="eth0" netns="" Feb 13 19:50:23.237477 containerd[1464]: 2025-02-13 19:50:23.206 [INFO][5421] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" Feb 13 19:50:23.237477 containerd[1464]: 2025-02-13 19:50:23.206 [INFO][5421] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" Feb 13 19:50:23.237477 containerd[1464]: 2025-02-13 19:50:23.226 [INFO][5428] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" HandleID="k8s-pod-network.9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" Workload="localhost-k8s-calico--apiserver--57797d99cf--j56gq-eth0" Feb 13 19:50:23.237477 containerd[1464]: 2025-02-13 19:50:23.226 [INFO][5428] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:23.237477 containerd[1464]: 2025-02-13 19:50:23.226 [INFO][5428] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:50:23.237477 containerd[1464]: 2025-02-13 19:50:23.231 [WARNING][5428] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" HandleID="k8s-pod-network.9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" Workload="localhost-k8s-calico--apiserver--57797d99cf--j56gq-eth0" Feb 13 19:50:23.237477 containerd[1464]: 2025-02-13 19:50:23.231 [INFO][5428] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" HandleID="k8s-pod-network.9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" Workload="localhost-k8s-calico--apiserver--57797d99cf--j56gq-eth0" Feb 13 19:50:23.237477 containerd[1464]: 2025-02-13 19:50:23.232 [INFO][5428] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:50:23.237477 containerd[1464]: 2025-02-13 19:50:23.235 [INFO][5421] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" Feb 13 19:50:23.237909 containerd[1464]: time="2025-02-13T19:50:23.237517478Z" level=info msg="TearDown network for sandbox \"9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305\" successfully" Feb 13 19:50:23.237909 containerd[1464]: time="2025-02-13T19:50:23.237542645Z" level=info msg="StopPodSandbox for \"9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305\" returns successfully" Feb 13 19:50:23.238054 containerd[1464]: time="2025-02-13T19:50:23.238026587Z" level=info msg="RemovePodSandbox for \"9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305\"" Feb 13 19:50:23.238094 containerd[1464]: time="2025-02-13T19:50:23.238053998Z" level=info msg="Forcibly stopping sandbox \"9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305\"" Feb 13 19:50:23.300892 containerd[1464]: 2025-02-13 19:50:23.271 [WARNING][5451] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57797d99cf--j56gq-eth0", GenerateName:"calico-apiserver-57797d99cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"607000e1-6cc7-4c34-945a-f49ad59d4c78", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57797d99cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"091515c6d4696d0b5d17538fd53d889618811730b4ee5a8a4afcfcdebf8e39f2", Pod:"calico-apiserver-57797d99cf-j56gq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1aff0a6ec82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:23.300892 containerd[1464]: 2025-02-13 19:50:23.271 [INFO][5451] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" Feb 13 19:50:23.300892 containerd[1464]: 2025-02-13 19:50:23.271 [INFO][5451] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" iface="eth0" netns="" Feb 13 19:50:23.300892 containerd[1464]: 2025-02-13 19:50:23.271 [INFO][5451] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" Feb 13 19:50:23.300892 containerd[1464]: 2025-02-13 19:50:23.271 [INFO][5451] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" Feb 13 19:50:23.300892 containerd[1464]: 2025-02-13 19:50:23.290 [INFO][5459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" HandleID="k8s-pod-network.9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" Workload="localhost-k8s-calico--apiserver--57797d99cf--j56gq-eth0" Feb 13 19:50:23.300892 containerd[1464]: 2025-02-13 19:50:23.290 [INFO][5459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:23.300892 containerd[1464]: 2025-02-13 19:50:23.291 [INFO][5459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:50:23.300892 containerd[1464]: 2025-02-13 19:50:23.295 [WARNING][5459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" HandleID="k8s-pod-network.9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" Workload="localhost-k8s-calico--apiserver--57797d99cf--j56gq-eth0" Feb 13 19:50:23.300892 containerd[1464]: 2025-02-13 19:50:23.295 [INFO][5459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" HandleID="k8s-pod-network.9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" Workload="localhost-k8s-calico--apiserver--57797d99cf--j56gq-eth0" Feb 13 19:50:23.300892 containerd[1464]: 2025-02-13 19:50:23.296 [INFO][5459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:50:23.300892 containerd[1464]: 2025-02-13 19:50:23.298 [INFO][5451] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305" Feb 13 19:50:23.301458 containerd[1464]: time="2025-02-13T19:50:23.301408891Z" level=info msg="TearDown network for sandbox \"9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305\" successfully" Feb 13 19:50:23.305259 containerd[1464]: time="2025-02-13T19:50:23.305227992Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:50:23.305312 containerd[1464]: time="2025-02-13T19:50:23.305275753Z" level=info msg="RemovePodSandbox \"9329d6e0ca51e4b104e157df76eb0420da5dfd4f5dec7cec6560df19c9862305\" returns successfully" Feb 13 19:50:23.305779 containerd[1464]: time="2025-02-13T19:50:23.305749454Z" level=info msg="StopPodSandbox for \"c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8\"" Feb 13 19:50:23.370782 containerd[1464]: 2025-02-13 19:50:23.340 [WARNING][5481] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--6wc8l-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e18d693f-e3ac-4db7-8c9c-6652e5baff8f", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162", Pod:"coredns-7db6d8ff4d-6wc8l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a1e488792a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:23.370782 containerd[1464]: 2025-02-13 19:50:23.340 [INFO][5481] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" Feb 13 19:50:23.370782 containerd[1464]: 2025-02-13 19:50:23.340 [INFO][5481] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" iface="eth0" netns="" Feb 13 19:50:23.370782 containerd[1464]: 2025-02-13 19:50:23.340 [INFO][5481] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" Feb 13 19:50:23.370782 containerd[1464]: 2025-02-13 19:50:23.340 [INFO][5481] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" Feb 13 19:50:23.370782 containerd[1464]: 2025-02-13 19:50:23.359 [INFO][5488] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" HandleID="k8s-pod-network.c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" Workload="localhost-k8s-coredns--7db6d8ff4d--6wc8l-eth0" Feb 13 19:50:23.370782 containerd[1464]: 2025-02-13 19:50:23.359 [INFO][5488] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:23.370782 containerd[1464]: 2025-02-13 19:50:23.359 [INFO][5488] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:50:23.370782 containerd[1464]: 2025-02-13 19:50:23.364 [WARNING][5488] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" HandleID="k8s-pod-network.c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" Workload="localhost-k8s-coredns--7db6d8ff4d--6wc8l-eth0" Feb 13 19:50:23.370782 containerd[1464]: 2025-02-13 19:50:23.364 [INFO][5488] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" HandleID="k8s-pod-network.c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" Workload="localhost-k8s-coredns--7db6d8ff4d--6wc8l-eth0" Feb 13 19:50:23.370782 containerd[1464]: 2025-02-13 19:50:23.366 [INFO][5488] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:50:23.370782 containerd[1464]: 2025-02-13 19:50:23.368 [INFO][5481] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" Feb 13 19:50:23.371433 containerd[1464]: time="2025-02-13T19:50:23.370819637Z" level=info msg="TearDown network for sandbox \"c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8\" successfully" Feb 13 19:50:23.371433 containerd[1464]: time="2025-02-13T19:50:23.370843483Z" level=info msg="StopPodSandbox for \"c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8\" returns successfully" Feb 13 19:50:23.371433 containerd[1464]: time="2025-02-13T19:50:23.371353061Z" level=info msg="RemovePodSandbox for \"c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8\"" Feb 13 19:50:23.371433 containerd[1464]: time="2025-02-13T19:50:23.371386976Z" level=info msg="Forcibly stopping sandbox \"c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8\"" Feb 13 19:50:23.441946 containerd[1464]: 2025-02-13 19:50:23.409 [WARNING][5510] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--6wc8l-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e18d693f-e3ac-4db7-8c9c-6652e5baff8f", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fbf6c1915a3b307d43f5702d021ee805bba66057f201557bde3913cdbfa85162", Pod:"coredns-7db6d8ff4d-6wc8l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a1e488792a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:50:23.441946 containerd[1464]: 2025-02-13 19:50:23.409 [INFO][5510] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" Feb 13 19:50:23.441946 containerd[1464]: 2025-02-13 19:50:23.409 [INFO][5510] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" iface="eth0" netns="" Feb 13 19:50:23.441946 containerd[1464]: 2025-02-13 19:50:23.409 [INFO][5510] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" Feb 13 19:50:23.441946 containerd[1464]: 2025-02-13 19:50:23.409 [INFO][5510] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" Feb 13 19:50:23.441946 containerd[1464]: 2025-02-13 19:50:23.429 [INFO][5517] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" HandleID="k8s-pod-network.c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" Workload="localhost-k8s-coredns--7db6d8ff4d--6wc8l-eth0" Feb 13 19:50:23.441946 containerd[1464]: 2025-02-13 19:50:23.429 [INFO][5517] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:50:23.441946 containerd[1464]: 2025-02-13 19:50:23.430 [INFO][5517] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:50:23.441946 containerd[1464]: 2025-02-13 19:50:23.435 [WARNING][5517] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" HandleID="k8s-pod-network.c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" Workload="localhost-k8s-coredns--7db6d8ff4d--6wc8l-eth0" Feb 13 19:50:23.441946 containerd[1464]: 2025-02-13 19:50:23.435 [INFO][5517] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" HandleID="k8s-pod-network.c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" Workload="localhost-k8s-coredns--7db6d8ff4d--6wc8l-eth0" Feb 13 19:50:23.441946 containerd[1464]: 2025-02-13 19:50:23.436 [INFO][5517] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:50:23.441946 containerd[1464]: 2025-02-13 19:50:23.439 [INFO][5510] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8" Feb 13 19:50:23.441946 containerd[1464]: time="2025-02-13T19:50:23.441920686Z" level=info msg="TearDown network for sandbox \"c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8\" successfully" Feb 13 19:50:23.446012 containerd[1464]: time="2025-02-13T19:50:23.445984658Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:50:23.446071 containerd[1464]: time="2025-02-13T19:50:23.446032890Z" level=info msg="RemovePodSandbox \"c5cad5415e8985397d5313bba2384c262882b846622428a57b7ce606c25ff6b8\" returns successfully" Feb 13 19:50:24.048469 systemd[1]: Started sshd@15-10.0.0.12:22-10.0.0.1:60392.service - OpenSSH per-connection server daemon (10.0.0.1:60392). Feb 13 19:50:24.093472 sshd[5525]: Accepted publickey for core from 10.0.0.1 port 60392 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:50:24.095222 sshd[5525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:24.099046 systemd-logind[1450]: New session 16 of user core. Feb 13 19:50:24.109858 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:50:24.325026 sshd[5525]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:24.336590 systemd[1]: sshd@15-10.0.0.12:22-10.0.0.1:60392.service: Deactivated successfully. Feb 13 19:50:24.338417 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:50:24.340046 systemd-logind[1450]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:50:24.348039 systemd[1]: Started sshd@16-10.0.0.12:22-10.0.0.1:60406.service - OpenSSH per-connection server daemon (10.0.0.1:60406). Feb 13 19:50:24.349019 systemd-logind[1450]: Removed session 16. Feb 13 19:50:24.381690 sshd[5539]: Accepted publickey for core from 10.0.0.1 port 60406 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw Feb 13 19:50:24.383198 sshd[5539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:24.386998 systemd-logind[1450]: New session 17 of user core. Feb 13 19:50:24.402851 systemd[1]: Started session-17.scope - Session 17 of User core. 
Feb 13 19:50:24.585557 sshd[5539]: pam_unix(sshd:session): session closed for user core
Feb 13 19:50:24.596628 systemd[1]: sshd@16-10.0.0.12:22-10.0.0.1:60406.service: Deactivated successfully.
Feb 13 19:50:24.598435 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 19:50:24.600255 systemd-logind[1450]: Session 17 logged out. Waiting for processes to exit.
Feb 13 19:50:24.601566 systemd[1]: Started sshd@17-10.0.0.12:22-10.0.0.1:60408.service - OpenSSH per-connection server daemon (10.0.0.1:60408).
Feb 13 19:50:24.602844 systemd-logind[1450]: Removed session 17.
Feb 13 19:50:24.657659 sshd[5557]: Accepted publickey for core from 10.0.0.1 port 60408 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw
Feb 13 19:50:24.659191 sshd[5557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:50:24.662913 systemd-logind[1450]: New session 18 of user core.
Feb 13 19:50:24.681833 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 19:50:26.823046 sshd[5557]: pam_unix(sshd:session): session closed for user core
Feb 13 19:50:26.836101 systemd[1]: sshd@17-10.0.0.12:22-10.0.0.1:60408.service: Deactivated successfully.
Feb 13 19:50:26.839867 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 19:50:26.843818 systemd-logind[1450]: Session 18 logged out. Waiting for processes to exit.
Feb 13 19:50:26.852593 systemd[1]: Started sshd@18-10.0.0.12:22-10.0.0.1:47552.service - OpenSSH per-connection server daemon (10.0.0.1:47552).
Feb 13 19:50:26.853827 systemd-logind[1450]: Removed session 18.
Feb 13 19:50:26.891281 sshd[5598]: Accepted publickey for core from 10.0.0.1 port 47552 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw
Feb 13 19:50:26.892973 sshd[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:50:26.897212 systemd-logind[1450]: New session 19 of user core.
Feb 13 19:50:26.906860 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 19:50:27.293872 sshd[5598]: pam_unix(sshd:session): session closed for user core
Feb 13 19:50:27.302159 systemd[1]: sshd@18-10.0.0.12:22-10.0.0.1:47552.service: Deactivated successfully.
Feb 13 19:50:27.304149 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 19:50:27.305828 systemd-logind[1450]: Session 19 logged out. Waiting for processes to exit.
Feb 13 19:50:27.314124 systemd[1]: Started sshd@19-10.0.0.12:22-10.0.0.1:47556.service - OpenSSH per-connection server daemon (10.0.0.1:47556).
Feb 13 19:50:27.315156 systemd-logind[1450]: Removed session 19.
Feb 13 19:50:27.348067 sshd[5610]: Accepted publickey for core from 10.0.0.1 port 47556 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw
Feb 13 19:50:27.349975 sshd[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:50:27.354160 systemd-logind[1450]: New session 20 of user core.
Feb 13 19:50:27.366877 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 19:50:27.498555 sshd[5610]: pam_unix(sshd:session): session closed for user core
Feb 13 19:50:27.502787 systemd[1]: sshd@19-10.0.0.12:22-10.0.0.1:47556.service: Deactivated successfully.
Feb 13 19:50:27.504796 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 19:50:27.505484 systemd-logind[1450]: Session 20 logged out. Waiting for processes to exit.
Feb 13 19:50:27.506317 systemd-logind[1450]: Removed session 20.
Feb 13 19:50:32.516367 systemd[1]: Started sshd@20-10.0.0.12:22-10.0.0.1:47566.service - OpenSSH per-connection server daemon (10.0.0.1:47566).
Feb 13 19:50:32.553669 sshd[5629]: Accepted publickey for core from 10.0.0.1 port 47566 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw
Feb 13 19:50:32.555661 sshd[5629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:50:32.559545 systemd-logind[1450]: New session 21 of user core.
Feb 13 19:50:32.568908 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 19:50:32.676132 sshd[5629]: pam_unix(sshd:session): session closed for user core
Feb 13 19:50:32.680185 systemd[1]: sshd@20-10.0.0.12:22-10.0.0.1:47566.service: Deactivated successfully.
Feb 13 19:50:32.682326 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 19:50:32.682930 systemd-logind[1450]: Session 21 logged out. Waiting for processes to exit.
Feb 13 19:50:32.683861 systemd-logind[1450]: Removed session 21.
Feb 13 19:50:35.074626 kubelet[2589]: E0213 19:50:35.074586 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:50:37.695949 systemd[1]: Started sshd@21-10.0.0.12:22-10.0.0.1:50534.service - OpenSSH per-connection server daemon (10.0.0.1:50534).
Feb 13 19:50:37.734324 sshd[5665]: Accepted publickey for core from 10.0.0.1 port 50534 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw
Feb 13 19:50:37.736245 sshd[5665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:50:37.741013 systemd-logind[1450]: New session 22 of user core.
Feb 13 19:50:37.753945 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 19:50:37.866468 sshd[5665]: pam_unix(sshd:session): session closed for user core
Feb 13 19:50:37.871784 systemd[1]: sshd@21-10.0.0.12:22-10.0.0.1:50534.service: Deactivated successfully.
Feb 13 19:50:37.874466 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 19:50:37.875211 systemd-logind[1450]: Session 22 logged out. Waiting for processes to exit.
Feb 13 19:50:37.876520 systemd-logind[1450]: Removed session 22.
Feb 13 19:50:38.340305 kubelet[2589]: E0213 19:50:38.340258 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:50:42.877327 systemd[1]: Started sshd@22-10.0.0.12:22-10.0.0.1:50546.service - OpenSSH per-connection server daemon (10.0.0.1:50546).
Feb 13 19:50:42.922455 sshd[5683]: Accepted publickey for core from 10.0.0.1 port 50546 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw
Feb 13 19:50:42.924219 sshd[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:50:42.929440 systemd-logind[1450]: New session 23 of user core.
Feb 13 19:50:42.933858 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 19:50:43.171097 sshd[5683]: pam_unix(sshd:session): session closed for user core
Feb 13 19:50:43.175343 systemd-logind[1450]: Session 23 logged out. Waiting for processes to exit.
Feb 13 19:50:43.175542 systemd[1]: sshd@22-10.0.0.12:22-10.0.0.1:50546.service: Deactivated successfully.
Feb 13 19:50:43.177426 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 19:50:43.179598 systemd-logind[1450]: Removed session 23.
Feb 13 19:50:44.339900 kubelet[2589]: E0213 19:50:44.339865 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:50:45.468755 kubelet[2589]: I0213 19:50:45.468060 2589 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:50:48.189009 systemd[1]: Started sshd@23-10.0.0.12:22-10.0.0.1:54672.service - OpenSSH per-connection server daemon (10.0.0.1:54672).
Feb 13 19:50:48.225158 sshd[5705]: Accepted publickey for core from 10.0.0.1 port 54672 ssh2: RSA SHA256:w6wKJ467a9+7tw3THl4xthj/6d03LGshuXCeFa4eatw
Feb 13 19:50:48.226875 sshd[5705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:50:48.230790 systemd-logind[1450]: New session 24 of user core.
Feb 13 19:50:48.242845 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 19:50:48.348774 sshd[5705]: pam_unix(sshd:session): session closed for user core
Feb 13 19:50:48.353040 systemd[1]: sshd@23-10.0.0.12:22-10.0.0.1:54672.service: Deactivated successfully.
Feb 13 19:50:48.355092 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 19:50:48.355740 systemd-logind[1450]: Session 24 logged out. Waiting for processes to exit.
Feb 13 19:50:48.356564 systemd-logind[1450]: Removed session 24.
Feb 13 19:50:49.340384 kubelet[2589]: E0213 19:50:49.340328 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"