Feb 13 20:00:29.862753 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025 Feb 13 20:00:29.862774 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:00:29.862785 kernel: BIOS-provided physical RAM map: Feb 13 20:00:29.862791 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 13 20:00:29.862797 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 13 20:00:29.862803 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 13 20:00:29.862811 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Feb 13 20:00:29.862818 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Feb 13 20:00:29.862825 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Feb 13 20:00:29.862834 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Feb 13 20:00:29.862840 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 13 20:00:29.862847 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 13 20:00:29.862855 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Feb 13 20:00:29.862862 kernel: NX (Execute Disable) protection: active Feb 13 20:00:29.862871 kernel: APIC: Static calls initialized Feb 13 20:00:29.862880 kernel: SMBIOS 2.8 present. 
Feb 13 20:00:29.862887 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Feb 13 20:00:29.862894 kernel: Hypervisor detected: KVM Feb 13 20:00:29.862901 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 20:00:29.862908 kernel: kvm-clock: using sched offset of 2184826430 cycles Feb 13 20:00:29.862915 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 20:00:29.862922 kernel: tsc: Detected 2794.750 MHz processor Feb 13 20:00:29.862929 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 20:00:29.862937 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 20:00:29.862944 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Feb 13 20:00:29.862953 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Feb 13 20:00:29.862961 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 20:00:29.862967 kernel: Using GB pages for direct mapping Feb 13 20:00:29.862974 kernel: ACPI: Early table checksum verification disabled Feb 13 20:00:29.862981 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Feb 13 20:00:29.862988 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:00:29.862995 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:00:29.863002 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:00:29.863011 kernel: ACPI: FACS 0x000000009CFE0000 000040 Feb 13 20:00:29.863018 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:00:29.863025 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:00:29.863032 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:00:29.863039 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 20:00:29.863046 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Feb 13 20:00:29.863053 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Feb 13 20:00:29.863063 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Feb 13 20:00:29.863190 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Feb 13 20:00:29.863197 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Feb 13 20:00:29.863204 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Feb 13 20:00:29.863212 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Feb 13 20:00:29.863218 kernel: No NUMA configuration found Feb 13 20:00:29.863226 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Feb 13 20:00:29.863244 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Feb 13 20:00:29.863254 kernel: Zone ranges: Feb 13 20:00:29.863270 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 20:00:29.863286 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Feb 13 20:00:29.863301 kernel: Normal empty Feb 13 20:00:29.863317 kernel: Movable zone start for each node Feb 13 20:00:29.863346 kernel: Early memory node ranges Feb 13 20:00:29.863355 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 13 20:00:29.863376 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Feb 13 20:00:29.863384 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Feb 13 20:00:29.863394 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 20:00:29.863401 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 13 20:00:29.863408 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Feb 13 20:00:29.863416 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 13 20:00:29.863423 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 20:00:29.863430 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 13 20:00:29.863437 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 20:00:29.863444 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 20:00:29.863451 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 20:00:29.863461 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 20:00:29.863468 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 20:00:29.863475 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 20:00:29.863482 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 20:00:29.863489 kernel: TSC deadline timer available Feb 13 20:00:29.863496 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Feb 13 20:00:29.863503 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Feb 13 20:00:29.863510 kernel: kvm-guest: KVM setup pv remote TLB flush Feb 13 20:00:29.863517 kernel: kvm-guest: setup PV sched yield Feb 13 20:00:29.863524 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Feb 13 20:00:29.863534 kernel: Booting paravirtualized kernel on KVM Feb 13 20:00:29.863541 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 20:00:29.863548 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Feb 13 20:00:29.863555 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Feb 13 20:00:29.863563 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Feb 13 20:00:29.863569 kernel: pcpu-alloc: [0] 0 1 2 3 Feb 13 20:00:29.863576 kernel: kvm-guest: PV spinlocks enabled Feb 13 20:00:29.863583 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 20:00:29.863592 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:00:29.863602 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 20:00:29.863609 kernel: random: crng init done Feb 13 20:00:29.863616 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 20:00:29.863624 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 20:00:29.863631 kernel: Fallback order for Node 0: 0 Feb 13 20:00:29.863638 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Feb 13 20:00:29.863645 kernel: Policy zone: DMA32 Feb 13 20:00:29.863652 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 20:00:29.863661 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 136900K reserved, 0K cma-reserved) Feb 13 20:00:29.863669 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 20:00:29.863676 kernel: ftrace: allocating 37921 entries in 149 pages Feb 13 20:00:29.863683 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 20:00:29.863690 kernel: Dynamic Preempt: voluntary Feb 13 20:00:29.863697 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 20:00:29.863705 kernel: rcu: RCU event tracing is enabled. Feb 13 20:00:29.863713 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 20:00:29.863720 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 20:00:29.863731 kernel: Rude variant of Tasks RCU enabled. Feb 13 20:00:29.863740 kernel: Tracing variant of Tasks RCU enabled. Feb 13 20:00:29.863747 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 20:00:29.863754 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 20:00:29.863761 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Feb 13 20:00:29.863769 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 20:00:29.863775 kernel: Console: colour VGA+ 80x25 Feb 13 20:00:29.863782 kernel: printk: console [ttyS0] enabled Feb 13 20:00:29.863789 kernel: ACPI: Core revision 20230628 Feb 13 20:00:29.863799 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 13 20:00:29.863806 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 20:00:29.863813 kernel: x2apic enabled Feb 13 20:00:29.863821 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 20:00:29.863828 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Feb 13 20:00:29.863835 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Feb 13 20:00:29.863842 kernel: kvm-guest: setup PV IPIs Feb 13 20:00:29.863858 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 13 20:00:29.863866 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 13 20:00:29.863873 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Feb 13 20:00:29.863881 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 13 20:00:29.863888 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Feb 13 20:00:29.863898 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Feb 13 20:00:29.863906 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 20:00:29.863913 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 20:00:29.863921 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 20:00:29.863928 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 20:00:29.863938 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Feb 13 20:00:29.863945 kernel: RETBleed: Mitigation: untrained return thunk Feb 13 20:00:29.863953 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 20:00:29.863960 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 20:00:29.863968 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Feb 13 20:00:29.863976 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Feb 13 20:00:29.863983 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Feb 13 20:00:29.863991 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 20:00:29.864001 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 20:00:29.864008 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 20:00:29.864016 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 20:00:29.864023 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Feb 13 20:00:29.864031 kernel: Freeing SMP alternatives memory: 32K Feb 13 20:00:29.864038 kernel: pid_max: default: 32768 minimum: 301 Feb 13 20:00:29.864045 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 20:00:29.864053 kernel: landlock: Up and running. Feb 13 20:00:29.864060 kernel: SELinux: Initializing. Feb 13 20:00:29.864082 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 20:00:29.864090 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 20:00:29.864098 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Feb 13 20:00:29.864112 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 20:00:29.864120 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 20:00:29.864127 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 20:00:29.864135 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Feb 13 20:00:29.864142 kernel: ... version: 0 Feb 13 20:00:29.864150 kernel: ... bit width: 48 Feb 13 20:00:29.864160 kernel: ... generic registers: 6 Feb 13 20:00:29.864168 kernel: ... value mask: 0000ffffffffffff Feb 13 20:00:29.864175 kernel: ... max period: 00007fffffffffff Feb 13 20:00:29.864182 kernel: ... fixed-purpose events: 0 Feb 13 20:00:29.864190 kernel: ... 
event mask: 000000000000003f Feb 13 20:00:29.864197 kernel: signal: max sigframe size: 1776 Feb 13 20:00:29.864204 kernel: rcu: Hierarchical SRCU implementation. Feb 13 20:00:29.864212 kernel: rcu: Max phase no-delay instances is 400. Feb 13 20:00:29.864219 kernel: smp: Bringing up secondary CPUs ... Feb 13 20:00:29.864229 kernel: smpboot: x86: Booting SMP configuration: Feb 13 20:00:29.864236 kernel: .... node #0, CPUs: #1 #2 #3 Feb 13 20:00:29.864244 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 20:00:29.864252 kernel: smpboot: Max logical packages: 1 Feb 13 20:00:29.864259 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Feb 13 20:00:29.864266 kernel: devtmpfs: initialized Feb 13 20:00:29.864274 kernel: x86/mm: Memory block size: 128MB Feb 13 20:00:29.864281 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 20:00:29.864289 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 20:00:29.864299 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 20:00:29.864306 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 20:00:29.864314 kernel: audit: initializing netlink subsys (disabled) Feb 13 20:00:29.864321 kernel: audit: type=2000 audit(1739476830.089:1): state=initialized audit_enabled=0 res=1 Feb 13 20:00:29.864329 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 20:00:29.864336 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 20:00:29.864344 kernel: cpuidle: using governor menu Feb 13 20:00:29.864351 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 20:00:29.864358 kernel: dca service started, version 1.12.1 Feb 13 20:00:29.864369 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Feb 13 20:00:29.864376 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Feb 13 20:00:29.864384 kernel: PCI: Using configuration type 1 for base access Feb 13 20:00:29.864391 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 20:00:29.864398 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 20:00:29.864406 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 20:00:29.864413 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 20:00:29.864421 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 20:00:29.864428 kernel: ACPI: Added _OSI(Module Device) Feb 13 20:00:29.864438 kernel: ACPI: Added _OSI(Processor Device) Feb 13 20:00:29.864445 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 20:00:29.864452 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 20:00:29.864460 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 20:00:29.864467 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 20:00:29.864475 kernel: ACPI: Interpreter enabled Feb 13 20:00:29.864482 kernel: ACPI: PM: (supports S0 S3 S5) Feb 13 20:00:29.864489 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 20:00:29.864497 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 20:00:29.864506 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 20:00:29.864514 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Feb 13 20:00:29.864521 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 20:00:29.864792 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 20:00:29.865012 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Feb 13 20:00:29.865185 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Feb 13 20:00:29.865197 kernel: PCI host bridge to bus 0000:00 Feb 13 20:00:29.865329 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 20:00:29.865445 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 20:00:29.865558 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 20:00:29.865671 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Feb 13 20:00:29.865786 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Feb 13 20:00:29.865896 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Feb 13 20:00:29.866007 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 20:00:29.866223 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Feb 13 20:00:29.866389 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Feb 13 20:00:29.866515 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Feb 13 20:00:29.866636 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Feb 13 20:00:29.866757 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Feb 13 20:00:29.866878 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 20:00:29.867009 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 20:00:29.867215 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Feb 13 20:00:29.867342 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Feb 13 20:00:29.867465 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Feb 13 20:00:29.867596 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Feb 13 20:00:29.867719 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Feb 13 20:00:29.867845 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Feb 13 
20:00:29.867972 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Feb 13 20:00:29.868141 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 13 20:00:29.868267 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Feb 13 20:00:29.868388 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Feb 13 20:00:29.868510 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Feb 13 20:00:29.868632 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Feb 13 20:00:29.868765 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Feb 13 20:00:29.868892 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Feb 13 20:00:29.869020 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Feb 13 20:00:29.869193 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Feb 13 20:00:29.869314 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Feb 13 20:00:29.869442 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Feb 13 20:00:29.869562 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Feb 13 20:00:29.869572 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 20:00:29.869584 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 20:00:29.869592 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 20:00:29.869599 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 20:00:29.869607 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Feb 13 20:00:29.869615 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Feb 13 20:00:29.869623 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Feb 13 20:00:29.869631 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Feb 13 20:00:29.869638 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Feb 13 20:00:29.869646 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Feb 13 20:00:29.869655 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Feb 13 20:00:29.869663 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Feb 13 20:00:29.869671 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Feb 13 20:00:29.869678 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Feb 13 20:00:29.869686 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Feb 13 20:00:29.869693 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Feb 13 20:00:29.869701 kernel: iommu: Default domain type: Translated Feb 13 20:00:29.869708 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 20:00:29.869716 kernel: PCI: Using ACPI for IRQ routing Feb 13 20:00:29.869725 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 20:00:29.869733 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 13 20:00:29.869740 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Feb 13 20:00:29.869865 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Feb 13 20:00:29.869991 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Feb 13 20:00:29.870135 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 20:00:29.870146 kernel: vgaarb: loaded Feb 13 20:00:29.870154 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 13 20:00:29.870166 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 13 20:00:29.870174 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 20:00:29.870181 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 
20:00:29.870189 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 20:00:29.870196 kernel: pnp: PnP ACPI init Feb 13 20:00:29.870333 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Feb 13 20:00:29.870344 kernel: pnp: PnP ACPI: found 6 devices Feb 13 20:00:29.870352 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 20:00:29.870363 kernel: NET: Registered PF_INET protocol family Feb 13 20:00:29.870370 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 20:00:29.870378 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 20:00:29.870386 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 20:00:29.870393 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 20:00:29.870401 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 20:00:29.870409 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 20:00:29.870416 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 20:00:29.870424 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 20:00:29.870434 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 20:00:29.870441 kernel: NET: Registered PF_XDP protocol family Feb 13 20:00:29.870556 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 20:00:29.870669 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 20:00:29.870781 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 20:00:29.870893 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Feb 13 20:00:29.871004 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Feb 13 20:00:29.871139 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Feb 13 20:00:29.871155 kernel: PCI: CLS 0 bytes, default 64 Feb 13 20:00:29.871162 kernel: Initialise system trusted keyrings Feb 13 20:00:29.871171 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 20:00:29.871179 kernel: Key type asymmetric registered Feb 13 20:00:29.871186 kernel: Asymmetric key parser 'x509' registered Feb 13 20:00:29.871194 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 20:00:29.871201 kernel: io scheduler mq-deadline registered Feb 13 20:00:29.871209 kernel: io scheduler kyber registered Feb 13 20:00:29.871216 kernel: io scheduler bfq registered Feb 13 20:00:29.871224 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 20:00:29.871235 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Feb 13 20:00:29.871243 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Feb 13 20:00:29.871250 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Feb 13 20:00:29.871258 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 20:00:29.871266 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 20:00:29.871273 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 20:00:29.871281 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 20:00:29.871289 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 20:00:29.871417 kernel: rtc_cmos 00:04: RTC can wake from S4 Feb 13 20:00:29.871432 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 20:00:29.871546 kernel: 
rtc_cmos 00:04: registered as rtc0 Feb 13 20:00:29.871661 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T20:00:29 UTC (1739476829) Feb 13 20:00:29.871776 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Feb 13 20:00:29.871786 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Feb 13 20:00:29.871794 kernel: NET: Registered PF_INET6 protocol family Feb 13 20:00:29.871801 kernel: Segment Routing with IPv6 Feb 13 20:00:29.871812 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 20:00:29.871819 kernel: NET: Registered PF_PACKET protocol family Feb 13 20:00:29.871827 kernel: Key type dns_resolver registered Feb 13 20:00:29.871834 kernel: IPI shorthand broadcast: enabled Feb 13 20:00:29.871842 kernel: sched_clock: Marking stable (555002234, 114472075)->(721973530, -52499221) Feb 13 20:00:29.871850 kernel: registered taskstats version 1 Feb 13 20:00:29.871857 kernel: Loading compiled-in X.509 certificates Feb 13 20:00:29.871865 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93' Feb 13 20:00:29.871872 kernel: Key type .fscrypt registered Feb 13 20:00:29.871882 kernel: Key type fscrypt-provisioning registered Feb 13 20:00:29.871890 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 20:00:29.871897 kernel: ima: Allocated hash algorithm: sha1 Feb 13 20:00:29.871905 kernel: ima: No architecture policies found Feb 13 20:00:29.871912 kernel: clk: Disabling unused clocks Feb 13 20:00:29.871920 kernel: Freeing unused kernel image (initmem) memory: 42840K Feb 13 20:00:29.871927 kernel: Write protecting the kernel read-only data: 36864k Feb 13 20:00:29.871935 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Feb 13 20:00:29.871942 kernel: Run /init as init process Feb 13 20:00:29.871953 kernel: with arguments: Feb 13 20:00:29.871960 kernel: /init Feb 13 20:00:29.871967 kernel: with environment: Feb 13 20:00:29.871975 kernel: HOME=/ Feb 13 20:00:29.871983 kernel: TERM=linux Feb 13 20:00:29.871990 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 20:00:29.872000 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:00:29.872009 systemd[1]: Detected virtualization kvm. Feb 13 20:00:29.872021 systemd[1]: Detected architecture x86-64. Feb 13 20:00:29.872029 systemd[1]: Running in initrd. Feb 13 20:00:29.872036 systemd[1]: No hostname configured, using default hostname. Feb 13 20:00:29.872044 systemd[1]: Hostname set to <localhost>. Feb 13 20:00:29.872053 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:00:29.872060 systemd[1]: Queued start job for default target initrd.target. Feb 13 20:00:29.872082 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:00:29.872090 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:00:29.872110 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 20:00:29.872131 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:00:29.872141 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 20:00:29.872150 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 20:00:29.872160 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 20:00:29.872171 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 20:00:29.872180 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:00:29.872188 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:00:29.872197 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:00:29.872205 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:00:29.872213 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:00:29.872221 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:00:29.872229 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:00:29.872240 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:00:29.872249 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 20:00:29.872257 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 20:00:29.872265 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:00:29.872273 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:00:29.872282 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:00:29.872290 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:00:29.872299 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 20:00:29.872307 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:00:29.872317 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 20:00:29.872326 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 20:00:29.872334 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:00:29.872342 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:00:29.872350 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:00:29.872358 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 20:00:29.872367 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:00:29.872375 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 20:00:29.872404 systemd-journald[192]: Collecting audit messages is disabled. Feb 13 20:00:29.872426 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:00:29.872439 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:00:29.872448 systemd-journald[192]: Journal started Feb 13 20:00:29.872467 systemd-journald[192]: Runtime Journal (/run/log/journal/26ad58365bfc4f4a9adafa71e7c28e15) is 6.0M, max 48.4M, 42.3M free. Feb 13 20:00:29.865056 systemd-modules-load[194]: Inserted module 'overlay' Feb 13 20:00:29.903404 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Feb 13 20:00:29.903419 kernel: Bridge firewalling registered Feb 13 20:00:29.891433 systemd-modules-load[194]: Inserted module 'br_netfilter' Feb 13 20:00:29.905095 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:00:29.908095 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:00:29.909404 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:00:29.923270 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:00:29.924136 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:00:29.924906 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:00:29.929730 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:00:29.938790 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:00:29.939724 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:00:29.944399 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:00:29.945315 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 20:00:29.953443 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:00:29.954944 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:00:29.964616 dracut-cmdline[228]: dracut-dracut-053 Feb 13 20:00:29.968028 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:00:29.987107 systemd-resolved[233]: Positive Trust Anchors: Feb 13 20:00:29.987122 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:00:29.987160 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:00:29.989698 systemd-resolved[233]: Defaulting to hostname 'linux'. Feb 13 20:00:29.990680 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:00:29.996644 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:00:30.063091 kernel: SCSI subsystem initialized Feb 13 20:00:30.072088 kernel: Loading iSCSI transport class v2.0-870. Feb 13 20:00:30.133105 kernel: iscsi: registered transport (tcp) Feb 13 20:00:30.153330 kernel: iscsi: registered transport (qla4xxx) Feb 13 20:00:30.153358 kernel: QLogic iSCSI HBA Driver Feb 13 20:00:30.203819 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Feb 13 20:00:30.214206 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 20:00:30.239541 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 20:00:30.239585 kernel: device-mapper: uevent: version 1.0.3 Feb 13 20:00:30.240576 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 20:00:30.281099 kernel: raid6: avx2x4 gen() 27234 MB/s Feb 13 20:00:30.327095 kernel: raid6: avx2x2 gen() 27493 MB/s Feb 13 20:00:30.344175 kernel: raid6: avx2x1 gen() 24347 MB/s Feb 13 20:00:30.344193 kernel: raid6: using algorithm avx2x2 gen() 27493 MB/s Feb 13 20:00:30.362188 kernel: raid6: .... xor() 19883 MB/s, rmw enabled Feb 13 20:00:30.362214 kernel: raid6: using avx2x2 recovery algorithm Feb 13 20:00:30.382097 kernel: xor: automatically using best checksumming function avx Feb 13 20:00:30.535116 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 20:00:30.547635 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:00:30.561208 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:00:30.572383 systemd-udevd[414]: Using default interface naming scheme 'v255'. Feb 13 20:00:30.577122 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:00:30.587252 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 20:00:30.601461 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Feb 13 20:00:30.632914 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:00:30.646293 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:00:30.706426 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:00:30.714208 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 20:00:30.729317 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 20:00:30.732351 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:00:30.735446 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:00:30.741091 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Feb 13 20:00:30.769089 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 20:00:30.769387 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 20:00:30.769412 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 20:00:30.769437 kernel: GPT:9289727 != 19775487 Feb 13 20:00:30.769448 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 20:00:30.769458 kernel: GPT:9289727 != 19775487 Feb 13 20:00:30.769468 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 20:00:30.769477 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 20:00:30.738119 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:00:30.747237 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 20:00:30.760748 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:00:30.764335 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:00:30.764446 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 20:00:30.766046 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:00:30.779819 kernel: libata version 3.00 loaded. Feb 13 20:00:30.767367 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:00:30.767539 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:00:30.783041 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 20:00:30.772127 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:00:30.782880 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:00:30.786197 kernel: AES CTR mode by8 optimization enabled Feb 13 20:00:30.791526 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 20:00:30.812450 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 20:00:30.812470 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 20:00:30.812625 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 20:00:30.812762 kernel: scsi host0: ahci Feb 13 20:00:30.813026 kernel: scsi host1: ahci Feb 13 20:00:30.813214 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (470) Feb 13 20:00:30.813227 kernel: scsi host2: ahci Feb 13 20:00:30.813369 kernel: scsi host3: ahci Feb 13 20:00:30.813510 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (463) Feb 13 20:00:30.813527 kernel: scsi host4: ahci Feb 13 20:00:30.813668 kernel: scsi host5: ahci Feb 13 20:00:30.813830 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Feb 13 20:00:30.813841 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Feb 13 20:00:30.813851 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Feb 13 20:00:30.813861 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Feb 13 20:00:30.813871 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Feb 13 20:00:30.813885 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Feb 13 20:00:30.814628 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 20:00:30.847427 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 20:00:30.848897 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:00:30.859759 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:00:30.864692 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 20:00:30.865944 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 20:00:30.878191 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 20:00:30.879978 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:00:30.887839 disk-uuid[556]: Primary Header is updated. Feb 13 20:00:30.887839 disk-uuid[556]: Secondary Entries is updated. Feb 13 20:00:30.887839 disk-uuid[556]: Secondary Header is updated. 
Feb 13 20:00:30.893100 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 20:00:30.897097 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 20:00:30.903047 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:00:31.125118 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 20:00:31.125185 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 20:00:31.125196 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Feb 13 20:00:31.126093 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 20:00:31.127105 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 20:00:31.128091 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 20:00:31.129102 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 13 20:00:31.129118 kernel: ata3.00: applying bridge limits Feb 13 20:00:31.130089 kernel: ata3.00: configured for UDMA/100 Feb 13 20:00:31.132105 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 20:00:31.177608 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 13 20:00:31.189632 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 20:00:31.189645 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 13 20:00:31.898097 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 20:00:31.898216 disk-uuid[558]: The operation has completed successfully. Feb 13 20:00:31.928420 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 20:00:31.928607 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 20:00:31.959244 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 20:00:31.964792 sh[593]: Success Feb 13 20:00:31.978090 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 13 20:00:32.013377 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 20:00:32.022504 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 20:00:32.025708 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 20:00:32.037739 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d Feb 13 20:00:32.037773 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:00:32.037788 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 20:00:32.039514 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 20:00:32.039539 kernel: BTRFS info (device dm-0): using free space tree Feb 13 20:00:32.044350 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 20:00:32.045163 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 20:00:32.055215 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 20:00:32.055938 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 20:00:32.066301 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:00:32.066325 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:00:32.066336 kernel: BTRFS info (device vda6): using free space tree Feb 13 20:00:32.069110 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 20:00:32.078652 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Feb 13 20:00:32.080584 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:00:32.090413 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 20:00:32.098196 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 20:00:32.154858 ignition[687]: Ignition 2.19.0 Feb 13 20:00:32.154870 ignition[687]: Stage: fetch-offline Feb 13 20:00:32.154907 ignition[687]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:00:32.154917 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 20:00:32.155025 ignition[687]: parsed url from cmdline: "" Feb 13 20:00:32.155029 ignition[687]: no config URL provided Feb 13 20:00:32.155035 ignition[687]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:00:32.155054 ignition[687]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:00:32.155107 ignition[687]: op(1): [started] loading QEMU firmware config module Feb 13 20:00:32.155113 ignition[687]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 20:00:32.163911 ignition[687]: op(1): [finished] loading QEMU firmware config module Feb 13 20:00:32.174837 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:00:32.184198 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:00:32.205224 systemd-networkd[780]: lo: Link UP Feb 13 20:00:32.205234 systemd-networkd[780]: lo: Gained carrier Feb 13 20:00:32.206762 systemd-networkd[780]: Enumeration completed Feb 13 20:00:32.206850 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:00:32.207831 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:00:32.207836 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:00:32.208777 systemd[1]: Reached target network.target - Network. Feb 13 20:00:32.210419 systemd-networkd[780]: eth0: Link UP Feb 13 20:00:32.216664 ignition[687]: parsing config with SHA512: 612fe9297ac29691a9ded04a04fbcdfd461bcad7efd9ac61f74eeea82d78e9d7045ba2b534c6f199bd1c635c88417423566f95bc142bd7aac128275e7fc178aa Feb 13 20:00:32.210423 systemd-networkd[780]: eth0: Gained carrier Feb 13 20:00:32.210430 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:00:32.222164 unknown[687]: fetched base config from "system" Feb 13 20:00:32.222177 unknown[687]: fetched user config from "qemu" Feb 13 20:00:32.222577 ignition[687]: fetch-offline: fetch-offline passed Feb 13 20:00:32.222656 ignition[687]: Ignition finished successfully Feb 13 20:00:32.225325 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:00:32.226340 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.119/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 20:00:32.226841 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 20:00:32.236207 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Feb 13 20:00:32.251913 ignition[785]: Ignition 2.19.0 Feb 13 20:00:32.251925 ignition[785]: Stage: kargs Feb 13 20:00:32.252127 ignition[785]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:00:32.252138 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 20:00:32.255852 ignition[785]: kargs: kargs passed Feb 13 20:00:32.255899 ignition[785]: Ignition finished successfully Feb 13 20:00:32.260430 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 20:00:32.268295 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 20:00:32.280476 ignition[795]: Ignition 2.19.0 Feb 13 20:00:32.280488 ignition[795]: Stage: disks Feb 13 20:00:32.280650 ignition[795]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:00:32.280661 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 20:00:32.281503 ignition[795]: disks: disks passed Feb 13 20:00:32.281546 ignition[795]: Ignition finished successfully Feb 13 20:00:32.287259 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 20:00:32.289518 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 20:00:32.290712 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:00:32.292963 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:00:32.293997 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:00:32.296212 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:00:32.307203 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 20:00:32.318899 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 20:00:32.325462 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:00:32.339155 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:00:32.423095 kernel: EXT4-fs (vda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none. Feb 13 20:00:32.423278 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:00:32.425489 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:00:32.439166 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:00:32.441635 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:00:32.444106 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 20:00:32.444148 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:00:32.446017 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:00:32.448094 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (813) Feb 13 20:00:32.451212 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:00:32.451231 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:00:32.451242 kernel: BTRFS info (device vda6): using free space tree Feb 13 20:00:32.455089 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 20:00:32.456165 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 20:00:32.458085 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 20:00:32.461505 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 20:00:32.498080 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:00:32.503519 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:00:32.508461 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:00:32.512080 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:00:32.602157 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:00:32.612220 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:00:32.614258 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 20:00:32.620092 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:00:32.639940 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 20:00:32.641951 ignition[928]: INFO : Ignition 2.19.0 Feb 13 20:00:32.641951 ignition[928]: INFO : Stage: mount Feb 13 20:00:32.641951 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:00:32.641951 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 20:00:32.641951 ignition[928]: INFO : mount: mount passed Feb 13 20:00:32.641951 ignition[928]: INFO : Ignition finished successfully Feb 13 20:00:32.643873 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:00:32.655159 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:00:33.037317 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 20:00:33.092352 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:00:33.104102 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (941) Feb 13 20:00:33.106182 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:00:33.106209 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:00:33.106223 kernel: BTRFS info (device vda6): using free space tree Feb 13 20:00:33.109106 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 20:00:33.111256 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 20:00:33.141783 ignition[958]: INFO : Ignition 2.19.0 Feb 13 20:00:33.141783 ignition[958]: INFO : Stage: files Feb 13 20:00:33.148872 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:00:33.148872 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 20:00:33.148872 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:00:33.148872 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:00:33.148872 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:00:33.155335 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:00:33.155335 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:00:33.155335 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:00:33.155335 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:00:33.155335 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 20:00:33.151302 unknown[958]: wrote ssh authorized keys file for user: core Feb 13 20:00:33.192365 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 20:00:33.270896 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:00:33.273197 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:00:33.273197 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:00:33.273197 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:00:33.273197 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:00:33.273197 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:00:33.273197 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:00:33.273197 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:00:33.273197 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:00:33.273197 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:00:33.273197 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:00:33.273197 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:00:33.273197 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:00:33.273197 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:00:33.273197 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 20:00:33.642742 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 20:00:33.758235 systemd-networkd[780]: eth0: Gained IPv6LL Feb 13 20:00:33.960737 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:00:33.960737 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 20:00:33.964540 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:00:33.964540 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:00:33.964540 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 20:00:33.964540 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 13 20:00:33.964540 ignition[958]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 20:00:33.964540 ignition[958]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 20:00:33.964540 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 20:00:33.964540 ignition[958]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 20:00:33.986103 ignition[958]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 20:00:33.991422 ignition[958]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 20:00:33.993142 ignition[958]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 20:00:33.993142 ignition[958]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Feb 13 20:00:33.995951 ignition[958]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 20:00:33.997404 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:00:33.999459 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:00:34.001406 ignition[958]: INFO : files: files passed Feb 13 20:00:34.002318 ignition[958]: INFO : Ignition finished successfully Feb 13 20:00:34.006280 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:00:34.014292 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:00:34.016109 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Feb 13 20:00:34.022370 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 20:00:34.022591 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:00:34.026171 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 20:00:34.030055 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:00:34.030055 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:00:34.033192 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:00:34.037118 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:00:34.039805 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:00:34.049205 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:00:34.072548 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:00:34.073613 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 20:00:34.076320 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:00:34.078398 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 20:00:34.080463 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:00:34.095193 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:00:34.110903 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:00:34.129205 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:00:34.138105 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:00:34.140498 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:00:34.142903 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 20:00:34.144778 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:00:34.145807 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:00:34.148374 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:00:34.150458 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:00:34.152331 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:00:34.154548 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:00:34.156873 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:00:34.159156 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:00:34.161253 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:00:34.163740 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:00:34.165832 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:00:34.167900 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:00:34.169563 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:00:34.170569 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:00:34.172841 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Feb 13 20:00:34.175062 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:00:34.177424 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:00:34.178403 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:00:34.180971 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:00:34.182002 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:00:34.184247 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:00:34.185346 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:00:34.187729 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:00:34.189562 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:00:34.193156 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:00:34.195934 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:00:34.197811 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:00:34.199727 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:00:34.200636 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:00:34.202624 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:00:34.203529 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:00:34.205628 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:00:34.206831 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:00:34.209439 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:00:34.210446 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:00:34.226251 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:00:34.229384 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:00:34.231572 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:00:34.232967 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:00:34.236011 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:00:34.237404 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:00:34.242088 ignition[1012]: INFO : Ignition 2.19.0 Feb 13 20:00:34.243557 ignition[1012]: INFO : Stage: umount Feb 13 20:00:34.243557 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:00:34.243557 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 20:00:34.243557 ignition[1012]: INFO : umount: umount passed Feb 13 20:00:34.243557 ignition[1012]: INFO : Ignition finished successfully Feb 13 20:00:34.246466 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:00:34.246605 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:00:34.250387 systemd[1]: Stopped target network.target - Network. Feb 13 20:00:34.251857 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:00:34.251938 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:00:34.253793 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:00:34.253842 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:00:34.255899 systemd[1]: ignition-setup.service: Deactivated successfully. 
Feb 13 20:00:34.255967 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:00:34.258011 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:00:34.258066 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:00:34.260324 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:00:34.262308 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:00:34.265221 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:00:34.265777 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:00:34.265893 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:00:34.273127 systemd-networkd[780]: eth0: DHCPv6 lease lost Feb 13 20:00:34.276139 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:00:34.276319 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:00:34.279460 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:00:34.279628 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:00:34.282408 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:00:34.282474 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:00:34.293214 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:00:34.295247 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:00:34.295320 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:00:34.297566 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:00:34.297616 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:00:34.300040 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:00:34.300113 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:00:34.302089 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:00:34.302137 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:00:34.304418 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:00:34.315642 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:00:34.315788 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:00:34.320912 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:00:34.321122 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:00:34.323298 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:00:34.323346 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:00:34.325376 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:00:34.325413 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:00:34.327373 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:00:34.327421 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:00:34.329509 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:00:34.329556 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:00:34.331898 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Feb 13 20:00:34.331987 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:00:34.351232 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:00:34.352381 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:00:34.352452 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:00:34.354755 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 20:00:34.354818 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:00:34.357044 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:00:34.357114 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:00:34.359525 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:00:34.359588 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:00:34.362215 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:00:34.362350 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:00:34.434040 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:00:34.434197 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:00:34.436261 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:00:34.437933 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:00:34.437992 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:00:34.457200 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:00:34.464978 systemd[1]: Switching root. Feb 13 20:00:34.497862 systemd-journald[192]: Journal stopped Feb 13 20:00:35.647672 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Feb 13 20:00:35.647747 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:00:35.647774 kernel: SELinux: policy capability open_perms=1 Feb 13 20:00:35.647792 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:00:35.647810 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:00:35.647833 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:00:35.647844 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:00:35.647859 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:00:35.647871 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:00:35.647883 kernel: audit: type=1403 audit(1739476834.883:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:00:35.647901 systemd[1]: Successfully loaded SELinux policy in 40.742ms. Feb 13 20:00:35.647917 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.105ms. Feb 13 20:00:35.647930 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:00:35.647951 systemd[1]: Detected virtualization kvm. Feb 13 20:00:35.647963 systemd[1]: Detected architecture x86-64. Feb 13 20:00:35.647978 systemd[1]: Detected first boot. Feb 13 20:00:35.647989 systemd[1]: Initializing machine ID from VM UUID. 
Feb 13 20:00:35.648001 zram_generator::config[1057]: No configuration found. Feb 13 20:00:35.648014 systemd[1]: Populated /etc with preset unit settings. Feb 13 20:00:35.648026 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 20:00:35.648038 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 20:00:35.648050 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 20:00:35.648062 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:00:35.648097 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:00:35.648113 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:00:35.648124 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:00:35.648136 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:00:35.648148 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:00:35.648161 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:00:35.648173 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:00:35.648185 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:00:35.648197 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:00:35.648211 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:00:35.648230 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:00:35.648242 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:00:35.648254 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:00:35.648266 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 20:00:35.648278 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:00:35.648290 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 20:00:35.648302 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 20:00:35.648314 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 20:00:35.648328 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:00:35.648340 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:00:35.648353 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:00:35.648365 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:00:35.648377 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:00:35.648389 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:00:35.648724 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:00:35.648739 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:00:35.648754 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:00:35.648766 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:00:35.648778 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Feb 13 20:00:35.648790 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 20:00:35.648801 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:00:35.648813 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:00:35.648825 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:00:35.648837 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:00:35.648849 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:00:35.648865 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 20:00:35.648877 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:00:35.648889 systemd[1]: Reached target machines.target - Containers. Feb 13 20:00:35.648901 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:00:35.648913 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:00:35.648925 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:00:35.648936 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:00:35.648956 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:00:35.648971 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:00:35.648983 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:00:35.648995 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:00:35.649006 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:00:35.649019 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:00:35.649031 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 20:00:35.649042 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 20:00:35.649054 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 20:00:35.649066 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 20:00:35.649097 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:00:35.649109 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:00:35.649121 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:00:35.649132 kernel: fuse: init (API version 7.39) Feb 13 20:00:35.649145 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:00:35.649176 systemd-journald[1120]: Collecting audit messages is disabled. Feb 13 20:00:35.649198 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:00:35.649212 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 20:00:35.649224 systemd-journald[1120]: Journal started Feb 13 20:00:35.649245 systemd-journald[1120]: Runtime Journal (/run/log/journal/26ad58365bfc4f4a9adafa71e7c28e15) is 6.0M, max 48.4M, 42.3M free. Feb 13 20:00:35.421204 systemd[1]: Queued start job for default target multi-user.target. 
Feb 13 20:00:35.440422 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 20:00:35.440868 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 20:00:35.650129 systemd[1]: Stopped verity-setup.service. Feb 13 20:00:35.653103 kernel: loop: module loaded Feb 13 20:00:35.653134 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:00:35.657188 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:00:35.658877 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 20:00:35.660100 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:00:35.661424 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:00:35.662554 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:00:35.663801 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:00:35.665061 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:00:35.666353 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:00:35.667952 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:00:35.668147 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:00:35.669682 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:00:35.669857 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:00:35.671632 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:00:35.671845 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:00:35.673506 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:00:35.673692 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:00:35.675157 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:00:35.675314 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:00:35.677091 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:00:35.678643 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:00:35.680270 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:00:35.692578 kernel: ACPI: bus type drm_connector registered Feb 13 20:00:35.692961 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:00:35.693696 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:00:35.715680 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:00:35.724174 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:00:35.726525 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:00:35.727667 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 20:00:35.727695 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:00:35.729667 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:00:35.731997 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Feb 13 20:00:35.738239 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:00:35.739415 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:00:35.757298 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 20:00:35.760660 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:00:35.761969 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:00:35.764659 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:00:35.766043 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:00:35.767889 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:00:35.774358 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:00:35.778254 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:00:35.781578 systemd-journald[1120]: Time spent on flushing to /var/log/journal/26ad58365bfc4f4a9adafa71e7c28e15 is 14.224ms for 951 entries. Feb 13 20:00:35.781578 systemd-journald[1120]: System Journal (/var/log/journal/26ad58365bfc4f4a9adafa71e7c28e15) is 8.0M, max 195.6M, 187.6M free. Feb 13 20:00:36.097976 systemd-journald[1120]: Received client request to flush runtime journal. Feb 13 20:00:36.098039 kernel: loop0: detected capacity change from 0 to 210664 Feb 13 20:00:36.098083 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:00:36.098119 kernel: loop1: detected capacity change from 0 to 142488 Feb 13 20:00:36.098785 kernel: loop2: detected capacity change from 0 to 140768 Feb 13 20:00:36.098816 kernel: loop3: detected capacity change from 0 to 210664 Feb 13 20:00:36.098837 kernel: loop4: detected capacity change from 0 to 142488 Feb 13 20:00:36.098856 kernel: loop5: detected capacity change from 0 to 140768 Feb 13 20:00:35.781394 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:00:35.784011 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:00:35.785297 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:00:35.786746 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:00:35.797373 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:00:35.812409 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 20:00:35.848949 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:00:35.854464 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Feb 13 20:00:35.854478 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Feb 13 20:00:35.860640 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:00:36.033601 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:00:36.035084 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Feb 13 20:00:36.047216 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:00:36.083678 (sd-merge)[1186]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 20:00:36.084273 (sd-merge)[1186]: Merged extensions into '/usr'. Feb 13 20:00:36.089142 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:00:36.089151 systemd[1]: Reloading... Feb 13 20:00:36.153094 zram_generator::config[1215]: No configuration found. Feb 13 20:00:36.226403 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:00:36.267520 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:00:36.317484 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:00:36.318167 systemd[1]: Reloading finished in 228 ms. Feb 13 20:00:36.351320 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:00:36.352987 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:00:36.354459 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:00:36.355998 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:00:36.357544 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:00:36.374267 systemd[1]: Starting ensure-sysext.service... Feb 13 20:00:36.376497 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:00:36.382487 systemd[1]: Reloading requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:00:36.382503 systemd[1]: Reloading... Feb 13 20:00:36.438104 zram_generator::config[1284]: No configuration found. Feb 13 20:00:36.552147 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:00:36.601694 systemd[1]: Reloading finished in 218 ms. Feb 13 20:00:36.625566 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:00:36.639643 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:00:36.648664 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:00:36.651046 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:00:36.653715 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:00:36.653947 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:00:36.655043 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:00:36.659317 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:00:36.663288 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:00:36.664398 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
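The (sd-merge) lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images onto /usr; the kubernetes image is the .raw file Ignition linked into /etc/extensions earlier. A small sketch of enumerating the images systemd-sysext would consider (the search directories are the standard ones and are assumed here, not read from this log):

```python
from pathlib import Path

# Standard sysext search paths (assumed; distribution-specific paths may differ).
SEARCH_DIRS = [Path("/etc/extensions"), Path("/run/extensions"), Path("/var/lib/extensions")]

def candidate_sysexts():
    """Yield raw images and directory trees that could be merged into /usr and /opt."""
    for d in SEARCH_DIRS:
        if not d.is_dir():
            continue
        for entry in sorted(d.iterdir()):
            if entry.suffix == ".raw" or entry.is_dir():
                yield entry

if __name__ == "__main__":
    for image in candidate_sysexts():
        print(image)
```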
Feb 13 20:00:36.664507 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:00:36.665515 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:00:36.665807 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:00:36.667435 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:00:36.667608 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:00:36.670709 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:00:36.670986 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:00:36.673760 systemd-tmpfiles[1323]: ACLs are not supported, ignoring. Feb 13 20:00:36.673780 systemd-tmpfiles[1323]: ACLs are not supported, ignoring. Feb 13 20:00:36.675877 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:00:36.676143 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:00:36.681298 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:00:36.681600 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:00:36.682513 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:00:36.682783 systemd-tmpfiles[1324]: ACLs are not supported, ignoring. Feb 13 20:00:36.682858 systemd-tmpfiles[1324]: ACLs are not supported, ignoring. Feb 13 20:00:36.686158 systemd-tmpfiles[1324]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:00:36.686165 systemd-tmpfiles[1324]: Skipping /boot Feb 13 20:00:36.686387 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:00:36.688504 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:00:36.690642 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:00:36.691731 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:00:36.691900 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:00:36.693036 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:00:36.694939 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:00:36.695206 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:00:36.696815 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:00:36.696989 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:00:36.698628 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:00:36.698794 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:00:36.704195 systemd-tmpfiles[1324]: Detected autofs mount point /boot during canonicalization of boot. 
Feb 13 20:00:36.704208 systemd-tmpfiles[1324]: Skipping /boot Feb 13 20:00:36.705462 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:00:36.705700 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:00:36.712306 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:00:36.714368 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:00:36.717350 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:00:36.721495 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:00:36.722650 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:00:36.724409 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:00:36.725576 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:00:36.727004 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:00:36.727203 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:00:36.728789 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:00:36.728978 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:00:36.730687 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:00:36.737200 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:00:36.738967 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:00:36.740659 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:00:36.740838 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:00:36.744756 systemd[1]: Finished ensure-sysext.service. Feb 13 20:00:36.757269 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:00:36.759875 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:00:36.760003 systemd-udevd[1344]: Using default interface naming scheme 'v255'. Feb 13 20:00:36.762984 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:00:36.764279 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:00:36.764343 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:00:36.768447 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:00:36.773252 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 20:00:36.776257 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:00:36.780027 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:00:36.792332 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:00:36.796643 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 20:00:36.800681 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:00:36.802436 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:00:36.815262 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:00:36.832240 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:00:36.837348 augenrules[1394]: No rules Feb 13 20:00:36.834188 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:00:36.838421 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:00:36.856632 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 20:00:36.858459 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:00:36.860457 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:00:36.882100 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1386) Feb 13 20:00:36.913096 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 13 20:00:36.918086 kernel: ACPI: button: Power Button [PWRF] Feb 13 20:00:36.926209 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:00:36.934286 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:00:36.950125 systemd-networkd[1376]: lo: Link UP Feb 13 20:00:36.950138 systemd-networkd[1376]: lo: Gained carrier Feb 13 20:00:36.951690 systemd-networkd[1376]: Enumeration completed Feb 13 20:00:36.952815 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:00:36.954639 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:00:36.954644 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:00:36.955609 systemd-networkd[1376]: eth0: Link UP Feb 13 20:00:36.955656 systemd-networkd[1376]: eth0: Gained carrier Feb 13 20:00:36.955701 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:00:36.959124 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 20:00:36.965322 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:00:36.967195 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:00:36.973029 systemd-networkd[1376]: eth0: DHCPv4 address 10.0.0.119/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 20:00:36.976427 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:00:36.983087 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 20:00:36.985304 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 20:00:36.985479 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 20:00:36.990667 systemd-resolved[1361]: Positive Trust Anchors: Feb 13 20:00:36.990855 systemd-resolved[1361]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:00:36.990887 systemd-resolved[1361]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:00:36.993962 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 20:00:36.995389 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:00:38.188876 systemd-timesyncd[1362]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 20:00:38.188969 systemd-timesyncd[1362]: Initial clock synchronization to Thu 2025-02-13 20:00:38.188719 UTC. Feb 13 20:00:38.190711 systemd-resolved[1361]: Defaulting to hostname 'linux'. Feb 13 20:00:38.192730 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:00:38.195130 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 20:00:38.198716 systemd[1]: Reached target network.target - Network. Feb 13 20:00:38.198787 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:00:38.277883 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:00:38.288258 kernel: kvm_amd: TSC scaling supported Feb 13 20:00:38.288322 kernel: kvm_amd: Nested Virtualization enabled Feb 13 20:00:38.288337 kernel: kvm_amd: Nested Paging enabled Feb 13 20:00:38.288349 kernel: kvm_amd: LBR virtualization supported Feb 13 20:00:38.289395 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Feb 13 20:00:38.289419 kernel: kvm_amd: Virtual GIF supported Feb 13 20:00:38.308122 kernel: EDAC MC: Ver: 3.0.0 Feb 13 20:00:38.335455 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:00:38.348283 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:00:38.357143 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:00:38.386512 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:00:38.388120 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:00:38.389288 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:00:38.390481 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:00:38.391775 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:00:38.393281 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:00:38.394677 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:00:38.395978 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:00:38.397258 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:00:38.397284 systemd[1]: Reached target paths.target - Path Units. 
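Note the jump from 20:00:36 to 20:00:38 between consecutive entries above: it lines up with systemd-timesyncd's "Initial clock synchronization", which suggests the system clock was stepped forward at that point. A quick check of the step size from the two journal timestamps (values copied from the lines above):

```python
from datetime import datetime

# Timestamps copied from the journal lines above (same date, so time-only is enough).
before = datetime.strptime("20:00:36.995389", "%H:%M:%S.%f")  # Reached target time-set
after = datetime.strptime("20:00:38.188876", "%H:%M:%S.%f")   # Contacted time server
print(after - before)  # ~0:00:01.193487
```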
Feb 13 20:00:38.398241 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:00:38.399912 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:00:38.402663 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:00:38.412755 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:00:38.415071 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:00:38.416631 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:00:38.417866 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:00:38.418892 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:00:38.419946 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:00:38.419975 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:00:38.420965 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:00:38.423242 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:00:38.425208 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:00:38.428189 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:00:38.431676 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:00:38.434262 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:00:38.435275 jq[1442]: false Feb 13 20:00:38.436196 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:00:38.438539 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:00:38.444252 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:00:38.451248 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:00:38.456299 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:00:38.456430 extend-filesystems[1443]: Found loop3 Feb 13 20:00:38.458058 extend-filesystems[1443]: Found loop4 Feb 13 20:00:38.458058 extend-filesystems[1443]: Found loop5 Feb 13 20:00:38.458058 extend-filesystems[1443]: Found sr0 Feb 13 20:00:38.458058 extend-filesystems[1443]: Found vda Feb 13 20:00:38.458058 extend-filesystems[1443]: Found vda1 Feb 13 20:00:38.458058 extend-filesystems[1443]: Found vda2 Feb 13 20:00:38.458058 extend-filesystems[1443]: Found vda3 Feb 13 20:00:38.458058 extend-filesystems[1443]: Found usr Feb 13 20:00:38.458058 extend-filesystems[1443]: Found vda4 Feb 13 20:00:38.458058 extend-filesystems[1443]: Found vda6 Feb 13 20:00:38.458058 extend-filesystems[1443]: Found vda7 Feb 13 20:00:38.458058 extend-filesystems[1443]: Found vda9 Feb 13 20:00:38.458058 extend-filesystems[1443]: Checking size of /dev/vda9 Feb 13 20:00:38.492571 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 20:00:38.457715 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Feb 13 20:00:38.492714 extend-filesystems[1443]: Resized partition /dev/vda9 Feb 13 20:00:38.465838 dbus-daemon[1441]: [system] SELinux support is enabled Feb 13 20:00:38.458178 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:00:38.494253 extend-filesystems[1463]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:00:38.460106 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:00:38.463012 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:00:38.495569 jq[1458]: true Feb 13 20:00:38.465487 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:00:38.466859 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:00:38.474085 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:00:38.475407 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:00:38.475748 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:00:38.475956 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:00:38.479857 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:00:38.480415 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:00:38.494067 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:00:38.496458 jq[1467]: true Feb 13 20:00:38.504985 update_engine[1456]: I20250213 20:00:38.504653 1456 main.cc:92] Flatcar Update Engine starting Feb 13 20:00:38.515368 update_engine[1456]: I20250213 20:00:38.506081 1456 update_check_scheduler.cc:74] Next update check in 5m11s Feb 13 20:00:38.518636 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:00:38.527178 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:00:38.527215 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:00:38.529189 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:00:38.529208 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:00:38.533121 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1388) Feb 13 20:00:38.535733 tar[1465]: linux-amd64/helm Feb 13 20:00:38.563713 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 20:00:38.543247 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:00:38.564361 systemd-logind[1454]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 20:00:38.564388 systemd-logind[1454]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 20:00:38.565843 extend-filesystems[1463]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 20:00:38.565843 extend-filesystems[1463]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 20:00:38.565843 extend-filesystems[1463]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
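resize2fs grows /dev/vda9 from 553472 to 1864699 blocks at the 4 KiB ext4 block size reported above, i.e. from roughly 2.1 GiB to roughly 7.1 GiB. The same numbers, checked directly:

```python
# Block counts taken from the resize2fs/EXT4-fs lines above; 4 KiB ext4 blocks.
BLOCK = 4096
for label, blocks in [("before", 553_472), ("after", 1_864_699)]:
    size_gib = blocks * BLOCK / 2**30
    print(f"{label}: {blocks} blocks x {BLOCK} B = {size_gib:.2f} GiB")
# before: 2.11 GiB, after: 7.11 GiB
```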
Feb 13 20:00:38.565789 systemd-logind[1454]: New seat seat0. Feb 13 20:00:38.581629 extend-filesystems[1443]: Resized filesystem in /dev/vda9 Feb 13 20:00:38.567796 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:00:38.568020 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:00:38.575386 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:00:38.584765 sshd_keygen[1469]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:00:38.585567 locksmithd[1493]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:00:38.586433 bash[1494]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:00:38.587408 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:00:38.592793 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 20:00:38.610236 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:00:38.620326 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:00:38.626717 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:00:38.626964 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:00:38.630852 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:00:38.647073 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:00:38.655343 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:00:38.657417 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 20:00:38.659070 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:00:38.699290 containerd[1466]: time="2025-02-13T20:00:38.699067554Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:00:38.722154 containerd[1466]: time="2025-02-13T20:00:38.722119627Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:00:38.724142 containerd[1466]: time="2025-02-13T20:00:38.723883464Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:00:38.724142 containerd[1466]: time="2025-02-13T20:00:38.723918049Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:00:38.724142 containerd[1466]: time="2025-02-13T20:00:38.723934330Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:00:38.724142 containerd[1466]: time="2025-02-13T20:00:38.724128053Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:00:38.724142 containerd[1466]: time="2025-02-13T20:00:38.724145195Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:00:38.724343 containerd[1466]: time="2025-02-13T20:00:38.724219154Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:00:38.724343 containerd[1466]: time="2025-02-13T20:00:38.724232509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:00:38.724469 containerd[1466]: time="2025-02-13T20:00:38.724440639Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:00:38.724469 containerd[1466]: time="2025-02-13T20:00:38.724460556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:00:38.724518 containerd[1466]: time="2025-02-13T20:00:38.724473481Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:00:38.724518 containerd[1466]: time="2025-02-13T20:00:38.724485052Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:00:38.724601 containerd[1466]: time="2025-02-13T20:00:38.724578307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:00:38.724851 containerd[1466]: time="2025-02-13T20:00:38.724824970Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:00:38.725046 containerd[1466]: time="2025-02-13T20:00:38.724958971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:00:38.725046 containerd[1466]: time="2025-02-13T20:00:38.724977085Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:00:38.725090 containerd[1466]: time="2025-02-13T20:00:38.725071602Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:00:38.725184 containerd[1466]: time="2025-02-13T20:00:38.725158194Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:00:38.731677 containerd[1466]: time="2025-02-13T20:00:38.731628937Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:00:38.731733 containerd[1466]: time="2025-02-13T20:00:38.731688369Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:00:38.731733 containerd[1466]: time="2025-02-13T20:00:38.731707885Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:00:38.731733 containerd[1466]: time="2025-02-13T20:00:38.731724206Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:00:38.731802 containerd[1466]: time="2025-02-13T20:00:38.731739444Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:00:38.731901 containerd[1466]: time="2025-02-13T20:00:38.731870570Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Feb 13 20:00:38.732155 containerd[1466]: time="2025-02-13T20:00:38.732124496Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:00:38.732260 containerd[1466]: time="2025-02-13T20:00:38.732232018Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:00:38.732260 containerd[1466]: time="2025-02-13T20:00:38.732252296Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:00:38.732309 containerd[1466]: time="2025-02-13T20:00:38.732264960Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:00:38.732309 containerd[1466]: time="2025-02-13T20:00:38.732279327Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:00:38.732309 containerd[1466]: time="2025-02-13T20:00:38.732293273Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:00:38.732309 containerd[1466]: time="2025-02-13T20:00:38.732306938Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:00:38.732377 containerd[1466]: time="2025-02-13T20:00:38.732321967Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:00:38.732377 containerd[1466]: time="2025-02-13T20:00:38.732339149Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:00:38.732377 containerd[1466]: time="2025-02-13T20:00:38.732354087Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:00:38.732377 containerd[1466]: time="2025-02-13T20:00:38.732373714Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:00:38.732450 containerd[1466]: time="2025-02-13T20:00:38.732386327Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:00:38.732450 containerd[1466]: time="2025-02-13T20:00:38.732405643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:00:38.732450 containerd[1466]: time="2025-02-13T20:00:38.732418598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:00:38.732450 containerd[1466]: time="2025-02-13T20:00:38.732431211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:00:38.732450 containerd[1466]: time="2025-02-13T20:00:38.732443885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:00:38.732544 containerd[1466]: time="2025-02-13T20:00:38.732456318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:00:38.732544 containerd[1466]: time="2025-02-13T20:00:38.732471286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:00:38.732544 containerd[1466]: time="2025-02-13T20:00:38.732482447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 13 20:00:38.732544 containerd[1466]: time="2025-02-13T20:00:38.732494290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:00:38.732544 containerd[1466]: time="2025-02-13T20:00:38.732506913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:00:38.732544 containerd[1466]: time="2025-02-13T20:00:38.732524306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:00:38.732544 containerd[1466]: time="2025-02-13T20:00:38.732535848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:00:38.732544 containerd[1466]: time="2025-02-13T20:00:38.732546858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:00:38.732685 containerd[1466]: time="2025-02-13T20:00:38.732560313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:00:38.732685 containerd[1466]: time="2025-02-13T20:00:38.732575442Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:00:38.732685 containerd[1466]: time="2025-02-13T20:00:38.732593856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:00:38.732685 containerd[1466]: time="2025-02-13T20:00:38.732605378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:00:38.732685 containerd[1466]: time="2025-02-13T20:00:38.732616569Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:00:38.732685 containerd[1466]: time="2025-02-13T20:00:38.732661894Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:00:38.732685 containerd[1466]: time="2025-02-13T20:00:38.732678896Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:00:38.732685 containerd[1466]: time="2025-02-13T20:00:38.732690477Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:00:38.732830 containerd[1466]: time="2025-02-13T20:00:38.732703251Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:00:38.732830 containerd[1466]: time="2025-02-13T20:00:38.732714042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:00:38.732830 containerd[1466]: time="2025-02-13T20:00:38.732726365Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:00:38.732830 containerd[1466]: time="2025-02-13T20:00:38.732736534Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:00:38.732830 containerd[1466]: time="2025-02-13T20:00:38.732747675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 20:00:38.733071 containerd[1466]: time="2025-02-13T20:00:38.733009656Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:00:38.733071 containerd[1466]: time="2025-02-13T20:00:38.733066743Z" level=info msg="Connect containerd service" Feb 13 20:00:38.733237 containerd[1466]: time="2025-02-13T20:00:38.733120464Z" level=info msg="using legacy CRI server" Feb 13 20:00:38.733237 containerd[1466]: time="2025-02-13T20:00:38.733128910Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:00:38.733281 containerd[1466]: time="2025-02-13T20:00:38.733244206Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:00:38.733861 containerd[1466]: time="2025-02-13T20:00:38.733824955Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:00:38.734168 
containerd[1466]: time="2025-02-13T20:00:38.734071307Z" level=info msg="Start subscribing containerd event" Feb 13 20:00:38.734236 containerd[1466]: time="2025-02-13T20:00:38.734204687Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:00:38.734281 containerd[1466]: time="2025-02-13T20:00:38.734262355Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:00:38.734352 containerd[1466]: time="2025-02-13T20:00:38.734322067Z" level=info msg="Start recovering state" Feb 13 20:00:38.734412 containerd[1466]: time="2025-02-13T20:00:38.734393340Z" level=info msg="Start event monitor" Feb 13 20:00:38.734445 containerd[1466]: time="2025-02-13T20:00:38.734420571Z" level=info msg="Start snapshots syncer" Feb 13 20:00:38.734445 containerd[1466]: time="2025-02-13T20:00:38.734434518Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:00:38.734445 containerd[1466]: time="2025-02-13T20:00:38.734443254Z" level=info msg="Start streaming server" Feb 13 20:00:38.734585 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:00:38.738268 containerd[1466]: time="2025-02-13T20:00:38.738236486Z" level=info msg="containerd successfully booted in 0.041280s" Feb 13 20:00:38.783351 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:00:38.785707 systemd[1]: Started sshd@0-10.0.0.119:22-10.0.0.1:41108.service - OpenSSH per-connection server daemon (10.0.0.1:41108). Feb 13 20:00:38.828401 sshd[1531]: Accepted publickey for core from 10.0.0.1 port 41108 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:00:38.830747 sshd[1531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:38.838704 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:00:38.845314 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:00:38.848950 systemd-logind[1454]: New session 1 of user core. Feb 13 20:00:38.858195 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:00:38.872337 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:00:38.876018 (systemd)[1535]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:00:38.932080 tar[1465]: linux-amd64/LICENSE Feb 13 20:00:38.932172 tar[1465]: linux-amd64/README.md Feb 13 20:00:38.948321 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:00:38.994204 systemd[1535]: Queued start job for default target default.target. Feb 13 20:00:39.006467 systemd[1535]: Created slice app.slice - User Application Slice. Feb 13 20:00:39.006494 systemd[1535]: Reached target paths.target - Paths. Feb 13 20:00:39.006508 systemd[1535]: Reached target timers.target - Timers. Feb 13 20:00:39.008185 systemd[1535]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:00:39.019080 systemd[1535]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:00:39.019292 systemd[1535]: Reached target sockets.target - Sockets. Feb 13 20:00:39.019311 systemd[1535]: Reached target basic.target - Basic System. Feb 13 20:00:39.019346 systemd[1535]: Reached target default.target - Main User Target. Feb 13 20:00:39.019387 systemd[1535]: Startup finished in 136ms. Feb 13 20:00:39.019996 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:00:39.022833 systemd[1]: Started session-1.scope - Session 1 of User core. 
Feb 13 20:00:39.088201 systemd[1]: Started sshd@1-10.0.0.119:22-10.0.0.1:41110.service - OpenSSH per-connection server daemon (10.0.0.1:41110). Feb 13 20:00:39.123831 sshd[1549]: Accepted publickey for core from 10.0.0.1 port 41110 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:00:39.125383 sshd[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:39.129392 systemd-logind[1454]: New session 2 of user core. Feb 13 20:00:39.139239 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:00:39.194517 sshd[1549]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:39.208827 systemd[1]: sshd@1-10.0.0.119:22-10.0.0.1:41110.service: Deactivated successfully. Feb 13 20:00:39.210565 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:00:39.212106 systemd-logind[1454]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:00:39.226401 systemd[1]: Started sshd@2-10.0.0.119:22-10.0.0.1:41122.service - OpenSSH per-connection server daemon (10.0.0.1:41122). Feb 13 20:00:39.228599 systemd-logind[1454]: Removed session 2. Feb 13 20:00:39.256720 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 41122 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:00:39.258213 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:39.261933 systemd-logind[1454]: New session 3 of user core. Feb 13 20:00:39.270213 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:00:39.325412 sshd[1556]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:39.329050 systemd[1]: sshd@2-10.0.0.119:22-10.0.0.1:41122.service: Deactivated successfully. Feb 13 20:00:39.330827 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:00:39.331506 systemd-logind[1454]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:00:39.332342 systemd-logind[1454]: Removed session 3. Feb 13 20:00:39.429235 systemd-networkd[1376]: eth0: Gained IPv6LL Feb 13 20:00:39.432488 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:00:39.434566 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:00:39.445316 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 20:00:39.448039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:00:39.450492 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:00:39.468934 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 20:00:39.469230 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 20:00:39.471348 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:00:39.473595 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:00:40.081185 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:00:40.083113 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:00:40.084621 systemd[1]: Startup finished in 684ms (kernel) + 5.193s (initrd) + 4.049s (userspace) = 9.927s. 
Feb 13 20:00:40.097184 (kubelet)[1584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:00:40.540550 kubelet[1584]: E0213 20:00:40.540422 1584 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:00:40.545030 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:00:40.545258 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:00:49.335504 systemd[1]: Started sshd@3-10.0.0.119:22-10.0.0.1:58586.service - OpenSSH per-connection server daemon (10.0.0.1:58586). Feb 13 20:00:49.369817 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 58586 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:00:49.371212 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:49.374704 systemd-logind[1454]: New session 4 of user core. Feb 13 20:00:49.388214 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:00:49.442757 sshd[1598]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:49.458431 systemd[1]: sshd@3-10.0.0.119:22-10.0.0.1:58586.service: Deactivated successfully. Feb 13 20:00:49.460727 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:00:49.462400 systemd-logind[1454]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:00:49.463951 systemd[1]: Started sshd@4-10.0.0.119:22-10.0.0.1:58596.service - OpenSSH per-connection server daemon (10.0.0.1:58596). Feb 13 20:00:49.465057 systemd-logind[1454]: Removed session 4. Feb 13 20:00:49.498882 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 58596 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:00:49.500425 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:49.504328 systemd-logind[1454]: New session 5 of user core. Feb 13 20:00:49.518211 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:00:49.567331 sshd[1605]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:49.581542 systemd[1]: sshd@4-10.0.0.119:22-10.0.0.1:58596.service: Deactivated successfully. Feb 13 20:00:49.583077 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:00:49.584714 systemd-logind[1454]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:00:49.585895 systemd[1]: Started sshd@5-10.0.0.119:22-10.0.0.1:58602.service - OpenSSH per-connection server daemon (10.0.0.1:58602). Feb 13 20:00:49.586617 systemd-logind[1454]: Removed session 5. Feb 13 20:00:49.619077 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 58602 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:00:49.620503 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:49.623996 systemd-logind[1454]: New session 6 of user core. Feb 13 20:00:49.633209 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:00:49.687732 sshd[1612]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:49.694781 systemd[1]: sshd@5-10.0.0.119:22-10.0.0.1:58602.service: Deactivated successfully. 
Feb 13 20:00:49.696615 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:00:49.698322 systemd-logind[1454]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:00:49.709390 systemd[1]: Started sshd@6-10.0.0.119:22-10.0.0.1:58616.service - OpenSSH per-connection server daemon (10.0.0.1:58616). Feb 13 20:00:49.710300 systemd-logind[1454]: Removed session 6. Feb 13 20:00:49.737665 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 58616 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:00:49.739384 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:49.743558 systemd-logind[1454]: New session 7 of user core. Feb 13 20:00:49.753218 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:00:50.324400 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 20:00:50.324767 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:00:50.340142 sudo[1622]: pam_unix(sudo:session): session closed for user root Feb 13 20:00:50.342216 sshd[1619]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:50.361993 systemd[1]: sshd@6-10.0.0.119:22-10.0.0.1:58616.service: Deactivated successfully. Feb 13 20:00:50.363733 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:00:50.365416 systemd-logind[1454]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:00:50.366718 systemd[1]: Started sshd@7-10.0.0.119:22-10.0.0.1:58622.service - OpenSSH per-connection server daemon (10.0.0.1:58622). Feb 13 20:00:50.367395 systemd-logind[1454]: Removed session 7. Feb 13 20:00:50.400626 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 58622 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:00:50.402043 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:50.405628 systemd-logind[1454]: New session 8 of user core. Feb 13 20:00:50.415206 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 20:00:50.467929 sudo[1631]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 20:00:50.468261 sudo[1631]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:00:50.471871 sudo[1631]: pam_unix(sudo:session): session closed for user root Feb 13 20:00:50.477834 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 20:00:50.478174 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:00:50.497308 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 20:00:50.498866 auditctl[1634]: No rules Feb 13 20:00:50.500167 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 20:00:50.500401 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 20:00:50.502081 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:00:50.531189 augenrules[1652]: No rules Feb 13 20:00:50.532240 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:00:50.533591 sudo[1630]: pam_unix(sudo:session): session closed for user root Feb 13 20:00:50.535503 sshd[1627]: pam_unix(sshd:session): session closed for user core Feb 13 20:00:50.548944 systemd[1]: sshd@7-10.0.0.119:22-10.0.0.1:58622.service: Deactivated successfully. 
Feb 13 20:00:50.550810 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:00:50.551643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:00:50.552045 systemd-logind[1454]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:00:50.563280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:00:50.564273 systemd[1]: Started sshd@8-10.0.0.119:22-10.0.0.1:58638.service - OpenSSH per-connection server daemon (10.0.0.1:58638). Feb 13 20:00:50.564894 systemd-logind[1454]: Removed session 8. Feb 13 20:00:50.597897 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 58638 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:00:50.599308 sshd[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:00:50.602965 systemd-logind[1454]: New session 9 of user core. Feb 13 20:00:50.612202 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 20:00:50.664825 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:00:50.665228 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:00:50.745564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:00:50.749656 (kubelet)[1681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:00:50.792021 kubelet[1681]: E0213 20:00:50.791938 1681 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:00:50.799523 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:00:50.799744 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:00:50.960588 (dockerd)[1700]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:00:50.960610 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 20:00:51.223784 dockerd[1700]: time="2025-02-13T20:00:51.223647809Z" level=info msg="Starting up" Feb 13 20:00:51.563848 systemd[1]: var-lib-docker-metacopy\x2dcheck895095809-merged.mount: Deactivated successfully. Feb 13 20:00:51.590964 dockerd[1700]: time="2025-02-13T20:00:51.590916498Z" level=info msg="Loading containers: start." Feb 13 20:00:51.704143 kernel: Initializing XFRM netlink socket Feb 13 20:00:51.793365 systemd-networkd[1376]: docker0: Link UP Feb 13 20:00:51.821648 dockerd[1700]: time="2025-02-13T20:00:51.821523766Z" level=info msg="Loading containers: done." Feb 13 20:00:51.835759 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2553409011-merged.mount: Deactivated successfully. 
Feb 13 20:00:51.839063 dockerd[1700]: time="2025-02-13T20:00:51.839005204Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:00:51.839160 dockerd[1700]: time="2025-02-13T20:00:51.839140518Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:00:51.839302 dockerd[1700]: time="2025-02-13T20:00:51.839272375Z" level=info msg="Daemon has completed initialization" Feb 13 20:00:51.884440 dockerd[1700]: time="2025-02-13T20:00:51.884296175Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:00:51.884593 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:00:52.578859 containerd[1466]: time="2025-02-13T20:00:52.578640791Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 20:00:54.672927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3465551951.mount: Deactivated successfully. Feb 13 20:00:55.709650 containerd[1466]: time="2025-02-13T20:00:55.709593546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:00:55.710301 containerd[1466]: time="2025-02-13T20:00:55.710236051Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678214" Feb 13 20:00:55.711376 containerd[1466]: time="2025-02-13T20:00:55.711325694Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:00:55.714065 containerd[1466]: time="2025-02-13T20:00:55.714027700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:00:55.715491 containerd[1466]: time="2025-02-13T20:00:55.715456058Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 3.136767198s" Feb 13 20:00:55.715530 containerd[1466]: time="2025-02-13T20:00:55.715490313Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\"" Feb 13 20:00:55.737928 containerd[1466]: time="2025-02-13T20:00:55.737879032Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 20:00:57.703300 containerd[1466]: time="2025-02-13T20:00:57.703227079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:00:57.704159 containerd[1466]: time="2025-02-13T20:00:57.704084907Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611545" Feb 13 20:00:57.705343 containerd[1466]: time="2025-02-13T20:00:57.705302540Z" level=info msg="ImageCreate event 
name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:00:57.708298 containerd[1466]: time="2025-02-13T20:00:57.708257130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:00:57.709530 containerd[1466]: time="2025-02-13T20:00:57.709503056Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 1.971591534s" Feb 13 20:00:57.709586 containerd[1466]: time="2025-02-13T20:00:57.709531469Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\"" Feb 13 20:00:57.737811 containerd[1466]: time="2025-02-13T20:00:57.737766770Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 20:00:58.867399 containerd[1466]: time="2025-02-13T20:00:58.867317107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:00:58.870281 containerd[1466]: time="2025-02-13T20:00:58.870208889Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782130" Feb 13 20:00:58.871868 containerd[1466]: time="2025-02-13T20:00:58.871809270Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:00:58.874461 containerd[1466]: time="2025-02-13T20:00:58.874420265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:00:58.875691 containerd[1466]: time="2025-02-13T20:00:58.875529525Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 1.137704846s" Feb 13 20:00:58.875691 containerd[1466]: time="2025-02-13T20:00:58.875586763Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\"" Feb 13 20:00:58.898311 containerd[1466]: time="2025-02-13T20:00:58.898233646Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 20:00:59.902523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount924979374.mount: Deactivated successfully. 
Feb 13 20:01:00.970375 containerd[1466]: time="2025-02-13T20:01:00.970312753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:01:00.999681 containerd[1466]: time="2025-02-13T20:01:00.999625585Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057858" Feb 13 20:01:01.018994 containerd[1466]: time="2025-02-13T20:01:01.018940360Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:01:01.034874 containerd[1466]: time="2025-02-13T20:01:01.034827719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:01:01.035552 containerd[1466]: time="2025-02-13T20:01:01.035497455Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 2.13722163s" Feb 13 20:01:01.035621 containerd[1466]: time="2025-02-13T20:01:01.035552067Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 20:01:01.049931 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 20:01:01.058314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:01:01.059904 containerd[1466]: time="2025-02-13T20:01:01.059803829Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 20:01:01.197585 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:01:01.201912 (kubelet)[1958]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:01:01.661785 kubelet[1958]: E0213 20:01:01.661646 1958 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:01:01.666008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:01:01.666227 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:01:03.580139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount81648430.mount: Deactivated successfully. 
Feb 13 20:01:09.378188 containerd[1466]: time="2025-02-13T20:01:09.378083341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:01:09.397743 containerd[1466]: time="2025-02-13T20:01:09.397675106Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 20:01:09.418174 containerd[1466]: time="2025-02-13T20:01:09.418145938Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:01:09.447490 containerd[1466]: time="2025-02-13T20:01:09.447425958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:01:09.448566 containerd[1466]: time="2025-02-13T20:01:09.448532193Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 8.388695752s" Feb 13 20:01:09.448566 containerd[1466]: time="2025-02-13T20:01:09.448565255Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 20:01:09.470155 containerd[1466]: time="2025-02-13T20:01:09.470118076Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 20:01:10.977087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2046955333.mount: Deactivated successfully. 
Feb 13 20:01:11.198910 containerd[1466]: time="2025-02-13T20:01:11.198847582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:01:11.226337 containerd[1466]: time="2025-02-13T20:01:11.226281856Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Feb 13 20:01:11.274745 containerd[1466]: time="2025-02-13T20:01:11.274650372Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:01:11.310379 containerd[1466]: time="2025-02-13T20:01:11.310330582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:01:11.311232 containerd[1466]: time="2025-02-13T20:01:11.311192124Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.841045584s" Feb 13 20:01:11.311232 containerd[1466]: time="2025-02-13T20:01:11.311228554Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 20:01:11.333379 containerd[1466]: time="2025-02-13T20:01:11.333332049Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 20:01:11.916457 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 20:01:11.926250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:01:12.061735 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:01:12.066065 (kubelet)[2038]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:01:12.341542 kubelet[2038]: E0213 20:01:12.341383 2038 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:01:12.345541 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:01:12.345760 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:01:13.919748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1111212213.mount: Deactivated successfully. 
Feb 13 20:01:15.513158 containerd[1466]: time="2025-02-13T20:01:15.513060798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:01:15.513998 containerd[1466]: time="2025-02-13T20:01:15.513941896Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Feb 13 20:01:15.515185 containerd[1466]: time="2025-02-13T20:01:15.515151554Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:01:15.518044 containerd[1466]: time="2025-02-13T20:01:15.517990966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:01:15.519187 containerd[1466]: time="2025-02-13T20:01:15.519145228Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.185778763s" Feb 13 20:01:15.519187 containerd[1466]: time="2025-02-13T20:01:15.519179393Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Feb 13 20:01:18.038914 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:01:18.055371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:01:18.073212 systemd[1]: Reloading requested from client PID 2176 ('systemctl') (unit session-9.scope)... Feb 13 20:01:18.073231 systemd[1]: Reloading... Feb 13 20:01:18.163126 zram_generator::config[2218]: No configuration found. Feb 13 20:01:18.551210 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:01:18.634120 systemd[1]: Reloading finished in 560 ms. Feb 13 20:01:18.679290 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 20:01:18.679387 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 20:01:18.679681 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:01:18.681460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:01:18.837576 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:01:18.842237 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:01:18.880413 kubelet[2263]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:01:18.880413 kubelet[2263]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Feb 13 20:01:18.880413 kubelet[2263]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:01:18.881467 kubelet[2263]: I0213 20:01:18.881424 2263 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:01:19.238390 kubelet[2263]: I0213 20:01:19.238257 2263 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:01:19.238390 kubelet[2263]: I0213 20:01:19.238295 2263 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:01:19.238537 kubelet[2263]: I0213 20:01:19.238522 2263 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:01:19.255537 kubelet[2263]: E0213 20:01:19.255495 2263 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.119:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:19.255699 kubelet[2263]: I0213 20:01:19.255657 2263 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:01:19.270920 kubelet[2263]: I0213 20:01:19.270873 2263 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:01:19.272392 kubelet[2263]: I0213 20:01:19.272341 2263 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:01:19.272615 kubelet[2263]: I0213 20:01:19.272389 2263 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:01:19.273257 kubelet[2263]: I0213 20:01:19.273232 2263 topology_manager.go:138] "Creating topology manager 
with none policy" Feb 13 20:01:19.273257 kubelet[2263]: I0213 20:01:19.273255 2263 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:01:19.273452 kubelet[2263]: I0213 20:01:19.273428 2263 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:01:19.274241 kubelet[2263]: I0213 20:01:19.274217 2263 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:01:19.274273 kubelet[2263]: I0213 20:01:19.274251 2263 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:01:19.274301 kubelet[2263]: I0213 20:01:19.274279 2263 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:01:19.274326 kubelet[2263]: I0213 20:01:19.274311 2263 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:01:19.274826 kubelet[2263]: W0213 20:01:19.274771 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:19.274872 kubelet[2263]: E0213 20:01:19.274831 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:19.276544 kubelet[2263]: W0213 20:01:19.276495 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:19.276544 kubelet[2263]: E0213 20:01:19.276535 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:19.278893 kubelet[2263]: I0213 20:01:19.278870 2263 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:01:19.280427 kubelet[2263]: I0213 20:01:19.280392 2263 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:01:19.280469 kubelet[2263]: W0213 20:01:19.280458 2263 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 20:01:19.281415 kubelet[2263]: I0213 20:01:19.281301 2263 server.go:1264] "Started kubelet" Feb 13 20:01:19.281415 kubelet[2263]: I0213 20:01:19.281375 2263 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:01:19.281415 kubelet[2263]: I0213 20:01:19.281376 2263 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:01:19.281930 kubelet[2263]: I0213 20:01:19.281748 2263 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:01:19.282411 kubelet[2263]: I0213 20:01:19.282387 2263 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:01:19.283756 kubelet[2263]: I0213 20:01:19.283454 2263 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:01:19.285562 kubelet[2263]: E0213 20:01:19.285208 2263 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:01:19.285562 kubelet[2263]: I0213 20:01:19.285259 2263 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:01:19.285562 kubelet[2263]: I0213 20:01:19.285370 2263 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:01:19.285562 kubelet[2263]: I0213 20:01:19.285426 2263 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:01:19.285681 kubelet[2263]: E0213 20:01:19.285553 2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="200ms" Feb 13 20:01:19.286914 kubelet[2263]: W0213 20:01:19.286347 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:19.286914 kubelet[2263]: E0213 20:01:19.286395 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:19.286914 kubelet[2263]: I0213 20:01:19.286748 2263 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:01:19.286914 kubelet[2263]: I0213 20:01:19.286826 2263 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:01:19.287817 kubelet[2263]: E0213 20:01:19.287434 2263 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:01:19.288037 kubelet[2263]: I0213 20:01:19.288016 2263 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:01:19.290190 kubelet[2263]: E0213 20:01:19.289789 2263 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.119:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.119:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dd016c90d833 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:01:19.281281075 +0000 UTC m=+0.435063207,LastTimestamp:2025-02-13 20:01:19.281281075 +0000 UTC m=+0.435063207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 20:01:19.305321 kubelet[2263]: I0213 20:01:19.305275 2263 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:01:19.305321 kubelet[2263]: I0213 20:01:19.305304 2263 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:01:19.305321 kubelet[2263]: I0213 20:01:19.305296 2263 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:01:19.305321 kubelet[2263]: I0213 20:01:19.305324 2263 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:01:19.306710 kubelet[2263]: I0213 20:01:19.306682 2263 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:01:19.307030 kubelet[2263]: I0213 20:01:19.306715 2263 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:01:19.307030 kubelet[2263]: I0213 20:01:19.306737 2263 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:01:19.307030 kubelet[2263]: E0213 20:01:19.306777 2263 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:01:19.307389 kubelet[2263]: W0213 20:01:19.307340 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:19.307420 kubelet[2263]: E0213 20:01:19.307391 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:19.387468 kubelet[2263]: I0213 20:01:19.387421 2263 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:01:19.387820 kubelet[2263]: E0213 20:01:19.387771 2263 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Feb 13 20:01:19.406938 kubelet[2263]: E0213 20:01:19.406865 2263 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 20:01:19.486697 kubelet[2263]: E0213 20:01:19.486629 2263 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="400ms" Feb 13 20:01:19.589609 kubelet[2263]: I0213 20:01:19.589421 2263 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:01:19.589864 kubelet[2263]: E0213 20:01:19.589831 2263 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Feb 13 20:01:19.608038 kubelet[2263]: E0213 20:01:19.607978 2263 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 20:01:19.888177 kubelet[2263]: E0213 20:01:19.887964 2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="800ms" Feb 13 20:01:19.991749 kubelet[2263]: I0213 20:01:19.991703 2263 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:01:19.992201 kubelet[2263]: E0213 20:01:19.992133 2263 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Feb 13 20:01:20.008193 kubelet[2263]: E0213 20:01:20.008143 2263 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 20:01:20.012205 kubelet[2263]: I0213 20:01:20.012162 2263 policy_none.go:49] "None policy: Start" Feb 13 20:01:20.012861 kubelet[2263]: I0213 20:01:20.012830 2263 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:01:20.012861 kubelet[2263]: I0213 20:01:20.012874 2263 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:01:20.042301 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 20:01:20.060466 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:01:20.063651 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 20:01:20.086235 kubelet[2263]: I0213 20:01:20.086160 2263 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:01:20.086662 kubelet[2263]: I0213 20:01:20.086372 2263 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:01:20.086662 kubelet[2263]: I0213 20:01:20.086476 2263 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:01:20.087738 kubelet[2263]: E0213 20:01:20.087687 2263 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 20:01:20.293280 kubelet[2263]: W0213 20:01:20.293077 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:20.293280 kubelet[2263]: E0213 20:01:20.293194 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:20.427447 kubelet[2263]: W0213 20:01:20.427378 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:20.427447 kubelet[2263]: E0213 20:01:20.427434 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:20.630844 kubelet[2263]: W0213 20:01:20.630707 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:20.630844 kubelet[2263]: E0213 20:01:20.630767 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:20.689565 kubelet[2263]: E0213 20:01:20.689518 2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="1.6s" Feb 13 20:01:20.771816 kubelet[2263]: W0213 20:01:20.771742 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:20.771816 kubelet[2263]: E0213 20:01:20.771813 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 
20:01:20.794206 kubelet[2263]: I0213 20:01:20.794168 2263 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:01:20.794587 kubelet[2263]: E0213 20:01:20.794543 2263 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Feb 13 20:01:20.808814 kubelet[2263]: I0213 20:01:20.808726 2263 topology_manager.go:215] "Topology Admit Handler" podUID="3d1a34718c2e8914454ead49edf73113" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 20:01:20.810142 kubelet[2263]: I0213 20:01:20.810085 2263 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 20:01:20.811392 kubelet[2263]: I0213 20:01:20.811332 2263 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 20:01:20.816516 systemd[1]: Created slice kubepods-burstable-pod3d1a34718c2e8914454ead49edf73113.slice - libcontainer container kubepods-burstable-pod3d1a34718c2e8914454ead49edf73113.slice. Feb 13 20:01:20.843624 systemd[1]: Created slice kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice - libcontainer container kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice. Feb 13 20:01:20.857921 systemd[1]: Created slice kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice - libcontainer container kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice. Feb 13 20:01:20.893060 kubelet[2263]: I0213 20:01:20.892949 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:01:20.893060 kubelet[2263]: I0213 20:01:20.892989 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3d1a34718c2e8914454ead49edf73113-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3d1a34718c2e8914454ead49edf73113\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:01:20.893060 kubelet[2263]: I0213 20:01:20.893007 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:01:20.893060 kubelet[2263]: I0213 20:01:20.893026 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:01:20.893060 kubelet[2263]: I0213 20:01:20.893039 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3d1a34718c2e8914454ead49edf73113-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"3d1a34718c2e8914454ead49edf73113\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:01:20.893529 kubelet[2263]: I0213 20:01:20.893053 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3d1a34718c2e8914454ead49edf73113-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3d1a34718c2e8914454ead49edf73113\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:01:20.893529 kubelet[2263]: I0213 20:01:20.893148 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:01:20.893529 kubelet[2263]: I0213 20:01:20.893205 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:01:20.893529 kubelet[2263]: I0213 20:01:20.893251 2263 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:01:21.141534 kubelet[2263]: E0213 20:01:21.141473 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:21.142284 containerd[1466]: time="2025-02-13T20:01:21.142238363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3d1a34718c2e8914454ead49edf73113,Namespace:kube-system,Attempt:0,}" Feb 13 20:01:21.156612 kubelet[2263]: E0213 20:01:21.156512 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:21.157085 containerd[1466]: time="2025-02-13T20:01:21.157018320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 20:01:21.160461 kubelet[2263]: E0213 20:01:21.160428 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:21.160965 containerd[1466]: time="2025-02-13T20:01:21.160935104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 20:01:21.408143 kubelet[2263]: E0213 20:01:21.407999 2263 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.119:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:22.290587 kubelet[2263]: E0213 
20:01:22.290533 2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="3.2s" Feb 13 20:01:22.374297 kubelet[2263]: W0213 20:01:22.374250 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:22.374297 kubelet[2263]: E0213 20:01:22.374295 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:22.396404 kubelet[2263]: I0213 20:01:22.396377 2263 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:01:22.396670 kubelet[2263]: E0213 20:01:22.396651 2263 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Feb 13 20:01:22.483398 kubelet[2263]: W0213 20:01:22.483358 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:22.483398 kubelet[2263]: E0213 20:01:22.483398 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:22.679954 kubelet[2263]: E0213 20:01:22.679779 2263 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.119:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.119:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dd016c90d833 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 20:01:19.281281075 +0000 UTC m=+0.435063207,LastTimestamp:2025-02-13 20:01:19.281281075 +0000 UTC m=+0.435063207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 20:01:22.786025 kubelet[2263]: W0213 20:01:22.785980 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:22.786025 kubelet[2263]: E0213 20:01:22.786038 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:23.045700 kubelet[2263]: W0213 20:01:23.045591 2263 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:23.045700 kubelet[2263]: E0213 20:01:23.045630 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:24.162062 update_engine[1456]: I20250213 20:01:24.161984 1456 update_attempter.cc:509] Updating boot flags... Feb 13 20:01:24.200359 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2302) Feb 13 20:01:24.232165 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2305) Feb 13 20:01:24.443895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1942161610.mount: Deactivated successfully. Feb 13 20:01:24.517492 containerd[1466]: time="2025-02-13T20:01:24.517447060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:01:24.544179 containerd[1466]: time="2025-02-13T20:01:24.544136569Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:01:24.559340 containerd[1466]: time="2025-02-13T20:01:24.559300091Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:01:24.568805 containerd[1466]: time="2025-02-13T20:01:24.568774695Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:01:24.581729 containerd[1466]: time="2025-02-13T20:01:24.581690630Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:01:24.593065 containerd[1466]: time="2025-02-13T20:01:24.593029173Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:01:24.606484 containerd[1466]: time="2025-02-13T20:01:24.606441771Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 20:01:24.619982 containerd[1466]: time="2025-02-13T20:01:24.619938188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:01:24.620664 containerd[1466]: time="2025-02-13T20:01:24.620640932Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.459630345s" Feb 13 20:01:24.621265 containerd[1466]: time="2025-02-13T20:01:24.621239078Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.464120295s" Feb 13 20:01:24.649702 containerd[1466]: time="2025-02-13T20:01:24.649661807Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 3.507347468s" Feb 13 20:01:24.920872 containerd[1466]: time="2025-02-13T20:01:24.920462242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:01:24.920872 containerd[1466]: time="2025-02-13T20:01:24.920526775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:01:24.920872 containerd[1466]: time="2025-02-13T20:01:24.920546743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:01:24.920872 containerd[1466]: time="2025-02-13T20:01:24.920650590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:01:24.921692 containerd[1466]: time="2025-02-13T20:01:24.921418397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:01:24.921692 containerd[1466]: time="2025-02-13T20:01:24.921490985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:01:24.921692 containerd[1466]: time="2025-02-13T20:01:24.921505633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:01:24.921692 containerd[1466]: time="2025-02-13T20:01:24.921576748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:01:24.931620 containerd[1466]: time="2025-02-13T20:01:24.930812108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:01:24.931620 containerd[1466]: time="2025-02-13T20:01:24.931443626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:01:24.931620 containerd[1466]: time="2025-02-13T20:01:24.931463053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:01:24.931908 containerd[1466]: time="2025-02-13T20:01:24.931841822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:01:24.944263 systemd[1]: Started cri-containerd-499a61a9f7c6d22dbae953655b108b8ef857809d10608e025d84d3d9c2acad84.scope - libcontainer container 499a61a9f7c6d22dbae953655b108b8ef857809d10608e025d84d3d9c2acad84. 
Feb 13 20:01:24.948440 systemd[1]: Started cri-containerd-257cadd289ce163300fa412855a776102f467883b216aa7bea52f4e7c95790bd.scope - libcontainer container 257cadd289ce163300fa412855a776102f467883b216aa7bea52f4e7c95790bd. Feb 13 20:01:24.950334 systemd[1]: Started cri-containerd-8fa40006a4738c452c8bdd2d4521ec734840a95a3c3b36772625dff658735415.scope - libcontainer container 8fa40006a4738c452c8bdd2d4521ec734840a95a3c3b36772625dff658735415. Feb 13 20:01:24.987806 containerd[1466]: time="2025-02-13T20:01:24.987452958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"499a61a9f7c6d22dbae953655b108b8ef857809d10608e025d84d3d9c2acad84\"" Feb 13 20:01:24.990290 kubelet[2263]: E0213 20:01:24.990253 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:24.994223 containerd[1466]: time="2025-02-13T20:01:24.994193202Z" level=info msg="CreateContainer within sandbox \"499a61a9f7c6d22dbae953655b108b8ef857809d10608e025d84d3d9c2acad84\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:01:24.995062 containerd[1466]: time="2025-02-13T20:01:24.995038546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fa40006a4738c452c8bdd2d4521ec734840a95a3c3b36772625dff658735415\"" Feb 13 20:01:24.996203 kubelet[2263]: E0213 20:01:24.996172 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:25.000863 containerd[1466]: time="2025-02-13T20:01:25.000834627Z" level=info msg="CreateContainer within sandbox \"8fa40006a4738c452c8bdd2d4521ec734840a95a3c3b36772625dff658735415\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:01:25.001126 containerd[1466]: time="2025-02-13T20:01:25.001056559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3d1a34718c2e8914454ead49edf73113,Namespace:kube-system,Attempt:0,} returns sandbox id \"257cadd289ce163300fa412855a776102f467883b216aa7bea52f4e7c95790bd\"" Feb 13 20:01:25.001640 kubelet[2263]: E0213 20:01:25.001614 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:25.003545 containerd[1466]: time="2025-02-13T20:01:25.003513920Z" level=info msg="CreateContainer within sandbox \"257cadd289ce163300fa412855a776102f467883b216aa7bea52f4e7c95790bd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:01:25.491320 kubelet[2263]: E0213 20:01:25.491257 2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="6.4s" Feb 13 20:01:25.496988 containerd[1466]: time="2025-02-13T20:01:25.496946572Z" level=info msg="CreateContainer within sandbox \"8fa40006a4738c452c8bdd2d4521ec734840a95a3c3b36772625dff658735415\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"bcdab140831f8dafa6ce39e11ef5fcd2d17c1bb11f058e752d5a82ce874d6d32\"" Feb 13 20:01:25.497660 containerd[1466]: time="2025-02-13T20:01:25.497457931Z" level=info msg="StartContainer for \"bcdab140831f8dafa6ce39e11ef5fcd2d17c1bb11f058e752d5a82ce874d6d32\"" Feb 13 20:01:25.532266 systemd[1]: Started cri-containerd-bcdab140831f8dafa6ce39e11ef5fcd2d17c1bb11f058e752d5a82ce874d6d32.scope - libcontainer container bcdab140831f8dafa6ce39e11ef5fcd2d17c1bb11f058e752d5a82ce874d6d32. Feb 13 20:01:25.541336 containerd[1466]: time="2025-02-13T20:01:25.541292340Z" level=info msg="CreateContainer within sandbox \"499a61a9f7c6d22dbae953655b108b8ef857809d10608e025d84d3d9c2acad84\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"18d793ca163cd3a70c4b2351718bf395b6bad46eb2e888e663f5df6be962b958\"" Feb 13 20:01:25.541710 containerd[1466]: time="2025-02-13T20:01:25.541690626Z" level=info msg="StartContainer for \"18d793ca163cd3a70c4b2351718bf395b6bad46eb2e888e663f5df6be962b958\"" Feb 13 20:01:25.571344 systemd[1]: Started cri-containerd-18d793ca163cd3a70c4b2351718bf395b6bad46eb2e888e663f5df6be962b958.scope - libcontainer container 18d793ca163cd3a70c4b2351718bf395b6bad46eb2e888e663f5df6be962b958. Feb 13 20:01:25.598262 kubelet[2263]: I0213 20:01:25.598227 2263 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:01:25.598544 kubelet[2263]: E0213 20:01:25.598518 2263 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Feb 13 20:01:25.669837 containerd[1466]: time="2025-02-13T20:01:25.669778394Z" level=info msg="CreateContainer within sandbox \"257cadd289ce163300fa412855a776102f467883b216aa7bea52f4e7c95790bd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"40bb0c35280f46e31911fcab4fb4d98b74089cb03a70836c02bacc4a4427091a\"" Feb 13 20:01:25.669974 containerd[1466]: time="2025-02-13T20:01:25.669915514Z" level=info msg="StartContainer for \"18d793ca163cd3a70c4b2351718bf395b6bad46eb2e888e663f5df6be962b958\" returns successfully" Feb 13 20:01:25.669974 containerd[1466]: time="2025-02-13T20:01:25.669964156Z" level=info msg="StartContainer for \"bcdab140831f8dafa6ce39e11ef5fcd2d17c1bb11f058e752d5a82ce874d6d32\" returns successfully" Feb 13 20:01:25.670567 containerd[1466]: time="2025-02-13T20:01:25.670517646Z" level=info msg="StartContainer for \"40bb0c35280f46e31911fcab4fb4d98b74089cb03a70836c02bacc4a4427091a\"" Feb 13 20:01:25.699225 systemd[1]: Started cri-containerd-40bb0c35280f46e31911fcab4fb4d98b74089cb03a70836c02bacc4a4427091a.scope - libcontainer container 40bb0c35280f46e31911fcab4fb4d98b74089cb03a70836c02bacc4a4427091a. 
Feb 13 20:01:25.730625 kubelet[2263]: W0213 20:01:25.730546 2263 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:25.730709 kubelet[2263]: E0213 20:01:25.730628 2263 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:25.739148 kubelet[2263]: E0213 20:01:25.739123 2263 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.119:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.119:6443: connect: connection refused Feb 13 20:01:25.781912 containerd[1466]: time="2025-02-13T20:01:25.781869442Z" level=info msg="StartContainer for \"40bb0c35280f46e31911fcab4fb4d98b74089cb03a70836c02bacc4a4427091a\" returns successfully" Feb 13 20:01:26.321391 kubelet[2263]: E0213 20:01:26.321354 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:26.323554 kubelet[2263]: E0213 20:01:26.323532 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:26.327251 kubelet[2263]: E0213 20:01:26.327227 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:27.026279 kubelet[2263]: E0213 20:01:27.026239 2263 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Feb 13 20:01:27.329616 kubelet[2263]: E0213 20:01:27.329491 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:27.329616 kubelet[2263]: E0213 20:01:27.329614 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:27.330274 kubelet[2263]: E0213 20:01:27.330256 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:27.592272 kubelet[2263]: E0213 20:01:27.592155 2263 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Feb 13 20:01:28.094231 kubelet[2263]: E0213 20:01:28.094195 2263 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Feb 13 20:01:28.331508 kubelet[2263]: E0213 20:01:28.331467 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:28.331508 kubelet[2263]: E0213 20:01:28.331469 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:28.331894 kubelet[2263]: E0213 20:01:28.331649 2263 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:29.063367 kubelet[2263]: E0213 20:01:29.063324 2263 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Feb 13 20:01:30.087824 kubelet[2263]: E0213 20:01:30.087786 2263 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 20:01:31.894690 kubelet[2263]: E0213 20:01:31.894644 2263 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 20:01:32.000222 kubelet[2263]: I0213 20:01:32.000184 2263 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:01:32.007865 kubelet[2263]: I0213 20:01:32.007834 2263 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 20:01:32.013702 kubelet[2263]: E0213 20:01:32.013678 2263 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:01:32.114445 kubelet[2263]: E0213 20:01:32.114397 2263 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:01:32.215252 kubelet[2263]: E0213 20:01:32.215091 2263 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:01:32.316021 kubelet[2263]: E0213 20:01:32.315995 2263 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:01:32.416523 kubelet[2263]: E0213 20:01:32.416483 2263 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:01:32.432406 systemd[1]: Reloading requested from client PID 2555 ('systemctl') (unit session-9.scope)... Feb 13 20:01:32.432423 systemd[1]: Reloading... Feb 13 20:01:32.507124 zram_generator::config[2600]: No configuration found. Feb 13 20:01:32.517398 kubelet[2263]: E0213 20:01:32.517355 2263 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:01:32.608489 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:01:32.617681 kubelet[2263]: E0213 20:01:32.617641 2263 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:01:32.699792 systemd[1]: Reloading finished in 266 ms. Feb 13 20:01:32.717923 kubelet[2263]: E0213 20:01:32.717884 2263 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 20:01:32.752953 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:01:32.773651 systemd[1]: kubelet.service: Deactivated successfully. 
Feb 13 20:01:32.773962 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:01:32.785306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:01:32.931905 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:01:32.937163 (kubelet)[2639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:01:32.979390 kubelet[2639]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:01:32.979390 kubelet[2639]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:01:32.979390 kubelet[2639]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:01:32.979787 kubelet[2639]: I0213 20:01:32.979425 2639 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:01:32.985264 kubelet[2639]: I0213 20:01:32.985234 2639 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:01:32.985264 kubelet[2639]: I0213 20:01:32.985262 2639 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:01:32.985500 kubelet[2639]: I0213 20:01:32.985461 2639 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:01:32.986675 kubelet[2639]: I0213 20:01:32.986655 2639 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:01:32.988247 kubelet[2639]: I0213 20:01:32.987880 2639 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:01:32.998443 kubelet[2639]: I0213 20:01:32.998403 2639 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:01:32.998716 kubelet[2639]: I0213 20:01:32.998674 2639 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:01:32.998963 kubelet[2639]: I0213 20:01:32.998715 2639 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:01:32.999073 kubelet[2639]: I0213 20:01:32.998972 2639 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:01:32.999073 kubelet[2639]: I0213 20:01:32.998983 2639 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:01:32.999073 kubelet[2639]: I0213 20:01:32.999039 2639 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:01:32.999215 kubelet[2639]: I0213 20:01:32.999193 2639 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:01:32.999215 kubelet[2639]: I0213 20:01:32.999209 2639 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:01:32.999284 kubelet[2639]: I0213 20:01:32.999230 2639 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:01:32.999284 kubelet[2639]: I0213 20:01:32.999250 2639 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:01:33.004133 kubelet[2639]: I0213 20:01:33.003461 2639 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:01:33.004133 kubelet[2639]: I0213 20:01:33.003656 2639 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:01:33.004133 kubelet[2639]: I0213 20:01:33.004029 2639 server.go:1264] "Started kubelet" Feb 13 20:01:33.005043 kubelet[2639]: I0213 20:01:33.004964 2639 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:01:33.005184 kubelet[2639]: I0213 20:01:33.005142 2639 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:01:33.005439 kubelet[2639]: I0213 20:01:33.005410 2639 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:01:33.006734 kubelet[2639]: I0213 20:01:33.006709 2639 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:01:33.007173 kubelet[2639]: I0213 20:01:33.006985 2639 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:01:33.010833 kubelet[2639]: E0213 20:01:33.010798 2639 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:01:33.011531 kubelet[2639]: I0213 20:01:33.011196 2639 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:01:33.011531 kubelet[2639]: I0213 20:01:33.011292 2639 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:01:33.011531 kubelet[2639]: I0213 20:01:33.011430 2639 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:01:33.012780 kubelet[2639]: I0213 20:01:33.012745 2639 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:01:33.012942 kubelet[2639]: I0213 20:01:33.012908 2639 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:01:33.014129 kubelet[2639]: I0213 20:01:33.014072 2639 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:01:33.020368 kubelet[2639]: I0213 20:01:33.020322 2639 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:01:33.021979 kubelet[2639]: I0213 20:01:33.021944 2639 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:01:33.022036 kubelet[2639]: I0213 20:01:33.021987 2639 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:01:33.022036 kubelet[2639]: I0213 20:01:33.022005 2639 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:01:33.022134 kubelet[2639]: E0213 20:01:33.022053 2639 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:01:33.052492 kubelet[2639]: I0213 20:01:33.052395 2639 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:01:33.052922 kubelet[2639]: I0213 20:01:33.052707 2639 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:01:33.052922 kubelet[2639]: I0213 20:01:33.052729 2639 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:01:33.052922 kubelet[2639]: I0213 20:01:33.052865 2639 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:01:33.052922 kubelet[2639]: I0213 20:01:33.052875 2639 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:01:33.052922 kubelet[2639]: I0213 20:01:33.052892 2639 policy_none.go:49] "None policy: Start" Feb 13 20:01:33.053795 kubelet[2639]: I0213 20:01:33.053748 2639 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:01:33.053869 kubelet[2639]: I0213 20:01:33.053859 2639 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:01:33.054037 kubelet[2639]: I0213 20:01:33.054025 2639 state_mem.go:75] "Updated machine memory state" Feb 13 20:01:33.058286 kubelet[2639]: I0213 20:01:33.058271 2639 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:01:33.058649 
kubelet[2639]: I0213 20:01:33.058618 2639 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:01:33.060183 kubelet[2639]: I0213 20:01:33.060063 2639 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:01:33.122968 kubelet[2639]: I0213 20:01:33.122910 2639 topology_manager.go:215] "Topology Admit Handler" podUID="3d1a34718c2e8914454ead49edf73113" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 20:01:33.123092 kubelet[2639]: I0213 20:01:33.123007 2639 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 20:01:33.123092 kubelet[2639]: I0213 20:01:33.123071 2639 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 20:01:33.167494 kubelet[2639]: I0213 20:01:33.167442 2639 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 20:01:33.172913 kubelet[2639]: I0213 20:01:33.172885 2639 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 20:01:33.172985 kubelet[2639]: I0213 20:01:33.172968 2639 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 20:01:33.312565 kubelet[2639]: I0213 20:01:33.312432 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3d1a34718c2e8914454ead49edf73113-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3d1a34718c2e8914454ead49edf73113\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:01:33.312565 kubelet[2639]: I0213 20:01:33.312465 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:01:33.312565 kubelet[2639]: I0213 20:01:33.312485 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:01:33.312565 kubelet[2639]: I0213 20:01:33.312514 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:01:33.312565 kubelet[2639]: I0213 20:01:33.312529 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 20:01:33.312818 kubelet[2639]: I0213 20:01:33.312542 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3d1a34718c2e8914454ead49edf73113-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3d1a34718c2e8914454ead49edf73113\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:01:33.312818 kubelet[2639]: I0213 20:01:33.312556 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3d1a34718c2e8914454ead49edf73113-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3d1a34718c2e8914454ead49edf73113\") " pod="kube-system/kube-apiserver-localhost" Feb 13 20:01:33.312818 kubelet[2639]: I0213 20:01:33.312572 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:01:33.312818 kubelet[2639]: I0213 20:01:33.312588 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 20:01:33.433969 kubelet[2639]: E0213 20:01:33.433912 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:33.434429 kubelet[2639]: E0213 20:01:33.434406 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:33.434731 kubelet[2639]: E0213 20:01:33.434677 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:34.000596 kubelet[2639]: I0213 20:01:34.000557 2639 apiserver.go:52] "Watching apiserver" Feb 13 20:01:34.011730 kubelet[2639]: I0213 20:01:34.011694 2639 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:01:34.038121 kubelet[2639]: E0213 20:01:34.036982 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:34.048178 kubelet[2639]: E0213 20:01:34.048137 2639 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 20:01:34.048571 kubelet[2639]: E0213 20:01:34.048552 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:34.048651 kubelet[2639]: E0213 20:01:34.048636 2639 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 20:01:34.048977 kubelet[2639]: E0213 20:01:34.048960 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:34.060058 
kubelet[2639]: I0213 20:01:34.059956 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.05994053 podStartE2EDuration="1.05994053s" podCreationTimestamp="2025-02-13 20:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:01:34.058386397 +0000 UTC m=+1.117123308" watchObservedRunningTime="2025-02-13 20:01:34.05994053 +0000 UTC m=+1.118677431" Feb 13 20:01:34.079643 kubelet[2639]: I0213 20:01:34.079549 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.079532232 podStartE2EDuration="1.079532232s" podCreationTimestamp="2025-02-13 20:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:01:34.072019094 +0000 UTC m=+1.130756005" watchObservedRunningTime="2025-02-13 20:01:34.079532232 +0000 UTC m=+1.138269133" Feb 13 20:01:34.097519 kubelet[2639]: I0213 20:01:34.097450 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.097432832 podStartE2EDuration="1.097432832s" podCreationTimestamp="2025-02-13 20:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:01:34.080029801 +0000 UTC m=+1.138766712" watchObservedRunningTime="2025-02-13 20:01:34.097432832 +0000 UTC m=+1.156169743" Feb 13 20:01:35.037424 kubelet[2639]: E0213 20:01:35.037385 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:35.037822 kubelet[2639]: E0213 20:01:35.037493 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:36.836201 sudo[1666]: pam_unix(sudo:session): session closed for user root Feb 13 20:01:36.838252 sshd[1661]: pam_unix(sshd:session): session closed for user core Feb 13 20:01:36.842758 systemd[1]: sshd@8-10.0.0.119:22-10.0.0.1:58638.service: Deactivated successfully. Feb 13 20:01:36.845132 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:01:36.845371 systemd[1]: session-9.scope: Consumed 4.565s CPU time, 192.6M memory peak, 0B memory swap peak. Feb 13 20:01:36.845985 systemd-logind[1454]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:01:36.846797 systemd-logind[1454]: Removed session 9. 
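The pod_startup_latency_tracker entries above report podStartSLOduration as the gap between podCreationTimestamp and observedRunningTime (no image pull happened, so firstStartedPulling/lastFinishedPulling are the zero time). A minimal, hypothetical Go sketch of that arithmetic, using the kube-controller-manager values from this log rather than the tracker's actual implementation:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the "Observed pod startup duration" entry for
	// kube-system/kube-controller-manager-localhost above; the layout is
	// Go's default time.Time String() format, which the log uses.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-02-13 20:01:33 +0000 UTC")
	running, _ := time.Parse(layout, "2025-02-13 20:01:34.05994053 +0000 UTC")

	// podStartSLOduration in the log (1.05994053s) is simply the elapsed
	// time between pod creation and the observed running time.
	fmt.Println(running.Sub(created)) // 1.05994053s
}
```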
Feb 13 20:01:38.178120 kubelet[2639]: E0213 20:01:38.178040 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:39.002139 kubelet[2639]: E0213 20:01:39.002063 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:39.042804 kubelet[2639]: E0213 20:01:39.042769 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:39.042804 kubelet[2639]: E0213 20:01:39.042822 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:41.577667 kubelet[2639]: E0213 20:01:41.577628 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:42.046880 kubelet[2639]: E0213 20:01:42.046842 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:44.114142 kubelet[2639]: I0213 20:01:44.114080 2639 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:01:44.114734 containerd[1466]: time="2025-02-13T20:01:44.114700629Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 20:01:44.115064 kubelet[2639]: I0213 20:01:44.114972 2639 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:01:44.938863 kubelet[2639]: I0213 20:01:44.938045 2639 topology_manager.go:215] "Topology Admit Handler" podUID="f720fd61-fb6b-4295-8b65-197de0163d71" podNamespace="kube-system" podName="kube-proxy-54zlv" Feb 13 20:01:44.947456 systemd[1]: Created slice kubepods-besteffort-podf720fd61_fb6b_4295_8b65_197de0163d71.slice - libcontainer container kubepods-besteffort-podf720fd61_fb6b_4295_8b65_197de0163d71.slice. 
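The recurring dns.go:153 errors above and below are the kubelet capping the node's resolv.conf nameservers (the limit is three, matching the classic resolver limit); extra entries are dropped and only "1.1.1.1 1.0.0.1 8.8.8.8" is applied. A minimal, hypothetical sketch of that truncation, not the kubelet's own code:

```go
package main

import (
	"fmt"
	"strings"
)

// capNameservers mirrors the behaviour described by the dns.go:153 entries:
// when more nameservers are configured than the limit, the extras are
// dropped and only the first ones are applied.
func capNameservers(servers []string, limit int) []string {
	if len(servers) > limit {
		return servers[:limit]
	}
	return servers
}

func main() {
	// Assumed host resolv.conf with a fourth server; the log only shows the
	// surviving three, so the dropped entry here is purely illustrative.
	configured := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.168.0.1"}
	applied := capNameservers(configured, 3)
	fmt.Println("applied nameserver line is:", strings.Join(applied, " "))
}
```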
Feb 13 20:01:44.988995 kubelet[2639]: I0213 20:01:44.988933 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f720fd61-fb6b-4295-8b65-197de0163d71-kube-proxy\") pod \"kube-proxy-54zlv\" (UID: \"f720fd61-fb6b-4295-8b65-197de0163d71\") " pod="kube-system/kube-proxy-54zlv" Feb 13 20:01:44.988995 kubelet[2639]: I0213 20:01:44.988983 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f720fd61-fb6b-4295-8b65-197de0163d71-lib-modules\") pod \"kube-proxy-54zlv\" (UID: \"f720fd61-fb6b-4295-8b65-197de0163d71\") " pod="kube-system/kube-proxy-54zlv" Feb 13 20:01:44.989193 kubelet[2639]: I0213 20:01:44.989007 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft4zs\" (UniqueName: \"kubernetes.io/projected/f720fd61-fb6b-4295-8b65-197de0163d71-kube-api-access-ft4zs\") pod \"kube-proxy-54zlv\" (UID: \"f720fd61-fb6b-4295-8b65-197de0163d71\") " pod="kube-system/kube-proxy-54zlv" Feb 13 20:01:44.989193 kubelet[2639]: I0213 20:01:44.989029 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f720fd61-fb6b-4295-8b65-197de0163d71-xtables-lock\") pod \"kube-proxy-54zlv\" (UID: \"f720fd61-fb6b-4295-8b65-197de0163d71\") " pod="kube-system/kube-proxy-54zlv" Feb 13 20:01:45.099946 kubelet[2639]: I0213 20:01:45.099907 2639 topology_manager.go:215] "Topology Admit Handler" podUID="a7883467-51ba-49c1-98a2-a02c69523a6e" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-mx5cv" Feb 13 20:01:45.112617 systemd[1]: Created slice kubepods-besteffort-poda7883467_51ba_49c1_98a2_a02c69523a6e.slice - libcontainer container kubepods-besteffort-poda7883467_51ba_49c1_98a2_a02c69523a6e.slice. 
Feb 13 20:01:45.190260 kubelet[2639]: I0213 20:01:45.190116 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bll5n\" (UniqueName: \"kubernetes.io/projected/a7883467-51ba-49c1-98a2-a02c69523a6e-kube-api-access-bll5n\") pod \"tigera-operator-7bc55997bb-mx5cv\" (UID: \"a7883467-51ba-49c1-98a2-a02c69523a6e\") " pod="tigera-operator/tigera-operator-7bc55997bb-mx5cv" Feb 13 20:01:45.190260 kubelet[2639]: I0213 20:01:45.190156 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a7883467-51ba-49c1-98a2-a02c69523a6e-var-lib-calico\") pod \"tigera-operator-7bc55997bb-mx5cv\" (UID: \"a7883467-51ba-49c1-98a2-a02c69523a6e\") " pod="tigera-operator/tigera-operator-7bc55997bb-mx5cv" Feb 13 20:01:45.260988 kubelet[2639]: E0213 20:01:45.260956 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:45.261729 containerd[1466]: time="2025-02-13T20:01:45.261506713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-54zlv,Uid:f720fd61-fb6b-4295-8b65-197de0163d71,Namespace:kube-system,Attempt:0,}" Feb 13 20:01:45.415955 containerd[1466]: time="2025-02-13T20:01:45.415908747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-mx5cv,Uid:a7883467-51ba-49c1-98a2-a02c69523a6e,Namespace:tigera-operator,Attempt:0,}" Feb 13 20:01:45.458602 containerd[1466]: time="2025-02-13T20:01:45.458444284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:01:45.458602 containerd[1466]: time="2025-02-13T20:01:45.458503546Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:01:45.458602 containerd[1466]: time="2025-02-13T20:01:45.458518273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:01:45.458743 containerd[1466]: time="2025-02-13T20:01:45.458599145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:01:45.472023 containerd[1466]: time="2025-02-13T20:01:45.471771382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:01:45.472023 containerd[1466]: time="2025-02-13T20:01:45.471835201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:01:45.472023 containerd[1466]: time="2025-02-13T20:01:45.471850280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:01:45.472198 containerd[1466]: time="2025-02-13T20:01:45.472060376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:01:45.483314 systemd[1]: Started cri-containerd-2561f78b6eccee28bc5a3f717ea6eb22f08d2da62aba70273b458bfad6a84a26.scope - libcontainer container 2561f78b6eccee28bc5a3f717ea6eb22f08d2da62aba70273b458bfad6a84a26. 
Feb 13 20:01:45.486352 systemd[1]: Started cri-containerd-e81c89bf98a29de7b97b3afed6023935a48879f4a107432e31604c6f8f4b7730.scope - libcontainer container e81c89bf98a29de7b97b3afed6023935a48879f4a107432e31604c6f8f4b7730. Feb 13 20:01:45.505129 containerd[1466]: time="2025-02-13T20:01:45.504862074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-54zlv,Uid:f720fd61-fb6b-4295-8b65-197de0163d71,Namespace:kube-system,Attempt:0,} returns sandbox id \"2561f78b6eccee28bc5a3f717ea6eb22f08d2da62aba70273b458bfad6a84a26\"" Feb 13 20:01:45.506963 kubelet[2639]: E0213 20:01:45.505912 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:45.509320 containerd[1466]: time="2025-02-13T20:01:45.509285755Z" level=info msg="CreateContainer within sandbox \"2561f78b6eccee28bc5a3f717ea6eb22f08d2da62aba70273b458bfad6a84a26\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:01:45.523633 containerd[1466]: time="2025-02-13T20:01:45.523590160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-mx5cv,Uid:a7883467-51ba-49c1-98a2-a02c69523a6e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e81c89bf98a29de7b97b3afed6023935a48879f4a107432e31604c6f8f4b7730\"" Feb 13 20:01:45.525841 containerd[1466]: time="2025-02-13T20:01:45.525818972Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 20:01:45.530789 containerd[1466]: time="2025-02-13T20:01:45.530757662Z" level=info msg="CreateContainer within sandbox \"2561f78b6eccee28bc5a3f717ea6eb22f08d2da62aba70273b458bfad6a84a26\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3f477cda65e72a2d241523cde0452a3b5fbd994ab9f6ded7ac8aa2a1f5f0449f\"" Feb 13 20:01:45.531240 containerd[1466]: time="2025-02-13T20:01:45.531216305Z" level=info msg="StartContainer for \"3f477cda65e72a2d241523cde0452a3b5fbd994ab9f6ded7ac8aa2a1f5f0449f\"" Feb 13 20:01:45.562276 systemd[1]: Started cri-containerd-3f477cda65e72a2d241523cde0452a3b5fbd994ab9f6ded7ac8aa2a1f5f0449f.scope - libcontainer container 3f477cda65e72a2d241523cde0452a3b5fbd994ab9f6ded7ac8aa2a1f5f0449f. Feb 13 20:01:45.593719 containerd[1466]: time="2025-02-13T20:01:45.593662473Z" level=info msg="StartContainer for \"3f477cda65e72a2d241523cde0452a3b5fbd994ab9f6ded7ac8aa2a1f5f0449f\" returns successfully" Feb 13 20:01:46.054780 kubelet[2639]: E0213 20:01:46.054739 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:46.062881 kubelet[2639]: I0213 20:01:46.062831 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-54zlv" podStartSLOduration=2.062812922 podStartE2EDuration="2.062812922s" podCreationTimestamp="2025-02-13 20:01:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:01:46.062743542 +0000 UTC m=+13.121480463" watchObservedRunningTime="2025-02-13 20:01:46.062812922 +0000 UTC m=+13.121549833" Feb 13 20:01:50.490138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1211554358.mount: Deactivated successfully. 
Feb 13 20:01:50.761723 containerd[1466]: time="2025-02-13T20:01:50.761608337Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:01:50.762442 containerd[1466]: time="2025-02-13T20:01:50.762381812Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 20:01:50.763432 containerd[1466]: time="2025-02-13T20:01:50.763396047Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:01:50.765413 containerd[1466]: time="2025-02-13T20:01:50.765374064Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:01:50.766013 containerd[1466]: time="2025-02-13T20:01:50.765983420Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 5.239972706s" Feb 13 20:01:50.766013 containerd[1466]: time="2025-02-13T20:01:50.766009038Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 20:01:50.770290 containerd[1466]: time="2025-02-13T20:01:50.770255699Z" level=info msg="CreateContainer within sandbox \"e81c89bf98a29de7b97b3afed6023935a48879f4a107432e31604c6f8f4b7730\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 20:01:51.046705 containerd[1466]: time="2025-02-13T20:01:51.046643276Z" level=info msg="CreateContainer within sandbox \"e81c89bf98a29de7b97b3afed6023935a48879f4a107432e31604c6f8f4b7730\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d0f8e8158445b239d947c5d4d2ce8bd3f1f37400272280504a5f5f9113094ca3\"" Feb 13 20:01:51.047128 containerd[1466]: time="2025-02-13T20:01:51.047081770Z" level=info msg="StartContainer for \"d0f8e8158445b239d947c5d4d2ce8bd3f1f37400272280504a5f5f9113094ca3\"" Feb 13 20:01:51.079230 systemd[1]: Started cri-containerd-d0f8e8158445b239d947c5d4d2ce8bd3f1f37400272280504a5f5f9113094ca3.scope - libcontainer container d0f8e8158445b239d947c5d4d2ce8bd3f1f37400272280504a5f5f9113094ca3. 
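For scale, the operator pull above reports 21758492 bytes fetched in 5.239972706s, roughly 4 MiB/s. A quick, illustrative back-of-envelope computation using only the numbers from that log entry:

```go
package main

import "fmt"

func main() {
	// Numbers taken from the "Pulled image quay.io/tigera/operator:v1.36.2" entry above.
	const bytes = 21758492.0    // reported image size in bytes
	const seconds = 5.239972706 // reported pull duration

	mib := bytes / (1024 * 1024)
	fmt.Printf("%.1f MiB in %.1f s = %.2f MiB/s\n", mib, seconds, mib/seconds)
}
```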
Feb 13 20:01:51.148016 containerd[1466]: time="2025-02-13T20:01:51.147961376Z" level=info msg="StartContainer for \"d0f8e8158445b239d947c5d4d2ce8bd3f1f37400272280504a5f5f9113094ca3\" returns successfully" Feb 13 20:01:52.079266 kubelet[2639]: I0213 20:01:52.079184 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-mx5cv" podStartSLOduration=1.835082999 podStartE2EDuration="7.079163968s" podCreationTimestamp="2025-02-13 20:01:45 +0000 UTC" firstStartedPulling="2025-02-13 20:01:45.524751735 +0000 UTC m=+12.583488646" lastFinishedPulling="2025-02-13 20:01:50.768832704 +0000 UTC m=+17.827569615" observedRunningTime="2025-02-13 20:01:52.079052619 +0000 UTC m=+19.137789530" watchObservedRunningTime="2025-02-13 20:01:52.079163968 +0000 UTC m=+19.137900879" Feb 13 20:01:53.926887 kubelet[2639]: I0213 20:01:53.926833 2639 topology_manager.go:215] "Topology Admit Handler" podUID="7b5364b4-50fd-4ce1-b857-5ce18dacc684" podNamespace="calico-system" podName="calico-typha-c9768df96-s7xhn" Feb 13 20:01:53.941269 systemd[1]: Created slice kubepods-besteffort-pod7b5364b4_50fd_4ce1_b857_5ce18dacc684.slice - libcontainer container kubepods-besteffort-pod7b5364b4_50fd_4ce1_b857_5ce18dacc684.slice. Feb 13 20:01:53.946864 kubelet[2639]: I0213 20:01:53.946776 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7b5364b4-50fd-4ce1-b857-5ce18dacc684-typha-certs\") pod \"calico-typha-c9768df96-s7xhn\" (UID: \"7b5364b4-50fd-4ce1-b857-5ce18dacc684\") " pod="calico-system/calico-typha-c9768df96-s7xhn" Feb 13 20:01:53.947424 kubelet[2639]: I0213 20:01:53.947386 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b5364b4-50fd-4ce1-b857-5ce18dacc684-tigera-ca-bundle\") pod \"calico-typha-c9768df96-s7xhn\" (UID: \"7b5364b4-50fd-4ce1-b857-5ce18dacc684\") " pod="calico-system/calico-typha-c9768df96-s7xhn" Feb 13 20:01:53.947541 kubelet[2639]: I0213 20:01:53.947519 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhtv6\" (UniqueName: \"kubernetes.io/projected/7b5364b4-50fd-4ce1-b857-5ce18dacc684-kube-api-access-lhtv6\") pod \"calico-typha-c9768df96-s7xhn\" (UID: \"7b5364b4-50fd-4ce1-b857-5ce18dacc684\") " pod="calico-system/calico-typha-c9768df96-s7xhn" Feb 13 20:01:53.998806 kubelet[2639]: I0213 20:01:53.998702 2639 topology_manager.go:215] "Topology Admit Handler" podUID="fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947" podNamespace="calico-system" podName="calico-node-prrfp" Feb 13 20:01:54.006537 systemd[1]: Created slice kubepods-besteffort-podfa9c4d8f_fd1a_4ae4_9fa1_3fae42f83947.slice - libcontainer container kubepods-besteffort-podfa9c4d8f_fd1a_4ae4_9fa1_3fae42f83947.slice. 
Feb 13 20:01:54.048158 kubelet[2639]: I0213 20:01:54.048045 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947-xtables-lock\") pod \"calico-node-prrfp\" (UID: \"fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947\") " pod="calico-system/calico-node-prrfp" Feb 13 20:01:54.048158 kubelet[2639]: I0213 20:01:54.048122 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947-node-certs\") pod \"calico-node-prrfp\" (UID: \"fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947\") " pod="calico-system/calico-node-prrfp" Feb 13 20:01:54.048158 kubelet[2639]: I0213 20:01:54.048145 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947-var-run-calico\") pod \"calico-node-prrfp\" (UID: \"fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947\") " pod="calico-system/calico-node-prrfp" Feb 13 20:01:54.048158 kubelet[2639]: I0213 20:01:54.048172 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947-cni-bin-dir\") pod \"calico-node-prrfp\" (UID: \"fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947\") " pod="calico-system/calico-node-prrfp" Feb 13 20:01:54.048432 kubelet[2639]: I0213 20:01:54.048194 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947-flexvol-driver-host\") pod \"calico-node-prrfp\" (UID: \"fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947\") " pod="calico-system/calico-node-prrfp" Feb 13 20:01:54.048432 kubelet[2639]: I0213 20:01:54.048219 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svnbj\" (UniqueName: \"kubernetes.io/projected/fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947-kube-api-access-svnbj\") pod \"calico-node-prrfp\" (UID: \"fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947\") " pod="calico-system/calico-node-prrfp" Feb 13 20:01:54.048432 kubelet[2639]: I0213 20:01:54.048253 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947-var-lib-calico\") pod \"calico-node-prrfp\" (UID: \"fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947\") " pod="calico-system/calico-node-prrfp" Feb 13 20:01:54.048432 kubelet[2639]: I0213 20:01:54.048288 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947-cni-net-dir\") pod \"calico-node-prrfp\" (UID: \"fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947\") " pod="calico-system/calico-node-prrfp" Feb 13 20:01:54.048432 kubelet[2639]: I0213 20:01:54.048312 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947-cni-log-dir\") pod \"calico-node-prrfp\" (UID: \"fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947\") " pod="calico-system/calico-node-prrfp" Feb 13 20:01:54.048677 kubelet[2639]: I0213 20:01:54.048333 2639 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947-lib-modules\") pod \"calico-node-prrfp\" (UID: \"fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947\") " pod="calico-system/calico-node-prrfp" Feb 13 20:01:54.048677 kubelet[2639]: I0213 20:01:54.048380 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947-policysync\") pod \"calico-node-prrfp\" (UID: \"fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947\") " pod="calico-system/calico-node-prrfp" Feb 13 20:01:54.048677 kubelet[2639]: I0213 20:01:54.048400 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947-tigera-ca-bundle\") pod \"calico-node-prrfp\" (UID: \"fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947\") " pod="calico-system/calico-node-prrfp" Feb 13 20:01:54.110728 kubelet[2639]: I0213 20:01:54.110674 2639 topology_manager.go:215] "Topology Admit Handler" podUID="2516eb1f-4a76-4950-92c4-3225425d63a6" podNamespace="calico-system" podName="csi-node-driver-gfdrh" Feb 13 20:01:54.111013 kubelet[2639]: E0213 20:01:54.110982 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfdrh" podUID="2516eb1f-4a76-4950-92c4-3225425d63a6" Feb 13 20:01:54.149036 kubelet[2639]: I0213 20:01:54.148960 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2516eb1f-4a76-4950-92c4-3225425d63a6-varrun\") pod \"csi-node-driver-gfdrh\" (UID: \"2516eb1f-4a76-4950-92c4-3225425d63a6\") " pod="calico-system/csi-node-driver-gfdrh" Feb 13 20:01:54.149224 kubelet[2639]: I0213 20:01:54.149127 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2516eb1f-4a76-4950-92c4-3225425d63a6-socket-dir\") pod \"csi-node-driver-gfdrh\" (UID: \"2516eb1f-4a76-4950-92c4-3225425d63a6\") " pod="calico-system/csi-node-driver-gfdrh" Feb 13 20:01:54.149224 kubelet[2639]: I0213 20:01:54.149170 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2jds\" (UniqueName: \"kubernetes.io/projected/2516eb1f-4a76-4950-92c4-3225425d63a6-kube-api-access-v2jds\") pod \"csi-node-driver-gfdrh\" (UID: \"2516eb1f-4a76-4950-92c4-3225425d63a6\") " pod="calico-system/csi-node-driver-gfdrh" Feb 13 20:01:54.149278 kubelet[2639]: I0213 20:01:54.149234 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2516eb1f-4a76-4950-92c4-3225425d63a6-kubelet-dir\") pod \"csi-node-driver-gfdrh\" (UID: \"2516eb1f-4a76-4950-92c4-3225425d63a6\") " pod="calico-system/csi-node-driver-gfdrh" Feb 13 20:01:54.149349 kubelet[2639]: I0213 20:01:54.149326 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2516eb1f-4a76-4950-92c4-3225425d63a6-registration-dir\") pod 
\"csi-node-driver-gfdrh\" (UID: \"2516eb1f-4a76-4950-92c4-3225425d63a6\") " pod="calico-system/csi-node-driver-gfdrh" Feb 13 20:01:54.152219 kubelet[2639]: E0213 20:01:54.152189 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.152219 kubelet[2639]: W0213 20:01:54.152212 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.155270 kubelet[2639]: E0213 20:01:54.152236 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:01:54.156341 kubelet[2639]: E0213 20:01:54.156319 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.156420 kubelet[2639]: W0213 20:01:54.156405 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.156501 kubelet[2639]: E0213 20:01:54.156486 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:01:54.164845 kubelet[2639]: E0213 20:01:54.164823 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.165235 kubelet[2639]: W0213 20:01:54.165217 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.165949 kubelet[2639]: E0213 20:01:54.165932 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:01:54.244264 kubelet[2639]: E0213 20:01:54.244145 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:54.245158 containerd[1466]: time="2025-02-13T20:01:54.244616531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c9768df96-s7xhn,Uid:7b5364b4-50fd-4ce1-b857-5ce18dacc684,Namespace:calico-system,Attempt:0,}" Feb 13 20:01:54.250268 kubelet[2639]: E0213 20:01:54.250242 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.250268 kubelet[2639]: W0213 20:01:54.250259 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.250407 kubelet[2639]: E0213 20:01:54.250276 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:01:54.250593 kubelet[2639]: E0213 20:01:54.250557 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.250593 kubelet[2639]: W0213 20:01:54.250588 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.250662 kubelet[2639]: E0213 20:01:54.250614 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:01:54.250884 kubelet[2639]: E0213 20:01:54.250871 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.250884 kubelet[2639]: W0213 20:01:54.250879 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.250961 kubelet[2639]: E0213 20:01:54.250891 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:01:54.251150 kubelet[2639]: E0213 20:01:54.251136 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.251150 kubelet[2639]: W0213 20:01:54.251148 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.251226 kubelet[2639]: E0213 20:01:54.251162 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:01:54.251416 kubelet[2639]: E0213 20:01:54.251399 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.251416 kubelet[2639]: W0213 20:01:54.251413 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.251504 kubelet[2639]: E0213 20:01:54.251431 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:01:54.251704 kubelet[2639]: E0213 20:01:54.251657 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.251704 kubelet[2639]: W0213 20:01:54.251669 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.251704 kubelet[2639]: E0213 20:01:54.251685 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:01:54.251938 kubelet[2639]: E0213 20:01:54.251923 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.251938 kubelet[2639]: W0213 20:01:54.251935 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.252020 kubelet[2639]: E0213 20:01:54.251951 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:01:54.252426 kubelet[2639]: E0213 20:01:54.252369 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.252426 kubelet[2639]: W0213 20:01:54.252388 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.252426 kubelet[2639]: E0213 20:01:54.252403 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:01:54.253475 kubelet[2639]: E0213 20:01:54.253459 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.253475 kubelet[2639]: W0213 20:01:54.253474 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.253712 kubelet[2639]: E0213 20:01:54.253520 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:01:54.253752 kubelet[2639]: E0213 20:01:54.253725 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.253752 kubelet[2639]: W0213 20:01:54.253735 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.253812 kubelet[2639]: E0213 20:01:54.253788 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:01:54.253985 kubelet[2639]: E0213 20:01:54.253964 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.253985 kubelet[2639]: W0213 20:01:54.253980 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.254115 kubelet[2639]: E0213 20:01:54.254032 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:01:54.254217 kubelet[2639]: E0213 20:01:54.254188 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.254217 kubelet[2639]: W0213 20:01:54.254204 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.254282 kubelet[2639]: E0213 20:01:54.254233 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:01:54.254625 kubelet[2639]: E0213 20:01:54.254607 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.254678 kubelet[2639]: W0213 20:01:54.254619 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.254718 kubelet[2639]: E0213 20:01:54.254679 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:01:54.254880 kubelet[2639]: E0213 20:01:54.254863 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.254880 kubelet[2639]: W0213 20:01:54.254878 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.255033 kubelet[2639]: E0213 20:01:54.255013 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:01:54.255315 kubelet[2639]: E0213 20:01:54.255184 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.255315 kubelet[2639]: W0213 20:01:54.255198 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.255315 kubelet[2639]: E0213 20:01:54.255223 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:01:54.255763 kubelet[2639]: E0213 20:01:54.255639 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.255763 kubelet[2639]: W0213 20:01:54.255656 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.258998 kubelet[2639]: E0213 20:01:54.255883 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:01:54.259147 kubelet[2639]: E0213 20:01:54.259045 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.259147 kubelet[2639]: W0213 20:01:54.259063 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.259271 kubelet[2639]: E0213 20:01:54.259229 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:01:54.261187 kubelet[2639]: E0213 20:01:54.261166 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.261261 kubelet[2639]: W0213 20:01:54.261186 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.261409 kubelet[2639]: E0213 20:01:54.261390 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:01:54.261967 kubelet[2639]: E0213 20:01:54.261941 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.261967 kubelet[2639]: W0213 20:01:54.261956 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.262161 kubelet[2639]: E0213 20:01:54.262142 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:01:54.262456 kubelet[2639]: E0213 20:01:54.262429 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.262575 kubelet[2639]: W0213 20:01:54.262473 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.262656 kubelet[2639]: E0213 20:01:54.262635 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:01:54.265590 kubelet[2639]: E0213 20:01:54.265536 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.265590 kubelet[2639]: W0213 20:01:54.265557 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.265678 kubelet[2639]: E0213 20:01:54.265646 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:01:54.265950 kubelet[2639]: E0213 20:01:54.265915 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.265950 kubelet[2639]: W0213 20:01:54.265931 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.267089 kubelet[2639]: E0213 20:01:54.267055 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:01:54.267337 kubelet[2639]: E0213 20:01:54.267227 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.267337 kubelet[2639]: W0213 20:01:54.267240 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.267509 kubelet[2639]: E0213 20:01:54.267394 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:01:54.268143 kubelet[2639]: E0213 20:01:54.268115 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.268143 kubelet[2639]: W0213 20:01:54.268133 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.268865 kubelet[2639]: E0213 20:01:54.268537 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:01:54.268865 kubelet[2639]: E0213 20:01:54.268867 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.268865 kubelet[2639]: W0213 20:01:54.268876 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.269424 kubelet[2639]: E0213 20:01:54.268888 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:01:54.271976 kubelet[2639]: E0213 20:01:54.271939 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:01:54.271976 kubelet[2639]: W0213 20:01:54.271962 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:01:54.271976 kubelet[2639]: E0213 20:01:54.271980 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:01:54.274544 containerd[1466]: time="2025-02-13T20:01:54.274303845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:01:54.274544 containerd[1466]: time="2025-02-13T20:01:54.274414943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:01:54.274544 containerd[1466]: time="2025-02-13T20:01:54.274465027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:01:54.275581 containerd[1466]: time="2025-02-13T20:01:54.275521552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:01:54.307302 systemd[1]: Started cri-containerd-79abb8fe1d9d5198fb00a3385e95c88d0d9f0db5f70ce290e287f60b150bbe35.scope - libcontainer container 79abb8fe1d9d5198fb00a3385e95c88d0d9f0db5f70ce290e287f60b150bbe35. Feb 13 20:01:54.309200 kubelet[2639]: E0213 20:01:54.309168 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:54.310829 containerd[1466]: time="2025-02-13T20:01:54.310793063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-prrfp,Uid:fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947,Namespace:calico-system,Attempt:0,}" Feb 13 20:01:54.344577 containerd[1466]: time="2025-02-13T20:01:54.344443801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:01:54.344577 containerd[1466]: time="2025-02-13T20:01:54.344493914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:01:54.344577 containerd[1466]: time="2025-02-13T20:01:54.344507811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:01:54.345907 containerd[1466]: time="2025-02-13T20:01:54.345596115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:01:54.369446 systemd[1]: Started cri-containerd-04447f200bf5261fa831e8ba2aadbd40249d92c7558b87dc8c57d0b418989c0b.scope - libcontainer container 04447f200bf5261fa831e8ba2aadbd40249d92c7558b87dc8c57d0b418989c0b. 
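The long runs of driver-call.go/plugins.go errors above all stem from one condition visible in the entries themselves: the FlexVolume probe tries to execute /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the `init` argument, the executable is not found, so the captured output is empty and decoding that empty output as JSON fails. A tiny, self-contained reproduction of that decode error (illustrative only, not the kubelet's code path):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// An absent FlexVolume driver produces no output, so the caller ends up
	// unmarshalling an empty byte slice; encoding/json reports exactly the
	// error text seen in the log.
	var status map[string]interface{}
	err := json.Unmarshal([]byte(""), &status)
	fmt.Println(err) // unexpected end of JSON input
}
```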
Feb 13 20:01:54.369955 containerd[1466]: time="2025-02-13T20:01:54.369923660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c9768df96-s7xhn,Uid:7b5364b4-50fd-4ce1-b857-5ce18dacc684,Namespace:calico-system,Attempt:0,} returns sandbox id \"79abb8fe1d9d5198fb00a3385e95c88d0d9f0db5f70ce290e287f60b150bbe35\"" Feb 13 20:01:54.371017 kubelet[2639]: E0213 20:01:54.370986 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:54.372581 containerd[1466]: time="2025-02-13T20:01:54.372537379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 20:01:54.395514 containerd[1466]: time="2025-02-13T20:01:54.395459224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-prrfp,Uid:fa9c4d8f-fd1a-4ae4-9fa1-3fae42f83947,Namespace:calico-system,Attempt:0,} returns sandbox id \"04447f200bf5261fa831e8ba2aadbd40249d92c7558b87dc8c57d0b418989c0b\"" Feb 13 20:01:54.396297 kubelet[2639]: E0213 20:01:54.396273 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:01:56.022369 kubelet[2639]: E0213 20:01:56.022316 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfdrh" podUID="2516eb1f-4a76-4950-92c4-3225425d63a6" Feb 13 20:01:58.023137 kubelet[2639]: E0213 20:01:58.023065 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfdrh" podUID="2516eb1f-4a76-4950-92c4-3225425d63a6" Feb 13 20:01:59.596981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3726722690.mount: Deactivated successfully. 
Feb 13 20:02:00.023028 kubelet[2639]: E0213 20:02:00.022975 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfdrh" podUID="2516eb1f-4a76-4950-92c4-3225425d63a6" Feb 13 20:02:00.162361 containerd[1466]: time="2025-02-13T20:02:00.162298857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:02:00.163825 containerd[1466]: time="2025-02-13T20:02:00.163754369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Feb 13 20:02:00.165299 containerd[1466]: time="2025-02-13T20:02:00.165257420Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:02:00.170465 containerd[1466]: time="2025-02-13T20:02:00.170413529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:02:00.171077 containerd[1466]: time="2025-02-13T20:02:00.171019677Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 5.798434147s" Feb 13 20:02:00.171077 containerd[1466]: time="2025-02-13T20:02:00.171071825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 20:02:00.178725 containerd[1466]: time="2025-02-13T20:02:00.178671151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 20:02:00.195072 containerd[1466]: time="2025-02-13T20:02:00.194942116Z" level=info msg="CreateContainer within sandbox \"79abb8fe1d9d5198fb00a3385e95c88d0d9f0db5f70ce290e287f60b150bbe35\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 20:02:00.212392 containerd[1466]: time="2025-02-13T20:02:00.212346509Z" level=info msg="CreateContainer within sandbox \"79abb8fe1d9d5198fb00a3385e95c88d0d9f0db5f70ce290e287f60b150bbe35\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"dcfc0d7ca4f8a6bffc3df10ffbcdaeba6325c1fd63da200dbcc9550762e42ca9\"" Feb 13 20:02:00.215890 containerd[1466]: time="2025-02-13T20:02:00.215865104Z" level=info msg="StartContainer for \"dcfc0d7ca4f8a6bffc3df10ffbcdaeba6325c1fd63da200dbcc9550762e42ca9\"" Feb 13 20:02:00.244361 systemd[1]: Started cri-containerd-dcfc0d7ca4f8a6bffc3df10ffbcdaeba6325c1fd63da200dbcc9550762e42ca9.scope - libcontainer container dcfc0d7ca4f8a6bffc3df10ffbcdaeba6325c1fd63da200dbcc9550762e42ca9. Feb 13 20:02:00.381843 containerd[1466]: time="2025-02-13T20:02:00.381696223Z" level=info msg="StartContainer for \"dcfc0d7ca4f8a6bffc3df10ffbcdaeba6325c1fd63da200dbcc9550762e42ca9\" returns successfully" Feb 13 20:02:00.808535 systemd[1]: Started sshd@9-10.0.0.119:22-10.0.0.1:47886.service - OpenSSH per-connection server daemon (10.0.0.1:47886). 
Feb 13 20:02:00.846624 sshd[3214]: Accepted publickey for core from 10.0.0.1 port 47886 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:02:00.848305 sshd[3214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:00.853136 systemd-logind[1454]: New session 10 of user core. Feb 13 20:02:00.865274 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 20:02:00.973950 sshd[3214]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:00.978386 systemd[1]: sshd@9-10.0.0.119:22-10.0.0.1:47886.service: Deactivated successfully. Feb 13 20:02:00.980422 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:02:00.981078 systemd-logind[1454]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:02:00.981928 systemd-logind[1454]: Removed session 10. Feb 13 20:02:01.084961 kubelet[2639]: E0213 20:02:01.084851 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:01.093348 kubelet[2639]: I0213 20:02:01.093289 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-c9768df96-s7xhn" podStartSLOduration=2.29353485 podStartE2EDuration="8.093271663s" podCreationTimestamp="2025-02-13 20:01:53 +0000 UTC" firstStartedPulling="2025-02-13 20:01:54.372134022 +0000 UTC m=+21.430870933" lastFinishedPulling="2025-02-13 20:02:00.171870815 +0000 UTC m=+27.230607746" observedRunningTime="2025-02-13 20:02:01.092924972 +0000 UTC m=+28.151661903" watchObservedRunningTime="2025-02-13 20:02:01.093271663 +0000 UTC m=+28.152008574" Feb 13 20:02:01.178638 kubelet[2639]: E0213 20:02:01.178586 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.178638 kubelet[2639]: W0213 20:02:01.178628 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.178768 kubelet[2639]: E0213 20:02:01.178658 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:01.179046 kubelet[2639]: E0213 20:02:01.179021 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.179046 kubelet[2639]: W0213 20:02:01.179038 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.179129 kubelet[2639]: E0213 20:02:01.179049 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:02:01.179371 kubelet[2639]: E0213 20:02:01.179347 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.179371 kubelet[2639]: W0213 20:02:01.179364 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.179435 kubelet[2639]: E0213 20:02:01.179377 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:01.179643 kubelet[2639]: E0213 20:02:01.179625 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.179643 kubelet[2639]: W0213 20:02:01.179641 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.179698 kubelet[2639]: E0213 20:02:01.179653 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:01.179903 kubelet[2639]: E0213 20:02:01.179889 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.179903 kubelet[2639]: W0213 20:02:01.179900 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.179970 kubelet[2639]: E0213 20:02:01.179910 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:01.180129 kubelet[2639]: E0213 20:02:01.180114 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.180129 kubelet[2639]: W0213 20:02:01.180126 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.180190 kubelet[2639]: E0213 20:02:01.180134 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:01.180425 kubelet[2639]: E0213 20:02:01.180388 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.180425 kubelet[2639]: W0213 20:02:01.180415 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.180480 kubelet[2639]: E0213 20:02:01.180425 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:02:01.180682 kubelet[2639]: E0213 20:02:01.180665 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.180709 kubelet[2639]: W0213 20:02:01.180680 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.180709 kubelet[2639]: E0213 20:02:01.180691 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:01.180941 kubelet[2639]: E0213 20:02:01.180925 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.180976 kubelet[2639]: W0213 20:02:01.180940 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.180976 kubelet[2639]: E0213 20:02:01.180952 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:01.181183 kubelet[2639]: E0213 20:02:01.181168 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.181183 kubelet[2639]: W0213 20:02:01.181180 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.181239 kubelet[2639]: E0213 20:02:01.181189 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:01.181413 kubelet[2639]: E0213 20:02:01.181399 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.181413 kubelet[2639]: W0213 20:02:01.181412 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.181474 kubelet[2639]: E0213 20:02:01.181420 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:01.181626 kubelet[2639]: E0213 20:02:01.181613 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.181626 kubelet[2639]: W0213 20:02:01.181623 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.181670 kubelet[2639]: E0213 20:02:01.181630 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:02:01.181843 kubelet[2639]: E0213 20:02:01.181830 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.181843 kubelet[2639]: W0213 20:02:01.181840 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.181890 kubelet[2639]: E0213 20:02:01.181848 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:01.182060 kubelet[2639]: E0213 20:02:01.182047 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.182060 kubelet[2639]: W0213 20:02:01.182058 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.182128 kubelet[2639]: E0213 20:02:01.182067 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:01.182295 kubelet[2639]: E0213 20:02:01.182280 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.182295 kubelet[2639]: W0213 20:02:01.182291 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.182351 kubelet[2639]: E0213 20:02:01.182299 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:01.196728 kubelet[2639]: E0213 20:02:01.196666 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.196728 kubelet[2639]: W0213 20:02:01.196695 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.196728 kubelet[2639]: E0213 20:02:01.196717 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:01.197060 kubelet[2639]: E0213 20:02:01.197034 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.197060 kubelet[2639]: W0213 20:02:01.197048 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.197142 kubelet[2639]: E0213 20:02:01.197064 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:02:01.197371 kubelet[2639]: E0213 20:02:01.197353 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.197371 kubelet[2639]: W0213 20:02:01.197366 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.197438 kubelet[2639]: E0213 20:02:01.197386 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:01.197690 kubelet[2639]: E0213 20:02:01.197673 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.197690 kubelet[2639]: W0213 20:02:01.197685 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.197745 kubelet[2639]: E0213 20:02:01.197703 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:01.197938 kubelet[2639]: E0213 20:02:01.197921 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.197938 kubelet[2639]: W0213 20:02:01.197934 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.197991 kubelet[2639]: E0213 20:02:01.197949 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:01.198197 kubelet[2639]: E0213 20:02:01.198179 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.198197 kubelet[2639]: W0213 20:02:01.198192 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.198272 kubelet[2639]: E0213 20:02:01.198224 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:01.198409 kubelet[2639]: E0213 20:02:01.198385 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.198409 kubelet[2639]: W0213 20:02:01.198403 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.198463 kubelet[2639]: E0213 20:02:01.198435 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:02:01.198605 kubelet[2639]: E0213 20:02:01.198589 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.198605 kubelet[2639]: W0213 20:02:01.198599 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.198671 kubelet[2639]: E0213 20:02:01.198624 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:01.198787 kubelet[2639]: E0213 20:02:01.198772 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.198787 kubelet[2639]: W0213 20:02:01.198783 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.198848 kubelet[2639]: E0213 20:02:01.198799 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:01.199141 kubelet[2639]: E0213 20:02:01.199090 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.199183 kubelet[2639]: W0213 20:02:01.199137 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.199183 kubelet[2639]: E0213 20:02:01.199178 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:01.199406 kubelet[2639]: E0213 20:02:01.199382 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.199406 kubelet[2639]: W0213 20:02:01.199401 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.199458 kubelet[2639]: E0213 20:02:01.199416 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:01.199662 kubelet[2639]: E0213 20:02:01.199648 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.199662 kubelet[2639]: W0213 20:02:01.199658 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.199717 kubelet[2639]: E0213 20:02:01.199674 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:02:01.200005 kubelet[2639]: E0213 20:02:01.199978 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.200005 kubelet[2639]: W0213 20:02:01.199996 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.200050 kubelet[2639]: E0213 20:02:01.200013 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:01.200316 kubelet[2639]: E0213 20:02:01.200277 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.200316 kubelet[2639]: W0213 20:02:01.200303 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.200530 kubelet[2639]: E0213 20:02:01.200345 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:01.200579 kubelet[2639]: E0213 20:02:01.200561 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.200579 kubelet[2639]: W0213 20:02:01.200575 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.200650 kubelet[2639]: E0213 20:02:01.200606 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:01.200790 kubelet[2639]: E0213 20:02:01.200772 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.200790 kubelet[2639]: W0213 20:02:01.200782 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.200862 kubelet[2639]: E0213 20:02:01.200801 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:01.201076 kubelet[2639]: E0213 20:02:01.201053 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.201076 kubelet[2639]: W0213 20:02:01.201064 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.201076 kubelet[2639]: E0213 20:02:01.201073 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:02:01.201538 kubelet[2639]: E0213 20:02:01.201521 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:01.201538 kubelet[2639]: W0213 20:02:01.201531 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:01.201538 kubelet[2639]: E0213 20:02:01.201540 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.023285 kubelet[2639]: E0213 20:02:02.023225 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfdrh" podUID="2516eb1f-4a76-4950-92c4-3225425d63a6" Feb 13 20:02:02.086116 kubelet[2639]: I0213 20:02:02.086074 2639 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:02:02.086741 kubelet[2639]: E0213 20:02:02.086715 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:02.088821 kubelet[2639]: E0213 20:02:02.088775 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.088821 kubelet[2639]: W0213 20:02:02.088806 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.088821 kubelet[2639]: E0213 20:02:02.088829 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.089060 kubelet[2639]: E0213 20:02:02.089046 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.089085 kubelet[2639]: W0213 20:02:02.089058 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.089085 kubelet[2639]: E0213 20:02:02.089079 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.089321 kubelet[2639]: E0213 20:02:02.089309 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.089361 kubelet[2639]: W0213 20:02:02.089321 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.089361 kubelet[2639]: E0213 20:02:02.089332 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:02:02.089640 kubelet[2639]: E0213 20:02:02.089627 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.089640 kubelet[2639]: W0213 20:02:02.089640 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.089704 kubelet[2639]: E0213 20:02:02.089650 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.089889 kubelet[2639]: E0213 20:02:02.089877 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.089889 kubelet[2639]: W0213 20:02:02.089887 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.089948 kubelet[2639]: E0213 20:02:02.089895 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.090123 kubelet[2639]: E0213 20:02:02.090088 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.090159 kubelet[2639]: W0213 20:02:02.090123 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.090159 kubelet[2639]: E0213 20:02:02.090136 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.090354 kubelet[2639]: E0213 20:02:02.090343 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.090393 kubelet[2639]: W0213 20:02:02.090354 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.090393 kubelet[2639]: E0213 20:02:02.090363 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.090619 kubelet[2639]: E0213 20:02:02.090582 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.090619 kubelet[2639]: W0213 20:02:02.090594 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.090619 kubelet[2639]: E0213 20:02:02.090603 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:02:02.090856 kubelet[2639]: E0213 20:02:02.090839 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.090856 kubelet[2639]: W0213 20:02:02.090851 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.090918 kubelet[2639]: E0213 20:02:02.090860 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.091137 kubelet[2639]: E0213 20:02:02.091124 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.091137 kubelet[2639]: W0213 20:02:02.091135 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.091261 kubelet[2639]: E0213 20:02:02.091146 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.091377 kubelet[2639]: E0213 20:02:02.091356 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.091412 kubelet[2639]: W0213 20:02:02.091376 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.091412 kubelet[2639]: E0213 20:02:02.091386 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.091611 kubelet[2639]: E0213 20:02:02.091600 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.091632 kubelet[2639]: W0213 20:02:02.091609 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.091632 kubelet[2639]: E0213 20:02:02.091620 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.091833 kubelet[2639]: E0213 20:02:02.091822 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.091865 kubelet[2639]: W0213 20:02:02.091832 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.091865 kubelet[2639]: E0213 20:02:02.091843 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:02:02.092060 kubelet[2639]: E0213 20:02:02.092049 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.092060 kubelet[2639]: W0213 20:02:02.092058 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.092123 kubelet[2639]: E0213 20:02:02.092066 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.092287 kubelet[2639]: E0213 20:02:02.092276 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.092287 kubelet[2639]: W0213 20:02:02.092285 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.092338 kubelet[2639]: E0213 20:02:02.092294 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.102778 kubelet[2639]: E0213 20:02:02.102739 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.102778 kubelet[2639]: W0213 20:02:02.102764 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.102778 kubelet[2639]: E0213 20:02:02.102784 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.103027 kubelet[2639]: E0213 20:02:02.103000 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.103027 kubelet[2639]: W0213 20:02:02.103012 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.103027 kubelet[2639]: E0213 20:02:02.103025 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.103349 kubelet[2639]: E0213 20:02:02.103327 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.103349 kubelet[2639]: W0213 20:02:02.103348 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.103427 kubelet[2639]: E0213 20:02:02.103384 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:02:02.103591 kubelet[2639]: E0213 20:02:02.103572 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.103591 kubelet[2639]: W0213 20:02:02.103581 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.103672 kubelet[2639]: E0213 20:02:02.103592 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.103805 kubelet[2639]: E0213 20:02:02.103790 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.103805 kubelet[2639]: W0213 20:02:02.103799 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.103876 kubelet[2639]: E0213 20:02:02.103811 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.104030 kubelet[2639]: E0213 20:02:02.104009 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.104030 kubelet[2639]: W0213 20:02:02.104023 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.104109 kubelet[2639]: E0213 20:02:02.104037 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.104316 kubelet[2639]: E0213 20:02:02.104299 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.104316 kubelet[2639]: W0213 20:02:02.104309 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.104403 kubelet[2639]: E0213 20:02:02.104342 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.104536 kubelet[2639]: E0213 20:02:02.104520 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.104536 kubelet[2639]: W0213 20:02:02.104531 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.104607 kubelet[2639]: E0213 20:02:02.104561 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:02:02.104765 kubelet[2639]: E0213 20:02:02.104749 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.104765 kubelet[2639]: W0213 20:02:02.104758 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.104829 kubelet[2639]: E0213 20:02:02.104771 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.105066 kubelet[2639]: E0213 20:02:02.105042 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.105066 kubelet[2639]: W0213 20:02:02.105055 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.105171 kubelet[2639]: E0213 20:02:02.105071 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.105323 kubelet[2639]: E0213 20:02:02.105309 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.105323 kubelet[2639]: W0213 20:02:02.105320 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.105405 kubelet[2639]: E0213 20:02:02.105336 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.105572 kubelet[2639]: E0213 20:02:02.105553 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.105572 kubelet[2639]: W0213 20:02:02.105565 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.105648 kubelet[2639]: E0213 20:02:02.105579 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.105800 kubelet[2639]: E0213 20:02:02.105785 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.105800 kubelet[2639]: W0213 20:02:02.105794 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.105865 kubelet[2639]: E0213 20:02:02.105808 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:02:02.106009 kubelet[2639]: E0213 20:02:02.105993 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.106009 kubelet[2639]: W0213 20:02:02.106003 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.106086 kubelet[2639]: E0213 20:02:02.106016 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.106248 kubelet[2639]: E0213 20:02:02.106233 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.106248 kubelet[2639]: W0213 20:02:02.106243 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.106308 kubelet[2639]: E0213 20:02:02.106255 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.106559 kubelet[2639]: E0213 20:02:02.106542 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.106559 kubelet[2639]: W0213 20:02:02.106555 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.106640 kubelet[2639]: E0213 20:02:02.106575 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.106858 kubelet[2639]: E0213 20:02:02.106838 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.106858 kubelet[2639]: W0213 20:02:02.106850 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.106918 kubelet[2639]: E0213 20:02:02.106863 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:02:02.107082 kubelet[2639]: E0213 20:02:02.107068 2639 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:02:02.107082 kubelet[2639]: W0213 20:02:02.107079 2639 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:02:02.107145 kubelet[2639]: E0213 20:02:02.107087 2639 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:02:02.621558 containerd[1466]: time="2025-02-13T20:02:02.621504608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:02:02.622413 containerd[1466]: time="2025-02-13T20:02:02.622367628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Feb 13 20:02:02.623651 containerd[1466]: time="2025-02-13T20:02:02.623621983Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:02:02.625960 containerd[1466]: time="2025-02-13T20:02:02.625929223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:02:02.626578 containerd[1466]: time="2025-02-13T20:02:02.626542594Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.447819084s" Feb 13 20:02:02.626613 containerd[1466]: time="2025-02-13T20:02:02.626577590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 20:02:02.628839 containerd[1466]: time="2025-02-13T20:02:02.628809710Z" level=info msg="CreateContainer within sandbox \"04447f200bf5261fa831e8ba2aadbd40249d92c7558b87dc8c57d0b418989c0b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 20:02:02.644933 containerd[1466]: time="2025-02-13T20:02:02.644886596Z" level=info msg="CreateContainer within sandbox \"04447f200bf5261fa831e8ba2aadbd40249d92c7558b87dc8c57d0b418989c0b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a16440eccb2e5494fdfc5face66b672fd156fe0b479d878f0510e48728aa6a59\"" Feb 13 20:02:02.648831 containerd[1466]: time="2025-02-13T20:02:02.648205456Z" level=info msg="StartContainer for \"a16440eccb2e5494fdfc5face66b672fd156fe0b479d878f0510e48728aa6a59\"" Feb 13 20:02:02.683323 systemd[1]: Started cri-containerd-a16440eccb2e5494fdfc5face66b672fd156fe0b479d878f0510e48728aa6a59.scope - libcontainer container a16440eccb2e5494fdfc5face66b672fd156fe0b479d878f0510e48728aa6a59. Feb 13 20:02:02.723267 systemd[1]: cri-containerd-a16440eccb2e5494fdfc5face66b672fd156fe0b479d878f0510e48728aa6a59.scope: Deactivated successfully. Feb 13 20:02:02.994539 containerd[1466]: time="2025-02-13T20:02:02.994419605Z" level=info msg="StartContainer for \"a16440eccb2e5494fdfc5face66b672fd156fe0b479d878f0510e48728aa6a59\" returns successfully" Feb 13 20:02:03.014576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a16440eccb2e5494fdfc5face66b672fd156fe0b479d878f0510e48728aa6a59-rootfs.mount: Deactivated successfully. 
Feb 13 20:02:03.089178 kubelet[2639]: E0213 20:02:03.089139 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:03.227189 containerd[1466]: time="2025-02-13T20:02:03.227086551Z" level=info msg="shim disconnected" id=a16440eccb2e5494fdfc5face66b672fd156fe0b479d878f0510e48728aa6a59 namespace=k8s.io Feb 13 20:02:03.227189 containerd[1466]: time="2025-02-13T20:02:03.227171861Z" level=warning msg="cleaning up after shim disconnected" id=a16440eccb2e5494fdfc5face66b672fd156fe0b479d878f0510e48728aa6a59 namespace=k8s.io Feb 13 20:02:03.227189 containerd[1466]: time="2025-02-13T20:02:03.227180897Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:02:04.023237 kubelet[2639]: E0213 20:02:04.023194 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfdrh" podUID="2516eb1f-4a76-4950-92c4-3225425d63a6" Feb 13 20:02:04.092813 kubelet[2639]: E0213 20:02:04.092045 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:04.095340 containerd[1466]: time="2025-02-13T20:02:04.095290790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 20:02:05.986573 systemd[1]: Started sshd@10-10.0.0.119:22-10.0.0.1:47896.service - OpenSSH per-connection server daemon (10.0.0.1:47896). Feb 13 20:02:06.022976 kubelet[2639]: E0213 20:02:06.022919 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfdrh" podUID="2516eb1f-4a76-4950-92c4-3225425d63a6" Feb 13 20:02:06.024125 sshd[3393]: Accepted publickey for core from 10.0.0.1 port 47896 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:02:06.026081 sshd[3393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:06.030143 systemd-logind[1454]: New session 11 of user core. Feb 13 20:02:06.036309 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:02:06.165213 sshd[3393]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:06.170399 systemd[1]: sshd@10-10.0.0.119:22-10.0.0.1:47896.service: Deactivated successfully. Feb 13 20:02:06.172562 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:02:06.173455 systemd-logind[1454]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:02:06.174623 systemd-logind[1454]: Removed session 11. 
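(Editor's note: the recurring "Nameserver limits exceeded" entry is the kubelet trimming the host's resolver list when it assembles a pod's resolv.conf. The classic glibc resolver honours at most three nameservers, so anything beyond the first three is dropped and only "1.1.1.1 1.0.0.1 8.8.8.8" is applied. The sketch below illustrates that truncation under the assumption of a limit of three (MAXNS); it does not quote kubelet source.)

```go
package main

import "fmt"

// maxNameservers mirrors the classic resolver limit (MAXNS = 3) that the
// kubelet warning above alludes to; assumed here purely for illustration.
const maxNameservers = 3

func applyNameserverLimit(ns []string) (kept, omitted []string) {
	if len(ns) <= maxNameservers {
		return ns, nil
	}
	return ns[:maxNameservers], ns[maxNameservers:]
}

func main() {
	// Hypothetical host resolv.conf with one entry more than the limit.
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}

	kept, omitted := applyNameserverLimit(host)
	fmt.Println("applied nameserver line:", kept) // [1.1.1.1 1.0.0.1 8.8.8.8]
	fmt.Println("omitted:", omitted)              // [8.8.4.4]
}
```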
Feb 13 20:02:08.022750 kubelet[2639]: E0213 20:02:08.022646 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfdrh" podUID="2516eb1f-4a76-4950-92c4-3225425d63a6" Feb 13 20:02:09.024733 containerd[1466]: time="2025-02-13T20:02:09.024677637Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:02:09.025849 containerd[1466]: time="2025-02-13T20:02:09.025802267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 20:02:09.027202 containerd[1466]: time="2025-02-13T20:02:09.027156328Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:02:09.029760 containerd[1466]: time="2025-02-13T20:02:09.029701292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:02:09.030583 containerd[1466]: time="2025-02-13T20:02:09.030525629Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.935180938s" Feb 13 20:02:09.030583 containerd[1466]: time="2025-02-13T20:02:09.030564352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 20:02:09.032616 containerd[1466]: time="2025-02-13T20:02:09.032579613Z" level=info msg="CreateContainer within sandbox \"04447f200bf5261fa831e8ba2aadbd40249d92c7558b87dc8c57d0b418989c0b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 20:02:09.049492 containerd[1466]: time="2025-02-13T20:02:09.049425871Z" level=info msg="CreateContainer within sandbox \"04447f200bf5261fa831e8ba2aadbd40249d92c7558b87dc8c57d0b418989c0b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c9d80ee8b0ec0df7c17b0b6356d06ad052209e0047ae93406b4de867d5439a8d\"" Feb 13 20:02:09.050039 containerd[1466]: time="2025-02-13T20:02:09.049996342Z" level=info msg="StartContainer for \"c9d80ee8b0ec0df7c17b0b6356d06ad052209e0047ae93406b4de867d5439a8d\"" Feb 13 20:02:09.084265 systemd[1]: Started cri-containerd-c9d80ee8b0ec0df7c17b0b6356d06ad052209e0047ae93406b4de867d5439a8d.scope - libcontainer container c9d80ee8b0ec0df7c17b0b6356d06ad052209e0047ae93406b4de867d5439a8d. 
Feb 13 20:02:09.169287 containerd[1466]: time="2025-02-13T20:02:09.169223714Z" level=info msg="StartContainer for \"c9d80ee8b0ec0df7c17b0b6356d06ad052209e0047ae93406b4de867d5439a8d\" returns successfully" Feb 13 20:02:10.022432 kubelet[2639]: E0213 20:02:10.022384 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gfdrh" podUID="2516eb1f-4a76-4950-92c4-3225425d63a6" Feb 13 20:02:10.105171 kubelet[2639]: E0213 20:02:10.105115 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:10.831196 containerd[1466]: time="2025-02-13T20:02:10.831149762Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:02:10.834231 systemd[1]: cri-containerd-c9d80ee8b0ec0df7c17b0b6356d06ad052209e0047ae93406b4de867d5439a8d.scope: Deactivated successfully. Feb 13 20:02:10.849957 kubelet[2639]: I0213 20:02:10.849794 2639 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 20:02:10.855727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9d80ee8b0ec0df7c17b0b6356d06ad052209e0047ae93406b4de867d5439a8d-rootfs.mount: Deactivated successfully. Feb 13 20:02:10.879974 kubelet[2639]: I0213 20:02:10.879912 2639 topology_manager.go:215] "Topology Admit Handler" podUID="3184b51f-d611-4fe4-a9e8-2390a1c90a5a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-pnbb8" Feb 13 20:02:10.884840 kubelet[2639]: I0213 20:02:10.883523 2639 topology_manager.go:215] "Topology Admit Handler" podUID="49be02d3-1172-42eb-afb4-696e59a6f97d" podNamespace="calico-system" podName="calico-kube-controllers-7866456b95-h7zp8" Feb 13 20:02:10.884840 kubelet[2639]: I0213 20:02:10.884748 2639 topology_manager.go:215] "Topology Admit Handler" podUID="ae5d8c7e-8ef1-493b-98f0-c5400cc3d726" podNamespace="calico-apiserver" podName="calico-apiserver-7f5ffc687c-5bd4w" Feb 13 20:02:10.884840 kubelet[2639]: I0213 20:02:10.884833 2639 topology_manager.go:215] "Topology Admit Handler" podUID="9053966c-63f0-49bc-a5f7-8526a79f7772" podNamespace="kube-system" podName="coredns-7db6d8ff4d-x9rtf" Feb 13 20:02:10.885655 kubelet[2639]: I0213 20:02:10.885629 2639 topology_manager.go:215] "Topology Admit Handler" podUID="1fd0a0c8-d966-43b0-a129-ee14e139b86b" podNamespace="calico-apiserver" podName="calico-apiserver-7f5ffc687c-lbc64" Feb 13 20:02:10.887956 kubelet[2639]: W0213 20:02:10.887043 2639 reflector.go:547] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object Feb 13 20:02:10.887956 kubelet[2639]: E0213 20:02:10.887077 2639 reflector.go:150] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no 
relationship found between node 'localhost' and this object Feb 13 20:02:10.892337 systemd[1]: Created slice kubepods-burstable-pod3184b51f_d611_4fe4_a9e8_2390a1c90a5a.slice - libcontainer container kubepods-burstable-pod3184b51f_d611_4fe4_a9e8_2390a1c90a5a.slice. Feb 13 20:02:10.899704 systemd[1]: Created slice kubepods-besteffort-pod49be02d3_1172_42eb_afb4_696e59a6f97d.slice - libcontainer container kubepods-besteffort-pod49be02d3_1172_42eb_afb4_696e59a6f97d.slice. Feb 13 20:02:10.904435 systemd[1]: Created slice kubepods-burstable-pod9053966c_63f0_49bc_a5f7_8526a79f7772.slice - libcontainer container kubepods-burstable-pod9053966c_63f0_49bc_a5f7_8526a79f7772.slice. Feb 13 20:02:10.910149 systemd[1]: Created slice kubepods-besteffort-podae5d8c7e_8ef1_493b_98f0_c5400cc3d726.slice - libcontainer container kubepods-besteffort-podae5d8c7e_8ef1_493b_98f0_c5400cc3d726.slice. Feb 13 20:02:10.914836 systemd[1]: Created slice kubepods-besteffort-pod1fd0a0c8_d966_43b0_a129_ee14e139b86b.slice - libcontainer container kubepods-besteffort-pod1fd0a0c8_d966_43b0_a129_ee14e139b86b.slice. Feb 13 20:02:10.968149 kubelet[2639]: I0213 20:02:10.968003 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1fd0a0c8-d966-43b0-a129-ee14e139b86b-calico-apiserver-certs\") pod \"calico-apiserver-7f5ffc687c-lbc64\" (UID: \"1fd0a0c8-d966-43b0-a129-ee14e139b86b\") " pod="calico-apiserver/calico-apiserver-7f5ffc687c-lbc64" Feb 13 20:02:10.968149 kubelet[2639]: I0213 20:02:10.968073 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49be02d3-1172-42eb-afb4-696e59a6f97d-tigera-ca-bundle\") pod \"calico-kube-controllers-7866456b95-h7zp8\" (UID: \"49be02d3-1172-42eb-afb4-696e59a6f97d\") " pod="calico-system/calico-kube-controllers-7866456b95-h7zp8" Feb 13 20:02:10.968149 kubelet[2639]: I0213 20:02:10.968113 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9053966c-63f0-49bc-a5f7-8526a79f7772-config-volume\") pod \"coredns-7db6d8ff4d-x9rtf\" (UID: \"9053966c-63f0-49bc-a5f7-8526a79f7772\") " pod="kube-system/coredns-7db6d8ff4d-x9rtf" Feb 13 20:02:10.968489 kubelet[2639]: I0213 20:02:10.968172 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjp58\" (UniqueName: \"kubernetes.io/projected/ae5d8c7e-8ef1-493b-98f0-c5400cc3d726-kube-api-access-mjp58\") pod \"calico-apiserver-7f5ffc687c-5bd4w\" (UID: \"ae5d8c7e-8ef1-493b-98f0-c5400cc3d726\") " pod="calico-apiserver/calico-apiserver-7f5ffc687c-5bd4w" Feb 13 20:02:10.968489 kubelet[2639]: I0213 20:02:10.968197 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpfsz\" (UniqueName: \"kubernetes.io/projected/3184b51f-d611-4fe4-a9e8-2390a1c90a5a-kube-api-access-xpfsz\") pod \"coredns-7db6d8ff4d-pnbb8\" (UID: \"3184b51f-d611-4fe4-a9e8-2390a1c90a5a\") " pod="kube-system/coredns-7db6d8ff4d-pnbb8" Feb 13 20:02:10.968489 kubelet[2639]: I0213 20:02:10.968219 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9n54\" (UniqueName: \"kubernetes.io/projected/49be02d3-1172-42eb-afb4-696e59a6f97d-kube-api-access-p9n54\") pod \"calico-kube-controllers-7866456b95-h7zp8\" (UID: 
\"49be02d3-1172-42eb-afb4-696e59a6f97d\") " pod="calico-system/calico-kube-controllers-7866456b95-h7zp8" Feb 13 20:02:10.968489 kubelet[2639]: I0213 20:02:10.968246 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ae5d8c7e-8ef1-493b-98f0-c5400cc3d726-calico-apiserver-certs\") pod \"calico-apiserver-7f5ffc687c-5bd4w\" (UID: \"ae5d8c7e-8ef1-493b-98f0-c5400cc3d726\") " pod="calico-apiserver/calico-apiserver-7f5ffc687c-5bd4w" Feb 13 20:02:10.968489 kubelet[2639]: I0213 20:02:10.968267 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjqvm\" (UniqueName: \"kubernetes.io/projected/1fd0a0c8-d966-43b0-a129-ee14e139b86b-kube-api-access-sjqvm\") pod \"calico-apiserver-7f5ffc687c-lbc64\" (UID: \"1fd0a0c8-d966-43b0-a129-ee14e139b86b\") " pod="calico-apiserver/calico-apiserver-7f5ffc687c-lbc64" Feb 13 20:02:10.968622 kubelet[2639]: I0213 20:02:10.968287 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqbqb\" (UniqueName: \"kubernetes.io/projected/9053966c-63f0-49bc-a5f7-8526a79f7772-kube-api-access-wqbqb\") pod \"coredns-7db6d8ff4d-x9rtf\" (UID: \"9053966c-63f0-49bc-a5f7-8526a79f7772\") " pod="kube-system/coredns-7db6d8ff4d-x9rtf" Feb 13 20:02:10.968622 kubelet[2639]: I0213 20:02:10.968309 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3184b51f-d611-4fe4-a9e8-2390a1c90a5a-config-volume\") pod \"coredns-7db6d8ff4d-pnbb8\" (UID: \"3184b51f-d611-4fe4-a9e8-2390a1c90a5a\") " pod="kube-system/coredns-7db6d8ff4d-pnbb8" Feb 13 20:02:10.969804 kubelet[2639]: W0213 20:02:10.969786 2639 reflector.go:547] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object Feb 13 20:02:10.969862 kubelet[2639]: E0213 20:02:10.969818 2639 reflector.go:150] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object Feb 13 20:02:11.106404 kubelet[2639]: E0213 20:02:11.106298 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:11.177661 systemd[1]: Started sshd@11-10.0.0.119:22-10.0.0.1:36150.service - OpenSSH per-connection server daemon (10.0.0.1:36150). 
Feb 13 20:02:11.192044 containerd[1466]: time="2025-02-13T20:02:11.191983469Z" level=info msg="shim disconnected" id=c9d80ee8b0ec0df7c17b0b6356d06ad052209e0047ae93406b4de867d5439a8d namespace=k8s.io Feb 13 20:02:11.192044 containerd[1466]: time="2025-02-13T20:02:11.192040155Z" level=warning msg="cleaning up after shim disconnected" id=c9d80ee8b0ec0df7c17b0b6356d06ad052209e0047ae93406b4de867d5439a8d namespace=k8s.io Feb 13 20:02:11.192206 containerd[1466]: time="2025-02-13T20:02:11.192049202Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:02:11.196328 kubelet[2639]: E0213 20:02:11.196289 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:11.197412 containerd[1466]: time="2025-02-13T20:02:11.196982638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pnbb8,Uid:3184b51f-d611-4fe4-a9e8-2390a1c90a5a,Namespace:kube-system,Attempt:0,}" Feb 13 20:02:11.202515 containerd[1466]: time="2025-02-13T20:02:11.202475413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7866456b95-h7zp8,Uid:49be02d3-1172-42eb-afb4-696e59a6f97d,Namespace:calico-system,Attempt:0,}" Feb 13 20:02:11.206873 kubelet[2639]: E0213 20:02:11.206853 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:11.207360 containerd[1466]: time="2025-02-13T20:02:11.207323549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x9rtf,Uid:9053966c-63f0-49bc-a5f7-8526a79f7772,Namespace:kube-system,Attempt:0,}" Feb 13 20:02:11.313406 sshd[3468]: Accepted publickey for core from 10.0.0.1 port 36150 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:02:11.314829 sshd[3468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:11.323933 systemd-logind[1454]: New session 12 of user core. Feb 13 20:02:11.328451 systemd[1]: Started session-12.scope - Session 12 of User core. 
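The recurring "Nameserver limits exceeded" warnings come from the kubelet trimming the node's resolver list down to the three nameservers it will hand to pods (1.1.1.1, 1.0.0.1 and 8.8.8.8 here) and dropping the rest. A rough Go sketch of that trimming, assuming a limit of three and standard resolv.conf "nameserver" lines:

package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

const maxNameservers = 3 // limit applied when building a pod's resolv.conf

func main() {
    f, err := os.Open("/etc/resolv.conf")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    defer f.Close()

    var servers []string
    sc := bufio.NewScanner(f)
    for sc.Scan() {
        fields := strings.Fields(sc.Text())
        if len(fields) >= 2 && fields[0] == "nameserver" {
            servers = append(servers, fields[1])
        }
    }
    if err := sc.Err(); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    if len(servers) > maxNameservers {
        fmt.Printf("Nameserver limits exceeded: applying %v, omitting %v\n",
            servers[:maxNameservers], servers[maxNameservers:])
        return
    }
    fmt.Println("nameservers:", servers)
}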
Feb 13 20:02:11.362529 containerd[1466]: time="2025-02-13T20:02:11.362414477Z" level=error msg="Failed to destroy network for sandbox \"e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:11.363457 containerd[1466]: time="2025-02-13T20:02:11.363431104Z" level=error msg="encountered an error cleaning up failed sandbox \"e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:11.363561 containerd[1466]: time="2025-02-13T20:02:11.363540940Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pnbb8,Uid:3184b51f-d611-4fe4-a9e8-2390a1c90a5a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:11.363891 kubelet[2639]: E0213 20:02:11.363836 2639 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:11.363951 kubelet[2639]: E0213 20:02:11.363925 2639 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pnbb8" Feb 13 20:02:11.363951 kubelet[2639]: E0213 20:02:11.363946 2639 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pnbb8" Feb 13 20:02:11.364016 kubelet[2639]: E0213 20:02:11.363991 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-pnbb8_kube-system(3184b51f-d611-4fe4-a9e8-2390a1c90a5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-pnbb8_kube-system(3184b51f-d611-4fe4-a9e8-2390a1c90a5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7db6d8ff4d-pnbb8" podUID="3184b51f-d611-4fe4-a9e8-2390a1c90a5a" Feb 13 20:02:11.367658 containerd[1466]: time="2025-02-13T20:02:11.367608652Z" level=error msg="Failed to destroy network for sandbox \"70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:11.368110 containerd[1466]: time="2025-02-13T20:02:11.368063976Z" level=error msg="encountered an error cleaning up failed sandbox \"70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:11.368197 containerd[1466]: time="2025-02-13T20:02:11.368168121Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7866456b95-h7zp8,Uid:49be02d3-1172-42eb-afb4-696e59a6f97d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:11.368371 kubelet[2639]: E0213 20:02:11.368334 2639 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:11.368371 kubelet[2639]: E0213 20:02:11.368373 2639 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7866456b95-h7zp8" Feb 13 20:02:11.368539 kubelet[2639]: E0213 20:02:11.368390 2639 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7866456b95-h7zp8" Feb 13 20:02:11.368539 kubelet[2639]: E0213 20:02:11.368425 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7866456b95-h7zp8_calico-system(49be02d3-1172-42eb-afb4-696e59a6f97d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7866456b95-h7zp8_calico-system(49be02d3-1172-42eb-afb4-696e59a6f97d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7866456b95-h7zp8" podUID="49be02d3-1172-42eb-afb4-696e59a6f97d" Feb 13 20:02:11.376252 containerd[1466]: time="2025-02-13T20:02:11.376159574Z" level=error msg="Failed to destroy network for sandbox \"1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:11.376555 containerd[1466]: time="2025-02-13T20:02:11.376523096Z" level=error msg="encountered an error cleaning up failed sandbox \"1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:11.376584 containerd[1466]: time="2025-02-13T20:02:11.376570625Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x9rtf,Uid:9053966c-63f0-49bc-a5f7-8526a79f7772,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:11.376796 kubelet[2639]: E0213 20:02:11.376756 2639 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:11.376843 kubelet[2639]: E0213 20:02:11.376814 2639 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x9rtf" Feb 13 20:02:11.376843 kubelet[2639]: E0213 20:02:11.376837 2639 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x9rtf" Feb 13 20:02:11.376909 kubelet[2639]: E0213 20:02:11.376882 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-x9rtf_kube-system(9053966c-63f0-49bc-a5f7-8526a79f7772)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-x9rtf_kube-system(9053966c-63f0-49bc-a5f7-8526a79f7772)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-x9rtf" podUID="9053966c-63f0-49bc-a5f7-8526a79f7772" Feb 13 20:02:11.434269 sshd[3468]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:11.438323 systemd[1]: sshd@11-10.0.0.119:22-10.0.0.1:36150.service: Deactivated successfully. Feb 13 20:02:11.440530 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 20:02:11.441300 systemd-logind[1454]: Session 12 logged out. Waiting for processes to exit. Feb 13 20:02:11.442318 systemd-logind[1454]: Removed session 12. Feb 13 20:02:11.858629 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb-shm.mount: Deactivated successfully. Feb 13 20:02:12.028375 systemd[1]: Created slice kubepods-besteffort-pod2516eb1f_4a76_4950_92c4_3225425d63a6.slice - libcontainer container kubepods-besteffort-pod2516eb1f_4a76_4950_92c4_3225425d63a6.slice. Feb 13 20:02:12.030341 containerd[1466]: time="2025-02-13T20:02:12.030307877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gfdrh,Uid:2516eb1f-4a76-4950-92c4-3225425d63a6,Namespace:calico-system,Attempt:0,}" Feb 13 20:02:12.086780 kubelet[2639]: E0213 20:02:12.086741 2639 projected.go:294] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 13 20:02:12.086780 kubelet[2639]: E0213 20:02:12.086751 2639 projected.go:294] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 13 20:02:12.086895 kubelet[2639]: E0213 20:02:12.086822 2639 projected.go:200] Error preparing data for projected volume kube-api-access-sjqvm for pod calico-apiserver/calico-apiserver-7f5ffc687c-lbc64: failed to sync configmap cache: timed out waiting for the condition Feb 13 20:02:12.086895 kubelet[2639]: E0213 20:02:12.086775 2639 projected.go:200] Error preparing data for projected volume kube-api-access-mjp58 for pod calico-apiserver/calico-apiserver-7f5ffc687c-5bd4w: failed to sync configmap cache: timed out waiting for the condition Feb 13 20:02:12.086937 kubelet[2639]: E0213 20:02:12.086900 2639 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1fd0a0c8-d966-43b0-a129-ee14e139b86b-kube-api-access-sjqvm podName:1fd0a0c8-d966-43b0-a129-ee14e139b86b nodeName:}" failed. No retries permitted until 2025-02-13 20:02:12.586877725 +0000 UTC m=+39.645614636 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-sjqvm" (UniqueName: "kubernetes.io/projected/1fd0a0c8-d966-43b0-a129-ee14e139b86b-kube-api-access-sjqvm") pod "calico-apiserver-7f5ffc687c-lbc64" (UID: "1fd0a0c8-d966-43b0-a129-ee14e139b86b") : failed to sync configmap cache: timed out waiting for the condition Feb 13 20:02:12.086999 kubelet[2639]: E0213 20:02:12.086948 2639 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ae5d8c7e-8ef1-493b-98f0-c5400cc3d726-kube-api-access-mjp58 podName:ae5d8c7e-8ef1-493b-98f0-c5400cc3d726 nodeName:}" failed. No retries permitted until 2025-02-13 20:02:12.5869286 +0000 UTC m=+39.645665611 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mjp58" (UniqueName: "kubernetes.io/projected/ae5d8c7e-8ef1-493b-98f0-c5400cc3d726-kube-api-access-mjp58") pod "calico-apiserver-7f5ffc687c-5bd4w" (UID: "ae5d8c7e-8ef1-493b-98f0-c5400cc3d726") : failed to sync configmap cache: timed out waiting for the condition Feb 13 20:02:12.108512 kubelet[2639]: I0213 20:02:12.108490 2639 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" Feb 13 20:02:12.109139 containerd[1466]: time="2025-02-13T20:02:12.108987844Z" level=info msg="StopPodSandbox for \"e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf\"" Feb 13 20:02:12.109263 containerd[1466]: time="2025-02-13T20:02:12.109167230Z" level=info msg="Ensure that sandbox e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf in task-service has been cleanup successfully" Feb 13 20:02:12.110310 kubelet[2639]: E0213 20:02:12.110285 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:12.111241 containerd[1466]: time="2025-02-13T20:02:12.110892728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 20:02:12.111292 kubelet[2639]: I0213 20:02:12.111034 2639 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" Feb 13 20:02:12.111660 containerd[1466]: time="2025-02-13T20:02:12.111634880Z" level=info msg="StopPodSandbox for \"1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893\"" Feb 13 20:02:12.111949 containerd[1466]: time="2025-02-13T20:02:12.111754765Z" level=info msg="Ensure that sandbox 1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893 in task-service has been cleanup successfully" Feb 13 20:02:12.112316 kubelet[2639]: I0213 20:02:12.112298 2639 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" Feb 13 20:02:12.112839 containerd[1466]: time="2025-02-13T20:02:12.112753919Z" level=info msg="StopPodSandbox for \"70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb\"" Feb 13 20:02:12.112973 containerd[1466]: time="2025-02-13T20:02:12.112899552Z" level=info msg="Ensure that sandbox 70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb in task-service has been cleanup successfully" Feb 13 20:02:12.144404 containerd[1466]: time="2025-02-13T20:02:12.144329747Z" level=error msg="StopPodSandbox for \"1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893\" failed" error="failed to destroy network for sandbox \"1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:12.146124 kubelet[2639]: E0213 20:02:12.144541 2639 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" Feb 13 20:02:12.146124 kubelet[2639]: E0213 20:02:12.144612 2639 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893"} Feb 13 20:02:12.146124 kubelet[2639]: E0213 20:02:12.144666 2639 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9053966c-63f0-49bc-a5f7-8526a79f7772\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:02:12.146124 kubelet[2639]: E0213 20:02:12.144688 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9053966c-63f0-49bc-a5f7-8526a79f7772\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-x9rtf" podUID="9053966c-63f0-49bc-a5f7-8526a79f7772" Feb 13 20:02:12.149498 containerd[1466]: time="2025-02-13T20:02:12.149452076Z" level=error msg="StopPodSandbox for \"e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf\" failed" error="failed to destroy network for sandbox \"e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:12.149663 kubelet[2639]: E0213 20:02:12.149628 2639 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" Feb 13 20:02:12.149718 kubelet[2639]: E0213 20:02:12.149667 2639 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf"} Feb 13 20:02:12.149718 kubelet[2639]: E0213 20:02:12.149698 2639 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3184b51f-d611-4fe4-a9e8-2390a1c90a5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:02:12.149812 kubelet[2639]: E0213 20:02:12.149720 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3184b51f-d611-4fe4-a9e8-2390a1c90a5a\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-pnbb8" podUID="3184b51f-d611-4fe4-a9e8-2390a1c90a5a" Feb 13 20:02:12.156491 containerd[1466]: time="2025-02-13T20:02:12.156438474Z" level=error msg="StopPodSandbox for \"70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb\" failed" error="failed to destroy network for sandbox \"70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:12.156697 kubelet[2639]: E0213 20:02:12.156660 2639 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" Feb 13 20:02:12.156761 kubelet[2639]: E0213 20:02:12.156704 2639 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb"} Feb 13 20:02:12.156761 kubelet[2639]: E0213 20:02:12.156747 2639 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"49be02d3-1172-42eb-afb4-696e59a6f97d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:02:12.156850 kubelet[2639]: E0213 20:02:12.156769 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"49be02d3-1172-42eb-afb4-696e59a6f97d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7866456b95-h7zp8" podUID="49be02d3-1172-42eb-afb4-696e59a6f97d" Feb 13 20:02:12.189492 containerd[1466]: time="2025-02-13T20:02:12.189418906Z" level=error msg="Failed to destroy network for sandbox \"0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:12.189872 containerd[1466]: time="2025-02-13T20:02:12.189835838Z" level=error msg="encountered an error cleaning up failed sandbox \"0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:12.189910 containerd[1466]: time="2025-02-13T20:02:12.189890852Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gfdrh,Uid:2516eb1f-4a76-4950-92c4-3225425d63a6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:12.190189 kubelet[2639]: E0213 20:02:12.190137 2639 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:12.190189 kubelet[2639]: E0213 20:02:12.190192 2639 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gfdrh" Feb 13 20:02:12.190344 kubelet[2639]: E0213 20:02:12.190214 2639 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gfdrh" Feb 13 20:02:12.190344 kubelet[2639]: E0213 20:02:12.190257 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gfdrh_calico-system(2516eb1f-4a76-4950-92c4-3225425d63a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gfdrh_calico-system(2516eb1f-4a76-4950-92c4-3225425d63a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gfdrh" podUID="2516eb1f-4a76-4950-92c4-3225425d63a6" Feb 13 20:02:12.191689 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa-shm.mount: Deactivated successfully. 
Feb 13 20:02:12.713474 containerd[1466]: time="2025-02-13T20:02:12.713426056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5ffc687c-5bd4w,Uid:ae5d8c7e-8ef1-493b-98f0-c5400cc3d726,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:02:12.717047 containerd[1466]: time="2025-02-13T20:02:12.716989572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5ffc687c-lbc64,Uid:1fd0a0c8-d966-43b0-a129-ee14e139b86b,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:02:12.781543 containerd[1466]: time="2025-02-13T20:02:12.781478496Z" level=error msg="Failed to destroy network for sandbox \"59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:12.781922 containerd[1466]: time="2025-02-13T20:02:12.781886551Z" level=error msg="encountered an error cleaning up failed sandbox \"59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:12.781972 containerd[1466]: time="2025-02-13T20:02:12.781945631Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5ffc687c-5bd4w,Uid:ae5d8c7e-8ef1-493b-98f0-c5400cc3d726,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:12.782271 kubelet[2639]: E0213 20:02:12.782217 2639 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:12.782326 kubelet[2639]: E0213 20:02:12.782286 2639 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f5ffc687c-5bd4w" Feb 13 20:02:12.782326 kubelet[2639]: E0213 20:02:12.782306 2639 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f5ffc687c-5bd4w" Feb 13 20:02:12.782384 kubelet[2639]: E0213 20:02:12.782348 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-7f5ffc687c-5bd4w_calico-apiserver(ae5d8c7e-8ef1-493b-98f0-c5400cc3d726)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f5ffc687c-5bd4w_calico-apiserver(ae5d8c7e-8ef1-493b-98f0-c5400cc3d726)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f5ffc687c-5bd4w" podUID="ae5d8c7e-8ef1-493b-98f0-c5400cc3d726" Feb 13 20:02:12.786804 containerd[1466]: time="2025-02-13T20:02:12.786763872Z" level=error msg="Failed to destroy network for sandbox \"cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:12.787195 containerd[1466]: time="2025-02-13T20:02:12.787165725Z" level=error msg="encountered an error cleaning up failed sandbox \"cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:12.787256 containerd[1466]: time="2025-02-13T20:02:12.787217322Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5ffc687c-lbc64,Uid:1fd0a0c8-d966-43b0-a129-ee14e139b86b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:12.787436 kubelet[2639]: E0213 20:02:12.787397 2639 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:12.787509 kubelet[2639]: E0213 20:02:12.787445 2639 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f5ffc687c-lbc64" Feb 13 20:02:12.787509 kubelet[2639]: E0213 20:02:12.787462 2639 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f5ffc687c-lbc64" Feb 13 
20:02:12.787509 kubelet[2639]: E0213 20:02:12.787494 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f5ffc687c-lbc64_calico-apiserver(1fd0a0c8-d966-43b0-a129-ee14e139b86b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f5ffc687c-lbc64_calico-apiserver(1fd0a0c8-d966-43b0-a129-ee14e139b86b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f5ffc687c-lbc64" podUID="1fd0a0c8-d966-43b0-a129-ee14e139b86b" Feb 13 20:02:12.856514 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd-shm.mount: Deactivated successfully. Feb 13 20:02:13.114009 kubelet[2639]: I0213 20:02:13.113968 2639 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" Feb 13 20:02:13.114644 containerd[1466]: time="2025-02-13T20:02:13.114611550Z" level=info msg="StopPodSandbox for \"59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd\"" Feb 13 20:02:13.114939 containerd[1466]: time="2025-02-13T20:02:13.114786458Z" level=info msg="Ensure that sandbox 59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd in task-service has been cleanup successfully" Feb 13 20:02:13.115054 kubelet[2639]: I0213 20:02:13.115023 2639 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" Feb 13 20:02:13.116267 containerd[1466]: time="2025-02-13T20:02:13.115458239Z" level=info msg="StopPodSandbox for \"cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869\"" Feb 13 20:02:13.116267 containerd[1466]: time="2025-02-13T20:02:13.115607579Z" level=info msg="Ensure that sandbox cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869 in task-service has been cleanup successfully" Feb 13 20:02:13.116345 kubelet[2639]: I0213 20:02:13.116138 2639 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" Feb 13 20:02:13.116533 containerd[1466]: time="2025-02-13T20:02:13.116505513Z" level=info msg="StopPodSandbox for \"0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa\"" Feb 13 20:02:13.116666 containerd[1466]: time="2025-02-13T20:02:13.116646067Z" level=info msg="Ensure that sandbox 0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa in task-service has been cleanup successfully" Feb 13 20:02:13.148489 containerd[1466]: time="2025-02-13T20:02:13.148413832Z" level=error msg="StopPodSandbox for \"0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa\" failed" error="failed to destroy network for sandbox \"0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:13.148624 containerd[1466]: time="2025-02-13T20:02:13.148501847Z" level=error msg="StopPodSandbox for 
\"59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd\" failed" error="failed to destroy network for sandbox \"59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:13.148690 kubelet[2639]: E0213 20:02:13.148623 2639 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" Feb 13 20:02:13.148690 kubelet[2639]: E0213 20:02:13.148666 2639 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa"} Feb 13 20:02:13.148750 kubelet[2639]: E0213 20:02:13.148704 2639 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2516eb1f-4a76-4950-92c4-3225425d63a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:02:13.148816 kubelet[2639]: E0213 20:02:13.148728 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2516eb1f-4a76-4950-92c4-3225425d63a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gfdrh" podUID="2516eb1f-4a76-4950-92c4-3225425d63a6" Feb 13 20:02:13.149150 kubelet[2639]: E0213 20:02:13.148845 2639 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" Feb 13 20:02:13.149150 kubelet[2639]: E0213 20:02:13.148869 2639 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd"} Feb 13 20:02:13.149150 kubelet[2639]: E0213 20:02:13.148890 2639 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae5d8c7e-8ef1-493b-98f0-c5400cc3d726\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:02:13.149150 kubelet[2639]: E0213 20:02:13.148913 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae5d8c7e-8ef1-493b-98f0-c5400cc3d726\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f5ffc687c-5bd4w" podUID="ae5d8c7e-8ef1-493b-98f0-c5400cc3d726" Feb 13 20:02:13.152350 containerd[1466]: time="2025-02-13T20:02:13.152317295Z" level=error msg="StopPodSandbox for \"cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869\" failed" error="failed to destroy network for sandbox \"cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:02:13.152485 kubelet[2639]: E0213 20:02:13.152449 2639 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" Feb 13 20:02:13.152516 kubelet[2639]: E0213 20:02:13.152481 2639 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869"} Feb 13 20:02:13.152516 kubelet[2639]: E0213 20:02:13.152509 2639 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1fd0a0c8-d966-43b0-a129-ee14e139b86b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:02:13.152572 kubelet[2639]: E0213 20:02:13.152527 2639 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1fd0a0c8-d966-43b0-a129-ee14e139b86b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f5ffc687c-lbc64" podUID="1fd0a0c8-d966-43b0-a129-ee14e139b86b" Feb 13 20:02:14.071408 kubelet[2639]: I0213 20:02:14.071328 2639 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:02:14.071994 kubelet[2639]: E0213 20:02:14.071969 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:14.122314 kubelet[2639]: E0213 20:02:14.122275 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:16.449199 systemd[1]: Started sshd@12-10.0.0.119:22-10.0.0.1:36158.service - OpenSSH per-connection server daemon (10.0.0.1:36158). Feb 13 20:02:17.479983 sshd[3859]: Accepted publickey for core from 10.0.0.1 port 36158 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:02:17.481160 sshd[3859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:17.485795 systemd-logind[1454]: New session 13 of user core. Feb 13 20:02:17.493285 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 20:02:17.779303 sshd[3859]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:17.789892 systemd[1]: sshd@12-10.0.0.119:22-10.0.0.1:36158.service: Deactivated successfully. Feb 13 20:02:17.791751 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 20:02:17.793535 systemd-logind[1454]: Session 13 logged out. Waiting for processes to exit. Feb 13 20:02:17.798437 systemd[1]: Started sshd@13-10.0.0.119:22-10.0.0.1:36174.service - OpenSSH per-connection server daemon (10.0.0.1:36174). Feb 13 20:02:17.799317 systemd-logind[1454]: Removed session 13. Feb 13 20:02:17.833777 sshd[3879]: Accepted publickey for core from 10.0.0.1 port 36174 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:02:17.835354 sshd[3879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:17.840287 systemd-logind[1454]: New session 14 of user core. Feb 13 20:02:17.848274 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 20:02:17.996889 sshd[3879]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:18.011284 systemd[1]: sshd@13-10.0.0.119:22-10.0.0.1:36174.service: Deactivated successfully. Feb 13 20:02:18.013870 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 20:02:18.015489 systemd-logind[1454]: Session 14 logged out. Waiting for processes to exit. Feb 13 20:02:18.032448 systemd[1]: Started sshd@14-10.0.0.119:22-10.0.0.1:36180.service - OpenSSH per-connection server daemon (10.0.0.1:36180). Feb 13 20:02:18.034136 systemd-logind[1454]: Removed session 14. Feb 13 20:02:18.067659 sshd[3891]: Accepted publickey for core from 10.0.0.1 port 36180 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:02:18.068962 sshd[3891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:18.078435 systemd-logind[1454]: New session 15 of user core. Feb 13 20:02:18.084329 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 20:02:18.216754 sshd[3891]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:18.220198 systemd-logind[1454]: Session 15 logged out. Waiting for processes to exit. Feb 13 20:02:18.220898 systemd[1]: sshd@14-10.0.0.119:22-10.0.0.1:36180.service: Deactivated successfully. Feb 13 20:02:18.224707 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 20:02:18.226817 systemd-logind[1454]: Removed session 15. Feb 13 20:02:19.756825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount184347847.mount: Deactivated successfully. 
Feb 13 20:02:21.331235 containerd[1466]: time="2025-02-13T20:02:21.331159350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:02:21.332146 containerd[1466]: time="2025-02-13T20:02:21.332088353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 20:02:21.333838 containerd[1466]: time="2025-02-13T20:02:21.333804133Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:02:21.336602 containerd[1466]: time="2025-02-13T20:02:21.336553448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:02:21.337234 containerd[1466]: time="2025-02-13T20:02:21.337186478Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.226256912s" Feb 13 20:02:21.337297 containerd[1466]: time="2025-02-13T20:02:21.337235553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 20:02:21.346016 containerd[1466]: time="2025-02-13T20:02:21.345881884Z" level=info msg="CreateContainer within sandbox \"04447f200bf5261fa831e8ba2aadbd40249d92c7558b87dc8c57d0b418989c0b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 20:02:21.371925 containerd[1466]: time="2025-02-13T20:02:21.371861187Z" level=info msg="CreateContainer within sandbox \"04447f200bf5261fa831e8ba2aadbd40249d92c7558b87dc8c57d0b418989c0b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8360b8a09b5f663c09d230bc555cfda8994905dc9db7e40d37f80f415e4f753f\"" Feb 13 20:02:21.372554 containerd[1466]: time="2025-02-13T20:02:21.372515960Z" level=info msg="StartContainer for \"8360b8a09b5f663c09d230bc555cfda8994905dc9db7e40d37f80f415e4f753f\"" Feb 13 20:02:21.428345 systemd[1]: Started cri-containerd-8360b8a09b5f663c09d230bc555cfda8994905dc9db7e40d37f80f415e4f753f.scope - libcontainer container 8360b8a09b5f663c09d230bc555cfda8994905dc9db7e40d37f80f415e4f753f. Feb 13 20:02:21.694247 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 20:02:21.763374 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Feb 13 20:02:21.766118 containerd[1466]: time="2025-02-13T20:02:21.766039478Z" level=info msg="StartContainer for \"8360b8a09b5f663c09d230bc555cfda8994905dc9db7e40d37f80f415e4f753f\" returns successfully" Feb 13 20:02:22.299765 kubelet[2639]: E0213 20:02:22.299731 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:22.475305 kubelet[2639]: I0213 20:02:22.475156 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-prrfp" podStartSLOduration=2.535832914 podStartE2EDuration="29.475139581s" podCreationTimestamp="2025-02-13 20:01:53 +0000 UTC" firstStartedPulling="2025-02-13 20:01:54.398659997 +0000 UTC m=+21.457396908" lastFinishedPulling="2025-02-13 20:02:21.337966654 +0000 UTC m=+48.396703575" observedRunningTime="2025-02-13 20:02:22.47500394 +0000 UTC m=+49.533740841" watchObservedRunningTime="2025-02-13 20:02:22.475139581 +0000 UTC m=+49.533876493" Feb 13 20:02:23.023915 containerd[1466]: time="2025-02-13T20:02:23.023840863Z" level=info msg="StopPodSandbox for \"1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893\"" Feb 13 20:02:23.227527 systemd[1]: Started sshd@15-10.0.0.119:22-10.0.0.1:34248.service - OpenSSH per-connection server daemon (10.0.0.1:34248). Feb 13 20:02:23.288239 sshd[4022]: Accepted publickey for core from 10.0.0.1 port 34248 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:02:23.289891 sshd[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:23.293786 systemd-logind[1454]: New session 16 of user core. Feb 13 20:02:23.301245 kubelet[2639]: E0213 20:02:23.301208 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:23.303356 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 20:02:23.486690 sshd[4022]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:23.490716 systemd-logind[1454]: Session 16 logged out. Waiting for processes to exit. Feb 13 20:02:23.491587 systemd[1]: sshd@15-10.0.0.119:22-10.0.0.1:34248.service: Deactivated successfully. Feb 13 20:02:23.495247 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 20:02:23.497903 systemd-logind[1454]: Removed session 16. Feb 13 20:02:23.677448 containerd[1466]: 2025-02-13 20:02:23.453 [INFO][4014] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" Feb 13 20:02:23.677448 containerd[1466]: 2025-02-13 20:02:23.453 [INFO][4014] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" iface="eth0" netns="/var/run/netns/cni-ad98385c-d86d-9ec9-7555-509d852a4fb3" Feb 13 20:02:23.677448 containerd[1466]: 2025-02-13 20:02:23.454 [INFO][4014] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" iface="eth0" netns="/var/run/netns/cni-ad98385c-d86d-9ec9-7555-509d852a4fb3" Feb 13 20:02:23.677448 containerd[1466]: 2025-02-13 20:02:23.455 [INFO][4014] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" iface="eth0" netns="/var/run/netns/cni-ad98385c-d86d-9ec9-7555-509d852a4fb3" Feb 13 20:02:23.677448 containerd[1466]: 2025-02-13 20:02:23.455 [INFO][4014] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" Feb 13 20:02:23.677448 containerd[1466]: 2025-02-13 20:02:23.455 [INFO][4014] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" Feb 13 20:02:23.677448 containerd[1466]: 2025-02-13 20:02:23.522 [INFO][4153] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" HandleID="k8s-pod-network.1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" Workload="localhost-k8s-coredns--7db6d8ff4d--x9rtf-eth0" Feb 13 20:02:23.677448 containerd[1466]: 2025-02-13 20:02:23.523 [INFO][4153] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:02:23.677448 containerd[1466]: 2025-02-13 20:02:23.523 [INFO][4153] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:02:23.677448 containerd[1466]: 2025-02-13 20:02:23.669 [WARNING][4153] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" HandleID="k8s-pod-network.1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" Workload="localhost-k8s-coredns--7db6d8ff4d--x9rtf-eth0" Feb 13 20:02:23.677448 containerd[1466]: 2025-02-13 20:02:23.669 [INFO][4153] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" HandleID="k8s-pod-network.1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" Workload="localhost-k8s-coredns--7db6d8ff4d--x9rtf-eth0" Feb 13 20:02:23.677448 containerd[1466]: 2025-02-13 20:02:23.671 [INFO][4153] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:02:23.677448 containerd[1466]: 2025-02-13 20:02:23.673 [INFO][4014] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" Feb 13 20:02:23.680529 systemd[1]: run-netns-cni\x2dad98385c\x2dd86d\x2d9ec9\x2d7555\x2d509d852a4fb3.mount: Deactivated successfully. 
Feb 13 20:02:23.681454 containerd[1466]: time="2025-02-13T20:02:23.681300125Z" level=info msg="TearDown network for sandbox \"1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893\" successfully" Feb 13 20:02:23.681454 containerd[1466]: time="2025-02-13T20:02:23.681329873Z" level=info msg="StopPodSandbox for \"1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893\" returns successfully" Feb 13 20:02:23.681773 kubelet[2639]: E0213 20:02:23.681687 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:23.682444 containerd[1466]: time="2025-02-13T20:02:23.682407328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x9rtf,Uid:9053966c-63f0-49bc-a5f7-8526a79f7772,Namespace:kube-system,Attempt:1,}" Feb 13 20:02:23.733126 kernel: bpftool[4194]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 20:02:24.000753 systemd-networkd[1376]: vxlan.calico: Link UP Feb 13 20:02:24.000766 systemd-networkd[1376]: vxlan.calico: Gained carrier Feb 13 20:02:24.023819 containerd[1466]: time="2025-02-13T20:02:24.023441005Z" level=info msg="StopPodSandbox for \"e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf\"" Feb 13 20:02:24.240693 containerd[1466]: 2025-02-13 20:02:24.203 [INFO][4245] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" Feb 13 20:02:24.240693 containerd[1466]: 2025-02-13 20:02:24.203 [INFO][4245] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" iface="eth0" netns="/var/run/netns/cni-0d9282aa-0816-f477-29bd-f5c023bdd90b" Feb 13 20:02:24.240693 containerd[1466]: 2025-02-13 20:02:24.204 [INFO][4245] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" iface="eth0" netns="/var/run/netns/cni-0d9282aa-0816-f477-29bd-f5c023bdd90b" Feb 13 20:02:24.240693 containerd[1466]: 2025-02-13 20:02:24.204 [INFO][4245] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" iface="eth0" netns="/var/run/netns/cni-0d9282aa-0816-f477-29bd-f5c023bdd90b" Feb 13 20:02:24.240693 containerd[1466]: 2025-02-13 20:02:24.204 [INFO][4245] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" Feb 13 20:02:24.240693 containerd[1466]: 2025-02-13 20:02:24.204 [INFO][4245] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" Feb 13 20:02:24.240693 containerd[1466]: 2025-02-13 20:02:24.228 [INFO][4259] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" HandleID="k8s-pod-network.e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" Workload="localhost-k8s-coredns--7db6d8ff4d--pnbb8-eth0" Feb 13 20:02:24.240693 containerd[1466]: 2025-02-13 20:02:24.228 [INFO][4259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:02:24.240693 containerd[1466]: 2025-02-13 20:02:24.228 [INFO][4259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:02:24.240693 containerd[1466]: 2025-02-13 20:02:24.232 [WARNING][4259] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" HandleID="k8s-pod-network.e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" Workload="localhost-k8s-coredns--7db6d8ff4d--pnbb8-eth0" Feb 13 20:02:24.240693 containerd[1466]: 2025-02-13 20:02:24.232 [INFO][4259] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" HandleID="k8s-pod-network.e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" Workload="localhost-k8s-coredns--7db6d8ff4d--pnbb8-eth0" Feb 13 20:02:24.240693 containerd[1466]: 2025-02-13 20:02:24.234 [INFO][4259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:02:24.240693 containerd[1466]: 2025-02-13 20:02:24.237 [INFO][4245] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" Feb 13 20:02:24.241609 containerd[1466]: time="2025-02-13T20:02:24.240840143Z" level=info msg="TearDown network for sandbox \"e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf\" successfully" Feb 13 20:02:24.241609 containerd[1466]: time="2025-02-13T20:02:24.240865082Z" level=info msg="StopPodSandbox for \"e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf\" returns successfully" Feb 13 20:02:24.241659 kubelet[2639]: E0213 20:02:24.241191 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:24.243616 containerd[1466]: time="2025-02-13T20:02:24.243510514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pnbb8,Uid:3184b51f-d611-4fe4-a9e8-2390a1c90a5a,Namespace:kube-system,Attempt:1,}" Feb 13 20:02:24.243859 systemd[1]: run-netns-cni\x2d0d9282aa\x2d0816\x2df477\x2d29bd\x2df5c023bdd90b.mount: Deactivated successfully. 
Feb 13 20:02:25.048443 systemd-networkd[1376]: cali6f4ccf31462: Link UP Feb 13 20:02:25.049228 systemd-networkd[1376]: cali6f4ccf31462: Gained carrier Feb 13 20:02:25.093374 containerd[1466]: 2025-02-13 20:02:24.781 [INFO][4301] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--x9rtf-eth0 coredns-7db6d8ff4d- kube-system 9053966c-63f0-49bc-a5f7-8526a79f7772 944 0 2025-02-13 20:01:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-x9rtf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6f4ccf31462 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x9rtf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--x9rtf-" Feb 13 20:02:25.093374 containerd[1466]: 2025-02-13 20:02:24.781 [INFO][4301] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x9rtf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--x9rtf-eth0" Feb 13 20:02:25.093374 containerd[1466]: 2025-02-13 20:02:24.877 [INFO][4314] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e" HandleID="k8s-pod-network.5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e" Workload="localhost-k8s-coredns--7db6d8ff4d--x9rtf-eth0" Feb 13 20:02:25.093374 containerd[1466]: 2025-02-13 20:02:24.886 [INFO][4314] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e" HandleID="k8s-pod-network.5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e" Workload="localhost-k8s-coredns--7db6d8ff4d--x9rtf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000503f30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-x9rtf", "timestamp":"2025-02-13 20:02:24.877919285 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:02:25.093374 containerd[1466]: 2025-02-13 20:02:24.886 [INFO][4314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:02:25.093374 containerd[1466]: 2025-02-13 20:02:24.886 [INFO][4314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:02:25.093374 containerd[1466]: 2025-02-13 20:02:24.886 [INFO][4314] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 20:02:25.093374 containerd[1466]: 2025-02-13 20:02:24.888 [INFO][4314] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e" host="localhost" Feb 13 20:02:25.093374 containerd[1466]: 2025-02-13 20:02:24.892 [INFO][4314] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 20:02:25.093374 containerd[1466]: 2025-02-13 20:02:24.895 [INFO][4314] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 20:02:25.093374 containerd[1466]: 2025-02-13 20:02:24.896 [INFO][4314] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 20:02:25.093374 containerd[1466]: 2025-02-13 20:02:24.898 [INFO][4314] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 20:02:25.093374 containerd[1466]: 2025-02-13 20:02:24.898 [INFO][4314] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e" host="localhost" Feb 13 20:02:25.093374 containerd[1466]: 2025-02-13 20:02:24.899 [INFO][4314] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e Feb 13 20:02:25.093374 containerd[1466]: 2025-02-13 20:02:24.918 [INFO][4314] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e" host="localhost" Feb 13 20:02:25.093374 containerd[1466]: 2025-02-13 20:02:25.032 [INFO][4314] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e" host="localhost" Feb 13 20:02:25.093374 containerd[1466]: 2025-02-13 20:02:25.033 [INFO][4314] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e" host="localhost" Feb 13 20:02:25.093374 containerd[1466]: 2025-02-13 20:02:25.033 [INFO][4314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:02:25.093374 containerd[1466]: 2025-02-13 20:02:25.033 [INFO][4314] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e" HandleID="k8s-pod-network.5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e" Workload="localhost-k8s-coredns--7db6d8ff4d--x9rtf-eth0" Feb 13 20:02:25.094132 containerd[1466]: 2025-02-13 20:02:25.035 [INFO][4301] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x9rtf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--x9rtf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--x9rtf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9053966c-63f0-49bc-a5f7-8526a79f7772", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-x9rtf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f4ccf31462", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:25.094132 containerd[1466]: 2025-02-13 20:02:25.035 [INFO][4301] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x9rtf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--x9rtf-eth0" Feb 13 20:02:25.094132 containerd[1466]: 2025-02-13 20:02:25.035 [INFO][4301] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f4ccf31462 ContainerID="5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x9rtf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--x9rtf-eth0" Feb 13 20:02:25.094132 containerd[1466]: 2025-02-13 20:02:25.049 [INFO][4301] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x9rtf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--x9rtf-eth0" Feb 13 20:02:25.094132 containerd[1466]: 2025-02-13 20:02:25.049 
[INFO][4301] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x9rtf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--x9rtf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--x9rtf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9053966c-63f0-49bc-a5f7-8526a79f7772", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e", Pod:"coredns-7db6d8ff4d-x9rtf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f4ccf31462", MAC:"b6:fe:0a:d3:23:ab", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:25.094132 containerd[1466]: 2025-02-13 20:02:25.089 [INFO][4301] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x9rtf" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--x9rtf-eth0" Feb 13 20:02:25.155228 containerd[1466]: time="2025-02-13T20:02:25.155136207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:02:25.155938 containerd[1466]: time="2025-02-13T20:02:25.155721132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:02:25.155938 containerd[1466]: time="2025-02-13T20:02:25.155746290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:02:25.156219 containerd[1466]: time="2025-02-13T20:02:25.156172110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:02:25.177769 systemd[1]: run-containerd-runc-k8s.io-5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e-runc.ElzdMF.mount: Deactivated successfully. 
Feb 13 20:02:25.188216 systemd[1]: Started cri-containerd-5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e.scope - libcontainer container 5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e. Feb 13 20:02:25.199740 systemd-resolved[1361]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 20:02:25.228391 containerd[1466]: time="2025-02-13T20:02:25.228343765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x9rtf,Uid:9053966c-63f0-49bc-a5f7-8526a79f7772,Namespace:kube-system,Attempt:1,} returns sandbox id \"5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e\"" Feb 13 20:02:25.229206 kubelet[2639]: E0213 20:02:25.229182 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:25.231692 containerd[1466]: time="2025-02-13T20:02:25.231664844Z" level=info msg="CreateContainer within sandbox \"5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:02:25.281825 containerd[1466]: time="2025-02-13T20:02:25.281782935Z" level=info msg="CreateContainer within sandbox \"5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"22e53bf7ca6542145c0d2e334430424971542fd1804b20069fbb18f8680e65a7\"" Feb 13 20:02:25.282432 containerd[1466]: time="2025-02-13T20:02:25.282401585Z" level=info msg="StartContainer for \"22e53bf7ca6542145c0d2e334430424971542fd1804b20069fbb18f8680e65a7\"" Feb 13 20:02:25.310222 systemd[1]: Started cri-containerd-22e53bf7ca6542145c0d2e334430424971542fd1804b20069fbb18f8680e65a7.scope - libcontainer container 22e53bf7ca6542145c0d2e334430424971542fd1804b20069fbb18f8680e65a7. 
Feb 13 20:02:25.333413 systemd-networkd[1376]: cali3e4557ba720: Link UP Feb 13 20:02:25.334153 systemd-networkd[1376]: cali3e4557ba720: Gained carrier Feb 13 20:02:25.424282 containerd[1466]: time="2025-02-13T20:02:25.424233003Z" level=info msg="StartContainer for \"22e53bf7ca6542145c0d2e334430424971542fd1804b20069fbb18f8680e65a7\" returns successfully" Feb 13 20:02:25.441899 containerd[1466]: 2025-02-13 20:02:25.210 [INFO][4363] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--pnbb8-eth0 coredns-7db6d8ff4d- kube-system 3184b51f-d611-4fe4-a9e8-2390a1c90a5a 951 0 2025-02-13 20:01:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-pnbb8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3e4557ba720 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pnbb8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pnbb8-" Feb 13 20:02:25.441899 containerd[1466]: 2025-02-13 20:02:25.210 [INFO][4363] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pnbb8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pnbb8-eth0" Feb 13 20:02:25.441899 containerd[1466]: 2025-02-13 20:02:25.237 [INFO][4384] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca" HandleID="k8s-pod-network.b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca" Workload="localhost-k8s-coredns--7db6d8ff4d--pnbb8-eth0" Feb 13 20:02:25.441899 containerd[1466]: 2025-02-13 20:02:25.263 [INFO][4384] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca" HandleID="k8s-pod-network.b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca" Workload="localhost-k8s-coredns--7db6d8ff4d--pnbb8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000308c60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-pnbb8", "timestamp":"2025-02-13 20:02:25.237232804 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:02:25.441899 containerd[1466]: 2025-02-13 20:02:25.263 [INFO][4384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:02:25.441899 containerd[1466]: 2025-02-13 20:02:25.264 [INFO][4384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:02:25.441899 containerd[1466]: 2025-02-13 20:02:25.264 [INFO][4384] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 20:02:25.441899 containerd[1466]: 2025-02-13 20:02:25.265 [INFO][4384] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca" host="localhost" Feb 13 20:02:25.441899 containerd[1466]: 2025-02-13 20:02:25.269 [INFO][4384] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 20:02:25.441899 containerd[1466]: 2025-02-13 20:02:25.272 [INFO][4384] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 20:02:25.441899 containerd[1466]: 2025-02-13 20:02:25.274 [INFO][4384] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 20:02:25.441899 containerd[1466]: 2025-02-13 20:02:25.276 [INFO][4384] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 20:02:25.441899 containerd[1466]: 2025-02-13 20:02:25.276 [INFO][4384] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca" host="localhost" Feb 13 20:02:25.441899 containerd[1466]: 2025-02-13 20:02:25.277 [INFO][4384] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca Feb 13 20:02:25.441899 containerd[1466]: 2025-02-13 20:02:25.281 [INFO][4384] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca" host="localhost" Feb 13 20:02:25.441899 containerd[1466]: 2025-02-13 20:02:25.326 [INFO][4384] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca" host="localhost" Feb 13 20:02:25.441899 containerd[1466]: 2025-02-13 20:02:25.326 [INFO][4384] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca" host="localhost" Feb 13 20:02:25.441899 containerd[1466]: 2025-02-13 20:02:25.326 [INFO][4384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:02:25.441899 containerd[1466]: 2025-02-13 20:02:25.326 [INFO][4384] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca" HandleID="k8s-pod-network.b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca" Workload="localhost-k8s-coredns--7db6d8ff4d--pnbb8-eth0" Feb 13 20:02:25.444371 containerd[1466]: 2025-02-13 20:02:25.330 [INFO][4363] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pnbb8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pnbb8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--pnbb8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3184b51f-d611-4fe4-a9e8-2390a1c90a5a", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-pnbb8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e4557ba720", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:25.444371 containerd[1466]: 2025-02-13 20:02:25.330 [INFO][4363] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pnbb8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pnbb8-eth0" Feb 13 20:02:25.444371 containerd[1466]: 2025-02-13 20:02:25.331 [INFO][4363] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e4557ba720 ContainerID="b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pnbb8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pnbb8-eth0" Feb 13 20:02:25.444371 containerd[1466]: 2025-02-13 20:02:25.335 [INFO][4363] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pnbb8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pnbb8-eth0" Feb 13 20:02:25.444371 containerd[1466]: 2025-02-13 20:02:25.335 
[INFO][4363] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pnbb8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pnbb8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--pnbb8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3184b51f-d611-4fe4-a9e8-2390a1c90a5a", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca", Pod:"coredns-7db6d8ff4d-pnbb8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e4557ba720", MAC:"1a:6b:c9:73:70:e3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:25.444371 containerd[1466]: 2025-02-13 20:02:25.438 [INFO][4363] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pnbb8" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pnbb8-eth0" Feb 13 20:02:25.510382 containerd[1466]: time="2025-02-13T20:02:25.510261635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:02:25.510382 containerd[1466]: time="2025-02-13T20:02:25.510340257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:02:25.510382 containerd[1466]: time="2025-02-13T20:02:25.510386004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:02:25.510599 containerd[1466]: time="2025-02-13T20:02:25.510566342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:02:25.530230 systemd[1]: Started cri-containerd-b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca.scope - libcontainer container b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca. 
Feb 13 20:02:25.543475 systemd-resolved[1361]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 20:02:25.570613 containerd[1466]: time="2025-02-13T20:02:25.569636975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pnbb8,Uid:3184b51f-d611-4fe4-a9e8-2390a1c90a5a,Namespace:kube-system,Attempt:1,} returns sandbox id \"b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca\"" Feb 13 20:02:25.571672 kubelet[2639]: E0213 20:02:25.571415 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:25.573676 containerd[1466]: time="2025-02-13T20:02:25.573626349Z" level=info msg="CreateContainer within sandbox \"b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:02:25.861289 systemd-networkd[1376]: vxlan.calico: Gained IPv6LL Feb 13 20:02:26.023559 containerd[1466]: time="2025-02-13T20:02:26.023491647Z" level=info msg="StopPodSandbox for \"70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb\"" Feb 13 20:02:26.177261 containerd[1466]: time="2025-02-13T20:02:26.176960763Z" level=info msg="CreateContainer within sandbox \"b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9a342f0d338cd2fa061bcbade136a5c2d760e82b157148c193cca676ada61b9b\"" Feb 13 20:02:26.178156 containerd[1466]: time="2025-02-13T20:02:26.177557571Z" level=info msg="StartContainer for \"9a342f0d338cd2fa061bcbade136a5c2d760e82b157148c193cca676ada61b9b\"" Feb 13 20:02:26.213668 systemd[1]: Started cri-containerd-9a342f0d338cd2fa061bcbade136a5c2d760e82b157148c193cca676ada61b9b.scope - libcontainer container 9a342f0d338cd2fa061bcbade136a5c2d760e82b157148c193cca676ada61b9b. Feb 13 20:02:26.220172 containerd[1466]: 2025-02-13 20:02:26.180 [INFO][4509] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" Feb 13 20:02:26.220172 containerd[1466]: 2025-02-13 20:02:26.180 [INFO][4509] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" iface="eth0" netns="/var/run/netns/cni-ae25c6a4-f930-6417-c488-eefdf3b048ca" Feb 13 20:02:26.220172 containerd[1466]: 2025-02-13 20:02:26.180 [INFO][4509] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" iface="eth0" netns="/var/run/netns/cni-ae25c6a4-f930-6417-c488-eefdf3b048ca" Feb 13 20:02:26.220172 containerd[1466]: 2025-02-13 20:02:26.180 [INFO][4509] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" iface="eth0" netns="/var/run/netns/cni-ae25c6a4-f930-6417-c488-eefdf3b048ca" Feb 13 20:02:26.220172 containerd[1466]: 2025-02-13 20:02:26.180 [INFO][4509] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" Feb 13 20:02:26.220172 containerd[1466]: 2025-02-13 20:02:26.180 [INFO][4509] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" Feb 13 20:02:26.220172 containerd[1466]: 2025-02-13 20:02:26.206 [INFO][4517] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" HandleID="k8s-pod-network.70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" Workload="localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0" Feb 13 20:02:26.220172 containerd[1466]: 2025-02-13 20:02:26.206 [INFO][4517] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:02:26.220172 containerd[1466]: 2025-02-13 20:02:26.206 [INFO][4517] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:02:26.220172 containerd[1466]: 2025-02-13 20:02:26.212 [WARNING][4517] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" HandleID="k8s-pod-network.70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" Workload="localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0" Feb 13 20:02:26.220172 containerd[1466]: 2025-02-13 20:02:26.212 [INFO][4517] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" HandleID="k8s-pod-network.70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" Workload="localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0" Feb 13 20:02:26.220172 containerd[1466]: 2025-02-13 20:02:26.213 [INFO][4517] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:02:26.220172 containerd[1466]: 2025-02-13 20:02:26.216 [INFO][4509] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" Feb 13 20:02:26.223216 systemd[1]: run-netns-cni\x2dae25c6a4\x2df930\x2d6417\x2dc488\x2deefdf3b048ca.mount: Deactivated successfully. 
Feb 13 20:02:26.223325 containerd[1466]: time="2025-02-13T20:02:26.223207698Z" level=info msg="TearDown network for sandbox \"70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb\" successfully" Feb 13 20:02:26.223325 containerd[1466]: time="2025-02-13T20:02:26.223236694Z" level=info msg="StopPodSandbox for \"70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb\" returns successfully" Feb 13 20:02:26.224077 containerd[1466]: time="2025-02-13T20:02:26.224042673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7866456b95-h7zp8,Uid:49be02d3-1172-42eb-afb4-696e59a6f97d,Namespace:calico-system,Attempt:1,}" Feb 13 20:02:26.282189 containerd[1466]: time="2025-02-13T20:02:26.281630168Z" level=info msg="StartContainer for \"9a342f0d338cd2fa061bcbade136a5c2d760e82b157148c193cca676ada61b9b\" returns successfully" Feb 13 20:02:26.317985 kubelet[2639]: E0213 20:02:26.317652 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:26.319509 kubelet[2639]: E0213 20:02:26.319140 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:26.331209 kubelet[2639]: I0213 20:02:26.330856 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-x9rtf" podStartSLOduration=41.330839076 podStartE2EDuration="41.330839076s" podCreationTimestamp="2025-02-13 20:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:02:26.330402959 +0000 UTC m=+53.389139890" watchObservedRunningTime="2025-02-13 20:02:26.330839076 +0000 UTC m=+53.389575977" Feb 13 20:02:26.355526 kubelet[2639]: I0213 20:02:26.354705 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-pnbb8" podStartSLOduration=41.354682222 podStartE2EDuration="41.354682222s" podCreationTimestamp="2025-02-13 20:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:02:26.34443862 +0000 UTC m=+53.403175541" watchObservedRunningTime="2025-02-13 20:02:26.354682222 +0000 UTC m=+53.413419133" Feb 13 20:02:26.422712 systemd-networkd[1376]: cali85b986f2579: Link UP Feb 13 20:02:26.423960 systemd-networkd[1376]: cali85b986f2579: Gained carrier Feb 13 20:02:26.435814 containerd[1466]: 2025-02-13 20:02:26.337 [INFO][4567] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0 calico-kube-controllers-7866456b95- calico-system 49be02d3-1172-42eb-afb4-696e59a6f97d 970 0 2025-02-13 20:01:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7866456b95 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7866456b95-h7zp8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali85b986f2579 [] []}} ContainerID="a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" Namespace="calico-system" 
Pod="calico-kube-controllers-7866456b95-h7zp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-" Feb 13 20:02:26.435814 containerd[1466]: 2025-02-13 20:02:26.337 [INFO][4567] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" Namespace="calico-system" Pod="calico-kube-controllers-7866456b95-h7zp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0" Feb 13 20:02:26.435814 containerd[1466]: 2025-02-13 20:02:26.386 [INFO][4582] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" HandleID="k8s-pod-network.a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" Workload="localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0" Feb 13 20:02:26.435814 containerd[1466]: 2025-02-13 20:02:26.393 [INFO][4582] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" HandleID="k8s-pod-network.a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" Workload="localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030ead0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7866456b95-h7zp8", "timestamp":"2025-02-13 20:02:26.386029834 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:02:26.435814 containerd[1466]: 2025-02-13 20:02:26.393 [INFO][4582] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:02:26.435814 containerd[1466]: 2025-02-13 20:02:26.393 [INFO][4582] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:02:26.435814 containerd[1466]: 2025-02-13 20:02:26.394 [INFO][4582] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 20:02:26.435814 containerd[1466]: 2025-02-13 20:02:26.395 [INFO][4582] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" host="localhost" Feb 13 20:02:26.435814 containerd[1466]: 2025-02-13 20:02:26.398 [INFO][4582] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 20:02:26.435814 containerd[1466]: 2025-02-13 20:02:26.402 [INFO][4582] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 20:02:26.435814 containerd[1466]: 2025-02-13 20:02:26.404 [INFO][4582] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 20:02:26.435814 containerd[1466]: 2025-02-13 20:02:26.405 [INFO][4582] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 20:02:26.435814 containerd[1466]: 2025-02-13 20:02:26.406 [INFO][4582] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" host="localhost" Feb 13 20:02:26.435814 containerd[1466]: 2025-02-13 20:02:26.407 [INFO][4582] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e Feb 13 20:02:26.435814 containerd[1466]: 2025-02-13 20:02:26.410 [INFO][4582] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" host="localhost" Feb 13 20:02:26.435814 containerd[1466]: 2025-02-13 20:02:26.417 [INFO][4582] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" host="localhost" Feb 13 20:02:26.435814 containerd[1466]: 2025-02-13 20:02:26.417 [INFO][4582] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" host="localhost" Feb 13 20:02:26.435814 containerd[1466]: 2025-02-13 20:02:26.417 [INFO][4582] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:02:26.435814 containerd[1466]: 2025-02-13 20:02:26.417 [INFO][4582] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" HandleID="k8s-pod-network.a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" Workload="localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0" Feb 13 20:02:26.436483 containerd[1466]: 2025-02-13 20:02:26.420 [INFO][4567] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" Namespace="calico-system" Pod="calico-kube-controllers-7866456b95-h7zp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0", GenerateName:"calico-kube-controllers-7866456b95-", Namespace:"calico-system", SelfLink:"", UID:"49be02d3-1172-42eb-afb4-696e59a6f97d", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7866456b95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7866456b95-h7zp8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali85b986f2579", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:26.436483 containerd[1466]: 2025-02-13 20:02:26.420 [INFO][4567] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" Namespace="calico-system" Pod="calico-kube-controllers-7866456b95-h7zp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0" Feb 13 20:02:26.436483 containerd[1466]: 2025-02-13 20:02:26.420 [INFO][4567] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85b986f2579 ContainerID="a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" Namespace="calico-system" Pod="calico-kube-controllers-7866456b95-h7zp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0" Feb 13 20:02:26.436483 containerd[1466]: 2025-02-13 20:02:26.422 [INFO][4567] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" Namespace="calico-system" Pod="calico-kube-controllers-7866456b95-h7zp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0" Feb 13 20:02:26.436483 containerd[1466]: 2025-02-13 20:02:26.423 [INFO][4567] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" Namespace="calico-system" Pod="calico-kube-controllers-7866456b95-h7zp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0", GenerateName:"calico-kube-controllers-7866456b95-", Namespace:"calico-system", SelfLink:"", UID:"49be02d3-1172-42eb-afb4-696e59a6f97d", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7866456b95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e", Pod:"calico-kube-controllers-7866456b95-h7zp8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali85b986f2579", MAC:"6e:50:5c:93:3c:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:26.436483 containerd[1466]: 2025-02-13 20:02:26.432 [INFO][4567] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" Namespace="calico-system" Pod="calico-kube-controllers-7866456b95-h7zp8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0" Feb 13 20:02:26.459007 containerd[1466]: time="2025-02-13T20:02:26.458203230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:02:26.459007 containerd[1466]: time="2025-02-13T20:02:26.458896071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:02:26.459007 containerd[1466]: time="2025-02-13T20:02:26.458909788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:02:26.459365 containerd[1466]: time="2025-02-13T20:02:26.459159087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:02:26.484300 systemd[1]: Started cri-containerd-a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e.scope - libcontainer container a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e. 
Feb 13 20:02:26.504536 systemd-resolved[1361]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 20:02:26.531638 containerd[1466]: time="2025-02-13T20:02:26.531587051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7866456b95-h7zp8,Uid:49be02d3-1172-42eb-afb4-696e59a6f97d,Namespace:calico-system,Attempt:1,} returns sandbox id \"a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e\"" Feb 13 20:02:26.533257 containerd[1466]: time="2025-02-13T20:02:26.533231963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 20:02:27.013288 systemd-networkd[1376]: cali6f4ccf31462: Gained IPv6LL Feb 13 20:02:27.023228 containerd[1466]: time="2025-02-13T20:02:27.023137633Z" level=info msg="StopPodSandbox for \"59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd\"" Feb 13 20:02:27.078248 systemd-networkd[1376]: cali3e4557ba720: Gained IPv6LL Feb 13 20:02:27.096771 containerd[1466]: 2025-02-13 20:02:27.063 [INFO][4667] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" Feb 13 20:02:27.096771 containerd[1466]: 2025-02-13 20:02:27.064 [INFO][4667] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" iface="eth0" netns="/var/run/netns/cni-b15e9c68-973d-4126-f478-fe5f8561e24b" Feb 13 20:02:27.096771 containerd[1466]: 2025-02-13 20:02:27.064 [INFO][4667] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" iface="eth0" netns="/var/run/netns/cni-b15e9c68-973d-4126-f478-fe5f8561e24b" Feb 13 20:02:27.096771 containerd[1466]: 2025-02-13 20:02:27.064 [INFO][4667] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" iface="eth0" netns="/var/run/netns/cni-b15e9c68-973d-4126-f478-fe5f8561e24b" Feb 13 20:02:27.096771 containerd[1466]: 2025-02-13 20:02:27.064 [INFO][4667] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" Feb 13 20:02:27.096771 containerd[1466]: 2025-02-13 20:02:27.064 [INFO][4667] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" Feb 13 20:02:27.096771 containerd[1466]: 2025-02-13 20:02:27.084 [INFO][4674] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" HandleID="k8s-pod-network.59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" Workload="localhost-k8s-calico--apiserver--7f5ffc687c--5bd4w-eth0" Feb 13 20:02:27.096771 containerd[1466]: 2025-02-13 20:02:27.084 [INFO][4674] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:02:27.096771 containerd[1466]: 2025-02-13 20:02:27.084 [INFO][4674] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:02:27.096771 containerd[1466]: 2025-02-13 20:02:27.089 [WARNING][4674] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" HandleID="k8s-pod-network.59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" Workload="localhost-k8s-calico--apiserver--7f5ffc687c--5bd4w-eth0" Feb 13 20:02:27.096771 containerd[1466]: 2025-02-13 20:02:27.089 [INFO][4674] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" HandleID="k8s-pod-network.59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" Workload="localhost-k8s-calico--apiserver--7f5ffc687c--5bd4w-eth0" Feb 13 20:02:27.096771 containerd[1466]: 2025-02-13 20:02:27.090 [INFO][4674] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:02:27.096771 containerd[1466]: 2025-02-13 20:02:27.093 [INFO][4667] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" Feb 13 20:02:27.097231 containerd[1466]: time="2025-02-13T20:02:27.096939617Z" level=info msg="TearDown network for sandbox \"59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd\" successfully" Feb 13 20:02:27.097231 containerd[1466]: time="2025-02-13T20:02:27.096984733Z" level=info msg="StopPodSandbox for \"59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd\" returns successfully" Feb 13 20:02:27.097722 containerd[1466]: time="2025-02-13T20:02:27.097679518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5ffc687c-5bd4w,Uid:ae5d8c7e-8ef1-493b-98f0-c5400cc3d726,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:02:27.126369 systemd[1]: run-netns-cni\x2db15e9c68\x2d973d\x2d4126\x2df478\x2dfe5f8561e24b.mount: Deactivated successfully. Feb 13 20:02:27.207522 systemd-networkd[1376]: calie23c97173c7: Link UP Feb 13 20:02:27.207756 systemd-networkd[1376]: calie23c97173c7: Gained carrier Feb 13 20:02:27.226134 containerd[1466]: 2025-02-13 20:02:27.141 [INFO][4682] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f5ffc687c--5bd4w-eth0 calico-apiserver-7f5ffc687c- calico-apiserver ae5d8c7e-8ef1-493b-98f0-c5400cc3d726 1008 0 2025-02-13 20:01:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f5ffc687c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f5ffc687c-5bd4w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie23c97173c7 [] []}} ContainerID="891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52" Namespace="calico-apiserver" Pod="calico-apiserver-7f5ffc687c-5bd4w" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5ffc687c--5bd4w-" Feb 13 20:02:27.226134 containerd[1466]: 2025-02-13 20:02:27.141 [INFO][4682] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52" Namespace="calico-apiserver" Pod="calico-apiserver-7f5ffc687c-5bd4w" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5ffc687c--5bd4w-eth0" Feb 13 20:02:27.226134 containerd[1466]: 2025-02-13 20:02:27.170 [INFO][4697] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52" 
HandleID="k8s-pod-network.891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52" Workload="localhost-k8s-calico--apiserver--7f5ffc687c--5bd4w-eth0" Feb 13 20:02:27.226134 containerd[1466]: 2025-02-13 20:02:27.177 [INFO][4697] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52" HandleID="k8s-pod-network.891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52" Workload="localhost-k8s-calico--apiserver--7f5ffc687c--5bd4w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ff9b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f5ffc687c-5bd4w", "timestamp":"2025-02-13 20:02:27.170418649 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:02:27.226134 containerd[1466]: 2025-02-13 20:02:27.177 [INFO][4697] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:02:27.226134 containerd[1466]: 2025-02-13 20:02:27.177 [INFO][4697] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:02:27.226134 containerd[1466]: 2025-02-13 20:02:27.178 [INFO][4697] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 20:02:27.226134 containerd[1466]: 2025-02-13 20:02:27.179 [INFO][4697] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52" host="localhost" Feb 13 20:02:27.226134 containerd[1466]: 2025-02-13 20:02:27.182 [INFO][4697] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 20:02:27.226134 containerd[1466]: 2025-02-13 20:02:27.186 [INFO][4697] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 20:02:27.226134 containerd[1466]: 2025-02-13 20:02:27.188 [INFO][4697] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 20:02:27.226134 containerd[1466]: 2025-02-13 20:02:27.190 [INFO][4697] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 20:02:27.226134 containerd[1466]: 2025-02-13 20:02:27.190 [INFO][4697] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52" host="localhost" Feb 13 20:02:27.226134 containerd[1466]: 2025-02-13 20:02:27.191 [INFO][4697] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52 Feb 13 20:02:27.226134 containerd[1466]: 2025-02-13 20:02:27.195 [INFO][4697] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52" host="localhost" Feb 13 20:02:27.226134 containerd[1466]: 2025-02-13 20:02:27.201 [INFO][4697] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52" host="localhost" Feb 13 20:02:27.226134 containerd[1466]: 2025-02-13 20:02:27.201 [INFO][4697] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] 
handle="k8s-pod-network.891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52" host="localhost" Feb 13 20:02:27.226134 containerd[1466]: 2025-02-13 20:02:27.201 [INFO][4697] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:02:27.226134 containerd[1466]: 2025-02-13 20:02:27.201 [INFO][4697] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52" HandleID="k8s-pod-network.891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52" Workload="localhost-k8s-calico--apiserver--7f5ffc687c--5bd4w-eth0" Feb 13 20:02:27.226876 containerd[1466]: 2025-02-13 20:02:27.204 [INFO][4682] cni-plugin/k8s.go 386: Populated endpoint ContainerID="891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52" Namespace="calico-apiserver" Pod="calico-apiserver-7f5ffc687c-5bd4w" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5ffc687c--5bd4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5ffc687c--5bd4w-eth0", GenerateName:"calico-apiserver-7f5ffc687c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ae5d8c7e-8ef1-493b-98f0-c5400cc3d726", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5ffc687c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f5ffc687c-5bd4w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie23c97173c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:27.226876 containerd[1466]: 2025-02-13 20:02:27.205 [INFO][4682] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52" Namespace="calico-apiserver" Pod="calico-apiserver-7f5ffc687c-5bd4w" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5ffc687c--5bd4w-eth0" Feb 13 20:02:27.226876 containerd[1466]: 2025-02-13 20:02:27.205 [INFO][4682] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie23c97173c7 ContainerID="891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52" Namespace="calico-apiserver" Pod="calico-apiserver-7f5ffc687c-5bd4w" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5ffc687c--5bd4w-eth0" Feb 13 20:02:27.226876 containerd[1466]: 2025-02-13 20:02:27.207 [INFO][4682] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52" Namespace="calico-apiserver" Pod="calico-apiserver-7f5ffc687c-5bd4w" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5ffc687c--5bd4w-eth0" Feb 13 20:02:27.226876 containerd[1466]: 2025-02-13 20:02:27.208 [INFO][4682] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52" Namespace="calico-apiserver" Pod="calico-apiserver-7f5ffc687c-5bd4w" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5ffc687c--5bd4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5ffc687c--5bd4w-eth0", GenerateName:"calico-apiserver-7f5ffc687c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ae5d8c7e-8ef1-493b-98f0-c5400cc3d726", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5ffc687c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52", Pod:"calico-apiserver-7f5ffc687c-5bd4w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie23c97173c7", MAC:"ce:65:86:63:65:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:27.226876 containerd[1466]: 2025-02-13 20:02:27.218 [INFO][4682] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52" Namespace="calico-apiserver" Pod="calico-apiserver-7f5ffc687c-5bd4w" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5ffc687c--5bd4w-eth0" Feb 13 20:02:27.263739 containerd[1466]: time="2025-02-13T20:02:27.263577599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:02:27.263739 containerd[1466]: time="2025-02-13T20:02:27.263625030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:02:27.263739 containerd[1466]: time="2025-02-13T20:02:27.263639658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:02:27.263916 containerd[1466]: time="2025-02-13T20:02:27.263712318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:02:27.290234 systemd[1]: Started cri-containerd-891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52.scope - libcontainer container 891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52. 
Feb 13 20:02:27.301586 systemd-resolved[1361]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 20:02:27.326653 containerd[1466]: time="2025-02-13T20:02:27.325448401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5ffc687c-5bd4w,Uid:ae5d8c7e-8ef1-493b-98f0-c5400cc3d726,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52\"" Feb 13 20:02:27.342311 kubelet[2639]: E0213 20:02:27.342279 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:27.343326 kubelet[2639]: E0213 20:02:27.342789 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:27.653256 systemd-networkd[1376]: cali85b986f2579: Gained IPv6LL Feb 13 20:02:28.023905 containerd[1466]: time="2025-02-13T20:02:28.023825454Z" level=info msg="StopPodSandbox for \"0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa\"" Feb 13 20:02:28.024081 containerd[1466]: time="2025-02-13T20:02:28.023865030Z" level=info msg="StopPodSandbox for \"cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869\"" Feb 13 20:02:28.247697 containerd[1466]: 2025-02-13 20:02:28.114 [INFO][4793] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" Feb 13 20:02:28.247697 containerd[1466]: 2025-02-13 20:02:28.114 [INFO][4793] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" iface="eth0" netns="/var/run/netns/cni-3b0e8235-7820-862a-3065-875ceb8a087f" Feb 13 20:02:28.247697 containerd[1466]: 2025-02-13 20:02:28.114 [INFO][4793] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" iface="eth0" netns="/var/run/netns/cni-3b0e8235-7820-862a-3065-875ceb8a087f" Feb 13 20:02:28.247697 containerd[1466]: 2025-02-13 20:02:28.115 [INFO][4793] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" iface="eth0" netns="/var/run/netns/cni-3b0e8235-7820-862a-3065-875ceb8a087f" Feb 13 20:02:28.247697 containerd[1466]: 2025-02-13 20:02:28.115 [INFO][4793] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" Feb 13 20:02:28.247697 containerd[1466]: 2025-02-13 20:02:28.115 [INFO][4793] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" Feb 13 20:02:28.247697 containerd[1466]: 2025-02-13 20:02:28.134 [INFO][4808] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" HandleID="k8s-pod-network.0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" Workload="localhost-k8s-csi--node--driver--gfdrh-eth0" Feb 13 20:02:28.247697 containerd[1466]: 2025-02-13 20:02:28.134 [INFO][4808] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 20:02:28.247697 containerd[1466]: 2025-02-13 20:02:28.134 [INFO][4808] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:02:28.247697 containerd[1466]: 2025-02-13 20:02:28.241 [WARNING][4808] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" HandleID="k8s-pod-network.0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" Workload="localhost-k8s-csi--node--driver--gfdrh-eth0" Feb 13 20:02:28.247697 containerd[1466]: 2025-02-13 20:02:28.241 [INFO][4808] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" HandleID="k8s-pod-network.0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" Workload="localhost-k8s-csi--node--driver--gfdrh-eth0" Feb 13 20:02:28.247697 containerd[1466]: 2025-02-13 20:02:28.243 [INFO][4808] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:02:28.247697 containerd[1466]: 2025-02-13 20:02:28.245 [INFO][4793] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" Feb 13 20:02:28.250330 containerd[1466]: time="2025-02-13T20:02:28.250269826Z" level=info msg="TearDown network for sandbox \"0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa\" successfully" Feb 13 20:02:28.250330 containerd[1466]: time="2025-02-13T20:02:28.250312338Z" level=info msg="StopPodSandbox for \"0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa\" returns successfully" Feb 13 20:02:28.251020 containerd[1466]: time="2025-02-13T20:02:28.250976764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gfdrh,Uid:2516eb1f-4a76-4950-92c4-3225425d63a6,Namespace:calico-system,Attempt:1,}" Feb 13 20:02:28.251042 systemd[1]: run-netns-cni\x2d3b0e8235\x2d7820\x2d862a\x2d3065\x2d875ceb8a087f.mount: Deactivated successfully. Feb 13 20:02:28.275157 containerd[1466]: 2025-02-13 20:02:28.242 [INFO][4792] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" Feb 13 20:02:28.275157 containerd[1466]: 2025-02-13 20:02:28.242 [INFO][4792] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" iface="eth0" netns="/var/run/netns/cni-0e45b6b5-d4f6-0641-df6d-43d9ada67bce" Feb 13 20:02:28.275157 containerd[1466]: 2025-02-13 20:02:28.242 [INFO][4792] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" iface="eth0" netns="/var/run/netns/cni-0e45b6b5-d4f6-0641-df6d-43d9ada67bce" Feb 13 20:02:28.275157 containerd[1466]: 2025-02-13 20:02:28.242 [INFO][4792] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" iface="eth0" netns="/var/run/netns/cni-0e45b6b5-d4f6-0641-df6d-43d9ada67bce" Feb 13 20:02:28.275157 containerd[1466]: 2025-02-13 20:02:28.243 [INFO][4792] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" Feb 13 20:02:28.275157 containerd[1466]: 2025-02-13 20:02:28.243 [INFO][4792] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" Feb 13 20:02:28.275157 containerd[1466]: 2025-02-13 20:02:28.264 [INFO][4816] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" HandleID="k8s-pod-network.cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" Workload="localhost-k8s-calico--apiserver--7f5ffc687c--lbc64-eth0" Feb 13 20:02:28.275157 containerd[1466]: 2025-02-13 20:02:28.264 [INFO][4816] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:02:28.275157 containerd[1466]: 2025-02-13 20:02:28.264 [INFO][4816] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:02:28.275157 containerd[1466]: 2025-02-13 20:02:28.268 [WARNING][4816] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" HandleID="k8s-pod-network.cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" Workload="localhost-k8s-calico--apiserver--7f5ffc687c--lbc64-eth0" Feb 13 20:02:28.275157 containerd[1466]: 2025-02-13 20:02:28.268 [INFO][4816] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" HandleID="k8s-pod-network.cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" Workload="localhost-k8s-calico--apiserver--7f5ffc687c--lbc64-eth0" Feb 13 20:02:28.275157 containerd[1466]: 2025-02-13 20:02:28.270 [INFO][4816] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:02:28.275157 containerd[1466]: 2025-02-13 20:02:28.272 [INFO][4792] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" Feb 13 20:02:28.275157 containerd[1466]: time="2025-02-13T20:02:28.275149092Z" level=info msg="TearDown network for sandbox \"cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869\" successfully" Feb 13 20:02:28.275577 containerd[1466]: time="2025-02-13T20:02:28.275170303Z" level=info msg="StopPodSandbox for \"cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869\" returns successfully" Feb 13 20:02:28.276642 containerd[1466]: time="2025-02-13T20:02:28.276582534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5ffc687c-lbc64,Uid:1fd0a0c8-d966-43b0-a129-ee14e139b86b,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:02:28.278023 systemd[1]: run-netns-cni\x2d0e45b6b5\x2dd4f6\x2d0641\x2ddf6d\x2d43d9ada67bce.mount: Deactivated successfully. 
Feb 13 20:02:28.345157 kubelet[2639]: E0213 20:02:28.345126 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:28.357238 systemd-networkd[1376]: calie23c97173c7: Gained IPv6LL Feb 13 20:02:28.503945 systemd[1]: Started sshd@16-10.0.0.119:22-10.0.0.1:34262.service - OpenSSH per-connection server daemon (10.0.0.1:34262). Feb 13 20:02:28.543063 sshd[4851]: Accepted publickey for core from 10.0.0.1 port 34262 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:02:28.546088 sshd[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:28.550652 systemd-logind[1454]: New session 17 of user core. Feb 13 20:02:28.558424 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 20:02:28.577725 systemd-networkd[1376]: calia38fc8013d8: Link UP Feb 13 20:02:28.578483 systemd-networkd[1376]: calia38fc8013d8: Gained carrier Feb 13 20:02:28.590781 containerd[1466]: 2025-02-13 20:02:28.505 [INFO][4824] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--gfdrh-eth0 csi-node-driver- calico-system 2516eb1f-4a76-4950-92c4-3225425d63a6 1021 0 2025-02-13 20:01:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-gfdrh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia38fc8013d8 [] []}} ContainerID="91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd" Namespace="calico-system" Pod="csi-node-driver-gfdrh" WorkloadEndpoint="localhost-k8s-csi--node--driver--gfdrh-" Feb 13 20:02:28.590781 containerd[1466]: 2025-02-13 20:02:28.505 [INFO][4824] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd" Namespace="calico-system" Pod="csi-node-driver-gfdrh" WorkloadEndpoint="localhost-k8s-csi--node--driver--gfdrh-eth0" Feb 13 20:02:28.590781 containerd[1466]: 2025-02-13 20:02:28.537 [INFO][4853] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd" HandleID="k8s-pod-network.91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd" Workload="localhost-k8s-csi--node--driver--gfdrh-eth0" Feb 13 20:02:28.590781 containerd[1466]: 2025-02-13 20:02:28.546 [INFO][4853] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd" HandleID="k8s-pod-network.91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd" Workload="localhost-k8s-csi--node--driver--gfdrh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309f30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-gfdrh", "timestamp":"2025-02-13 20:02:28.5374314 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:02:28.590781 containerd[1466]: 
2025-02-13 20:02:28.546 [INFO][4853] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:02:28.590781 containerd[1466]: 2025-02-13 20:02:28.546 [INFO][4853] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:02:28.590781 containerd[1466]: 2025-02-13 20:02:28.546 [INFO][4853] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 20:02:28.590781 containerd[1466]: 2025-02-13 20:02:28.548 [INFO][4853] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd" host="localhost" Feb 13 20:02:28.590781 containerd[1466]: 2025-02-13 20:02:28.551 [INFO][4853] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 20:02:28.590781 containerd[1466]: 2025-02-13 20:02:28.555 [INFO][4853] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 20:02:28.590781 containerd[1466]: 2025-02-13 20:02:28.559 [INFO][4853] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 20:02:28.590781 containerd[1466]: 2025-02-13 20:02:28.561 [INFO][4853] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 20:02:28.590781 containerd[1466]: 2025-02-13 20:02:28.561 [INFO][4853] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd" host="localhost" Feb 13 20:02:28.590781 containerd[1466]: 2025-02-13 20:02:28.563 [INFO][4853] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd Feb 13 20:02:28.590781 containerd[1466]: 2025-02-13 20:02:28.566 [INFO][4853] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd" host="localhost" Feb 13 20:02:28.590781 containerd[1466]: 2025-02-13 20:02:28.571 [INFO][4853] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd" host="localhost" Feb 13 20:02:28.590781 containerd[1466]: 2025-02-13 20:02:28.571 [INFO][4853] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd" host="localhost" Feb 13 20:02:28.590781 containerd[1466]: 2025-02-13 20:02:28.571 [INFO][4853] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:02:28.590781 containerd[1466]: 2025-02-13 20:02:28.571 [INFO][4853] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd" HandleID="k8s-pod-network.91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd" Workload="localhost-k8s-csi--node--driver--gfdrh-eth0" Feb 13 20:02:28.591562 containerd[1466]: 2025-02-13 20:02:28.574 [INFO][4824] cni-plugin/k8s.go 386: Populated endpoint ContainerID="91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd" Namespace="calico-system" Pod="csi-node-driver-gfdrh" WorkloadEndpoint="localhost-k8s-csi--node--driver--gfdrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gfdrh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2516eb1f-4a76-4950-92c4-3225425d63a6", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-gfdrh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia38fc8013d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:28.591562 containerd[1466]: 2025-02-13 20:02:28.574 [INFO][4824] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd" Namespace="calico-system" Pod="csi-node-driver-gfdrh" WorkloadEndpoint="localhost-k8s-csi--node--driver--gfdrh-eth0" Feb 13 20:02:28.591562 containerd[1466]: 2025-02-13 20:02:28.574 [INFO][4824] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia38fc8013d8 ContainerID="91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd" Namespace="calico-system" Pod="csi-node-driver-gfdrh" WorkloadEndpoint="localhost-k8s-csi--node--driver--gfdrh-eth0" Feb 13 20:02:28.591562 containerd[1466]: 2025-02-13 20:02:28.578 [INFO][4824] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd" Namespace="calico-system" Pod="csi-node-driver-gfdrh" WorkloadEndpoint="localhost-k8s-csi--node--driver--gfdrh-eth0" Feb 13 20:02:28.591562 containerd[1466]: 2025-02-13 20:02:28.578 [INFO][4824] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd" Namespace="calico-system" Pod="csi-node-driver-gfdrh" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--gfdrh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gfdrh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2516eb1f-4a76-4950-92c4-3225425d63a6", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd", Pod:"csi-node-driver-gfdrh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia38fc8013d8", MAC:"02:b7:22:02:f8:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:28.591562 containerd[1466]: 2025-02-13 20:02:28.588 [INFO][4824] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd" Namespace="calico-system" Pod="csi-node-driver-gfdrh" WorkloadEndpoint="localhost-k8s-csi--node--driver--gfdrh-eth0" Feb 13 20:02:28.610709 systemd-networkd[1376]: cali43ba2bdc287: Link UP Feb 13 20:02:28.612293 systemd-networkd[1376]: cali43ba2bdc287: Gained carrier Feb 13 20:02:28.614063 containerd[1466]: time="2025-02-13T20:02:28.613027748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:02:28.614200 containerd[1466]: time="2025-02-13T20:02:28.613708044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:02:28.614200 containerd[1466]: time="2025-02-13T20:02:28.613722712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:02:28.614200 containerd[1466]: time="2025-02-13T20:02:28.614173798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:02:28.629675 containerd[1466]: 2025-02-13 20:02:28.518 [INFO][4836] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f5ffc687c--lbc64-eth0 calico-apiserver-7f5ffc687c- calico-apiserver 1fd0a0c8-d966-43b0-a129-ee14e139b86b 1022 0 2025-02-13 20:01:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f5ffc687c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f5ffc687c-lbc64 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali43ba2bdc287 [] []}} ContainerID="de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242" Namespace="calico-apiserver" Pod="calico-apiserver-7f5ffc687c-lbc64" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5ffc687c--lbc64-" Feb 13 20:02:28.629675 containerd[1466]: 2025-02-13 20:02:28.518 [INFO][4836] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242" Namespace="calico-apiserver" Pod="calico-apiserver-7f5ffc687c-lbc64" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5ffc687c--lbc64-eth0" Feb 13 20:02:28.629675 containerd[1466]: 2025-02-13 20:02:28.555 [INFO][4860] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242" HandleID="k8s-pod-network.de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242" Workload="localhost-k8s-calico--apiserver--7f5ffc687c--lbc64-eth0" Feb 13 20:02:28.629675 containerd[1466]: 2025-02-13 20:02:28.563 [INFO][4860] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242" HandleID="k8s-pod-network.de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242" Workload="localhost-k8s-calico--apiserver--7f5ffc687c--lbc64-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005b7a20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f5ffc687c-lbc64", "timestamp":"2025-02-13 20:02:28.555016792 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:02:28.629675 containerd[1466]: 2025-02-13 20:02:28.563 [INFO][4860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:02:28.629675 containerd[1466]: 2025-02-13 20:02:28.571 [INFO][4860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:02:28.629675 containerd[1466]: 2025-02-13 20:02:28.571 [INFO][4860] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 20:02:28.629675 containerd[1466]: 2025-02-13 20:02:28.573 [INFO][4860] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242" host="localhost" Feb 13 20:02:28.629675 containerd[1466]: 2025-02-13 20:02:28.578 [INFO][4860] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 20:02:28.629675 containerd[1466]: 2025-02-13 20:02:28.583 [INFO][4860] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 20:02:28.629675 containerd[1466]: 2025-02-13 20:02:28.585 [INFO][4860] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 20:02:28.629675 containerd[1466]: 2025-02-13 20:02:28.588 [INFO][4860] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 20:02:28.629675 containerd[1466]: 2025-02-13 20:02:28.588 [INFO][4860] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242" host="localhost" Feb 13 20:02:28.629675 containerd[1466]: 2025-02-13 20:02:28.590 [INFO][4860] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242 Feb 13 20:02:28.629675 containerd[1466]: 2025-02-13 20:02:28.595 [INFO][4860] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242" host="localhost" Feb 13 20:02:28.629675 containerd[1466]: 2025-02-13 20:02:28.600 [INFO][4860] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242" host="localhost" Feb 13 20:02:28.629675 containerd[1466]: 2025-02-13 20:02:28.601 [INFO][4860] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242" host="localhost" Feb 13 20:02:28.629675 containerd[1466]: 2025-02-13 20:02:28.601 [INFO][4860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:02:28.629675 containerd[1466]: 2025-02-13 20:02:28.601 [INFO][4860] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242" HandleID="k8s-pod-network.de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242" Workload="localhost-k8s-calico--apiserver--7f5ffc687c--lbc64-eth0" Feb 13 20:02:28.630493 containerd[1466]: 2025-02-13 20:02:28.605 [INFO][4836] cni-plugin/k8s.go 386: Populated endpoint ContainerID="de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242" Namespace="calico-apiserver" Pod="calico-apiserver-7f5ffc687c-lbc64" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5ffc687c--lbc64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5ffc687c--lbc64-eth0", GenerateName:"calico-apiserver-7f5ffc687c-", Namespace:"calico-apiserver", SelfLink:"", UID:"1fd0a0c8-d966-43b0-a129-ee14e139b86b", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5ffc687c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f5ffc687c-lbc64", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali43ba2bdc287", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:28.630493 containerd[1466]: 2025-02-13 20:02:28.605 [INFO][4836] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242" Namespace="calico-apiserver" Pod="calico-apiserver-7f5ffc687c-lbc64" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5ffc687c--lbc64-eth0" Feb 13 20:02:28.630493 containerd[1466]: 2025-02-13 20:02:28.605 [INFO][4836] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43ba2bdc287 ContainerID="de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242" Namespace="calico-apiserver" Pod="calico-apiserver-7f5ffc687c-lbc64" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5ffc687c--lbc64-eth0" Feb 13 20:02:28.630493 containerd[1466]: 2025-02-13 20:02:28.611 [INFO][4836] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242" Namespace="calico-apiserver" Pod="calico-apiserver-7f5ffc687c-lbc64" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5ffc687c--lbc64-eth0" Feb 13 20:02:28.630493 containerd[1466]: 2025-02-13 20:02:28.611 [INFO][4836] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242" Namespace="calico-apiserver" Pod="calico-apiserver-7f5ffc687c-lbc64" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5ffc687c--lbc64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5ffc687c--lbc64-eth0", GenerateName:"calico-apiserver-7f5ffc687c-", Namespace:"calico-apiserver", SelfLink:"", UID:"1fd0a0c8-d966-43b0-a129-ee14e139b86b", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5ffc687c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242", Pod:"calico-apiserver-7f5ffc687c-lbc64", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali43ba2bdc287", MAC:"e2:57:e3:d4:eb:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:28.630493 containerd[1466]: 2025-02-13 20:02:28.626 [INFO][4836] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242" Namespace="calico-apiserver" Pod="calico-apiserver-7f5ffc687c-lbc64" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f5ffc687c--lbc64-eth0" Feb 13 20:02:28.643256 systemd[1]: Started cri-containerd-91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd.scope - libcontainer container 91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd. Feb 13 20:02:28.656654 containerd[1466]: time="2025-02-13T20:02:28.656566719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:02:28.657027 containerd[1466]: time="2025-02-13T20:02:28.656941739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:02:28.657077 containerd[1466]: time="2025-02-13T20:02:28.657050748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:02:28.658307 systemd-resolved[1361]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 20:02:28.658710 containerd[1466]: time="2025-02-13T20:02:28.657959102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:02:28.676473 containerd[1466]: time="2025-02-13T20:02:28.676273715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gfdrh,Uid:2516eb1f-4a76-4950-92c4-3225425d63a6,Namespace:calico-system,Attempt:1,} returns sandbox id \"91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd\"" Feb 13 20:02:28.681285 systemd[1]: Started cri-containerd-de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242.scope - libcontainer container de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242. Feb 13 20:02:28.695546 systemd-resolved[1361]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 20:02:28.712801 sshd[4851]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:28.718296 systemd[1]: sshd@16-10.0.0.119:22-10.0.0.1:34262.service: Deactivated successfully. Feb 13 20:02:28.720745 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 20:02:28.721653 systemd-logind[1454]: Session 17 logged out. Waiting for processes to exit. Feb 13 20:02:28.722513 containerd[1466]: time="2025-02-13T20:02:28.722451591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f5ffc687c-lbc64,Uid:1fd0a0c8-d966-43b0-a129-ee14e139b86b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242\"" Feb 13 20:02:28.723670 systemd-logind[1454]: Removed session 17. Feb 13 20:02:29.765285 systemd-networkd[1376]: calia38fc8013d8: Gained IPv6LL Feb 13 20:02:29.765644 systemd-networkd[1376]: cali43ba2bdc287: Gained IPv6LL Feb 13 20:02:30.426198 containerd[1466]: time="2025-02-13T20:02:30.426145277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:02:30.452744 containerd[1466]: time="2025-02-13T20:02:30.452662737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 20:02:30.484814 containerd[1466]: time="2025-02-13T20:02:30.484766337Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:02:30.507320 containerd[1466]: time="2025-02-13T20:02:30.507278978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:02:30.508044 containerd[1466]: time="2025-02-13T20:02:30.507985833Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.974723222s" Feb 13 20:02:30.508044 containerd[1466]: time="2025-02-13T20:02:30.508026922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 20:02:30.509226 containerd[1466]: time="2025-02-13T20:02:30.508876100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 
13 20:02:30.516843 containerd[1466]: time="2025-02-13T20:02:30.516801518Z" level=info msg="CreateContainer within sandbox \"a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 20:02:30.531708 containerd[1466]: time="2025-02-13T20:02:30.531653338Z" level=info msg="CreateContainer within sandbox \"a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b27430fc5ae85338565b77060e7af14d3e03fb63c781246844838a4f9f60a97f\"" Feb 13 20:02:30.532228 containerd[1466]: time="2025-02-13T20:02:30.532204504Z" level=info msg="StartContainer for \"b27430fc5ae85338565b77060e7af14d3e03fb63c781246844838a4f9f60a97f\"" Feb 13 20:02:30.560272 systemd[1]: Started cri-containerd-b27430fc5ae85338565b77060e7af14d3e03fb63c781246844838a4f9f60a97f.scope - libcontainer container b27430fc5ae85338565b77060e7af14d3e03fb63c781246844838a4f9f60a97f. Feb 13 20:02:30.606861 containerd[1466]: time="2025-02-13T20:02:30.606793164Z" level=info msg="StartContainer for \"b27430fc5ae85338565b77060e7af14d3e03fb63c781246844838a4f9f60a97f\" returns successfully" Feb 13 20:02:31.405785 kubelet[2639]: I0213 20:02:31.405695 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7866456b95-h7zp8" podStartSLOduration=33.429685055 podStartE2EDuration="37.405675429s" podCreationTimestamp="2025-02-13 20:01:54 +0000 UTC" firstStartedPulling="2025-02-13 20:02:26.532733305 +0000 UTC m=+53.591470216" lastFinishedPulling="2025-02-13 20:02:30.508723678 +0000 UTC m=+57.567460590" observedRunningTime="2025-02-13 20:02:31.366787722 +0000 UTC m=+58.425524643" watchObservedRunningTime="2025-02-13 20:02:31.405675429 +0000 UTC m=+58.464412340" Feb 13 20:02:33.012683 containerd[1466]: time="2025-02-13T20:02:33.012646021Z" level=info msg="StopPodSandbox for \"e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf\"" Feb 13 20:02:33.078618 containerd[1466]: 2025-02-13 20:02:33.047 [WARNING][5076] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--pnbb8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3184b51f-d611-4fe4-a9e8-2390a1c90a5a", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca", Pod:"coredns-7db6d8ff4d-pnbb8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e4557ba720", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:33.078618 containerd[1466]: 2025-02-13 20:02:33.047 [INFO][5076] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" Feb 13 20:02:33.078618 containerd[1466]: 2025-02-13 20:02:33.047 [INFO][5076] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" iface="eth0" netns="" Feb 13 20:02:33.078618 containerd[1466]: 2025-02-13 20:02:33.047 [INFO][5076] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" Feb 13 20:02:33.078618 containerd[1466]: 2025-02-13 20:02:33.047 [INFO][5076] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" Feb 13 20:02:33.078618 containerd[1466]: 2025-02-13 20:02:33.066 [INFO][5086] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" HandleID="k8s-pod-network.e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" Workload="localhost-k8s-coredns--7db6d8ff4d--pnbb8-eth0" Feb 13 20:02:33.078618 containerd[1466]: 2025-02-13 20:02:33.066 [INFO][5086] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:02:33.078618 containerd[1466]: 2025-02-13 20:02:33.066 [INFO][5086] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:02:33.078618 containerd[1466]: 2025-02-13 20:02:33.071 [WARNING][5086] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" HandleID="k8s-pod-network.e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" Workload="localhost-k8s-coredns--7db6d8ff4d--pnbb8-eth0" Feb 13 20:02:33.078618 containerd[1466]: 2025-02-13 20:02:33.071 [INFO][5086] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" HandleID="k8s-pod-network.e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" Workload="localhost-k8s-coredns--7db6d8ff4d--pnbb8-eth0" Feb 13 20:02:33.078618 containerd[1466]: 2025-02-13 20:02:33.073 [INFO][5086] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:02:33.078618 containerd[1466]: 2025-02-13 20:02:33.075 [INFO][5076] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" Feb 13 20:02:33.079063 containerd[1466]: time="2025-02-13T20:02:33.078643414Z" level=info msg="TearDown network for sandbox \"e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf\" successfully" Feb 13 20:02:33.079063 containerd[1466]: time="2025-02-13T20:02:33.078678712Z" level=info msg="StopPodSandbox for \"e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf\" returns successfully" Feb 13 20:02:33.086241 containerd[1466]: time="2025-02-13T20:02:33.086167408Z" level=info msg="RemovePodSandbox for \"e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf\"" Feb 13 20:02:33.088654 containerd[1466]: time="2025-02-13T20:02:33.088598633Z" level=info msg="Forcibly stopping sandbox \"e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf\"" Feb 13 20:02:33.157403 containerd[1466]: 2025-02-13 20:02:33.121 [WARNING][5108] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--pnbb8-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3184b51f-d611-4fe4-a9e8-2390a1c90a5a", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b32cb2936b9d21927e594f62c8b8a1f9836e235c62cfaacca83119834429bfca", Pod:"coredns-7db6d8ff4d-pnbb8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e4557ba720", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:33.157403 containerd[1466]: 2025-02-13 20:02:33.122 [INFO][5108] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" Feb 13 20:02:33.157403 containerd[1466]: 2025-02-13 20:02:33.122 [INFO][5108] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" iface="eth0" netns="" Feb 13 20:02:33.157403 containerd[1466]: 2025-02-13 20:02:33.122 [INFO][5108] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" Feb 13 20:02:33.157403 containerd[1466]: 2025-02-13 20:02:33.122 [INFO][5108] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" Feb 13 20:02:33.157403 containerd[1466]: 2025-02-13 20:02:33.147 [INFO][5116] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" HandleID="k8s-pod-network.e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" Workload="localhost-k8s-coredns--7db6d8ff4d--pnbb8-eth0" Feb 13 20:02:33.157403 containerd[1466]: 2025-02-13 20:02:33.147 [INFO][5116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:02:33.157403 containerd[1466]: 2025-02-13 20:02:33.147 [INFO][5116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:02:33.157403 containerd[1466]: 2025-02-13 20:02:33.151 [WARNING][5116] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" HandleID="k8s-pod-network.e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" Workload="localhost-k8s-coredns--7db6d8ff4d--pnbb8-eth0" Feb 13 20:02:33.157403 containerd[1466]: 2025-02-13 20:02:33.151 [INFO][5116] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" HandleID="k8s-pod-network.e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" Workload="localhost-k8s-coredns--7db6d8ff4d--pnbb8-eth0" Feb 13 20:02:33.157403 containerd[1466]: 2025-02-13 20:02:33.153 [INFO][5116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:02:33.157403 containerd[1466]: 2025-02-13 20:02:33.155 [INFO][5108] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf" Feb 13 20:02:33.157937 containerd[1466]: time="2025-02-13T20:02:33.157455539Z" level=info msg="TearDown network for sandbox \"e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf\" successfully" Feb 13 20:02:33.167067 containerd[1466]: time="2025-02-13T20:02:33.167002757Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:02:33.167226 containerd[1466]: time="2025-02-13T20:02:33.167111464Z" level=info msg="RemovePodSandbox \"e8b953241401235f719bedde60fb6596d6c72a2ca7273057c30b69987eae17bf\" returns successfully" Feb 13 20:02:33.167685 containerd[1466]: time="2025-02-13T20:02:33.167652089Z" level=info msg="StopPodSandbox for \"1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893\"" Feb 13 20:02:33.236760 containerd[1466]: 2025-02-13 20:02:33.204 [WARNING][5140] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--x9rtf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9053966c-63f0-49bc-a5f7-8526a79f7772", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e", Pod:"coredns-7db6d8ff4d-x9rtf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f4ccf31462", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:33.236760 containerd[1466]: 2025-02-13 20:02:33.204 [INFO][5140] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" Feb 13 20:02:33.236760 containerd[1466]: 2025-02-13 20:02:33.204 [INFO][5140] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" iface="eth0" netns="" Feb 13 20:02:33.236760 containerd[1466]: 2025-02-13 20:02:33.204 [INFO][5140] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" Feb 13 20:02:33.236760 containerd[1466]: 2025-02-13 20:02:33.204 [INFO][5140] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" Feb 13 20:02:33.236760 containerd[1466]: 2025-02-13 20:02:33.224 [INFO][5148] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" HandleID="k8s-pod-network.1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" Workload="localhost-k8s-coredns--7db6d8ff4d--x9rtf-eth0" Feb 13 20:02:33.236760 containerd[1466]: 2025-02-13 20:02:33.224 [INFO][5148] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:02:33.236760 containerd[1466]: 2025-02-13 20:02:33.224 [INFO][5148] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:02:33.236760 containerd[1466]: 2025-02-13 20:02:33.230 [WARNING][5148] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" HandleID="k8s-pod-network.1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" Workload="localhost-k8s-coredns--7db6d8ff4d--x9rtf-eth0" Feb 13 20:02:33.236760 containerd[1466]: 2025-02-13 20:02:33.230 [INFO][5148] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" HandleID="k8s-pod-network.1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" Workload="localhost-k8s-coredns--7db6d8ff4d--x9rtf-eth0" Feb 13 20:02:33.236760 containerd[1466]: 2025-02-13 20:02:33.231 [INFO][5148] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:02:33.236760 containerd[1466]: 2025-02-13 20:02:33.234 [INFO][5140] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" Feb 13 20:02:33.237266 containerd[1466]: time="2025-02-13T20:02:33.236797599Z" level=info msg="TearDown network for sandbox \"1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893\" successfully" Feb 13 20:02:33.237266 containerd[1466]: time="2025-02-13T20:02:33.236823939Z" level=info msg="StopPodSandbox for \"1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893\" returns successfully" Feb 13 20:02:33.237969 containerd[1466]: time="2025-02-13T20:02:33.237933494Z" level=info msg="RemovePodSandbox for \"1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893\"" Feb 13 20:02:33.238022 containerd[1466]: time="2025-02-13T20:02:33.237981165Z" level=info msg="Forcibly stopping sandbox \"1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893\"" Feb 13 20:02:33.302564 containerd[1466]: 2025-02-13 20:02:33.271 [WARNING][5170] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--x9rtf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9053966c-63f0-49bc-a5f7-8526a79f7772", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5536b3c59f95d8d8ccc98530e718f8a8a4dbcc77007dc671b04ebc4de3cdb03e", Pod:"coredns-7db6d8ff4d-x9rtf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f4ccf31462", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:33.302564 containerd[1466]: 2025-02-13 20:02:33.272 [INFO][5170] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" Feb 13 20:02:33.302564 containerd[1466]: 2025-02-13 20:02:33.272 [INFO][5170] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" iface="eth0" netns="" Feb 13 20:02:33.302564 containerd[1466]: 2025-02-13 20:02:33.272 [INFO][5170] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" Feb 13 20:02:33.302564 containerd[1466]: 2025-02-13 20:02:33.272 [INFO][5170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" Feb 13 20:02:33.302564 containerd[1466]: 2025-02-13 20:02:33.290 [INFO][5177] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" HandleID="k8s-pod-network.1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" Workload="localhost-k8s-coredns--7db6d8ff4d--x9rtf-eth0" Feb 13 20:02:33.302564 containerd[1466]: 2025-02-13 20:02:33.290 [INFO][5177] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:02:33.302564 containerd[1466]: 2025-02-13 20:02:33.291 [INFO][5177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:02:33.302564 containerd[1466]: 2025-02-13 20:02:33.296 [WARNING][5177] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" HandleID="k8s-pod-network.1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" Workload="localhost-k8s-coredns--7db6d8ff4d--x9rtf-eth0" Feb 13 20:02:33.302564 containerd[1466]: 2025-02-13 20:02:33.296 [INFO][5177] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" HandleID="k8s-pod-network.1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" Workload="localhost-k8s-coredns--7db6d8ff4d--x9rtf-eth0" Feb 13 20:02:33.302564 containerd[1466]: 2025-02-13 20:02:33.297 [INFO][5177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:02:33.302564 containerd[1466]: 2025-02-13 20:02:33.300 [INFO][5170] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893" Feb 13 20:02:33.303047 containerd[1466]: time="2025-02-13T20:02:33.302603234Z" level=info msg="TearDown network for sandbox \"1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893\" successfully" Feb 13 20:02:33.307405 containerd[1466]: time="2025-02-13T20:02:33.307352747Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:02:33.307613 containerd[1466]: time="2025-02-13T20:02:33.307432330Z" level=info msg="RemovePodSandbox \"1acd5ea0d8e9ef33f3b362364a6e28a23fc4d29e40cb1bc5b473988a38e68893\" returns successfully" Feb 13 20:02:33.308063 containerd[1466]: time="2025-02-13T20:02:33.308022960Z" level=info msg="StopPodSandbox for \"59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd\"" Feb 13 20:02:33.377703 containerd[1466]: 2025-02-13 20:02:33.347 [WARNING][5200] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5ffc687c--5bd4w-eth0", GenerateName:"calico-apiserver-7f5ffc687c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ae5d8c7e-8ef1-493b-98f0-c5400cc3d726", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5ffc687c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52", Pod:"calico-apiserver-7f5ffc687c-5bd4w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie23c97173c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:33.377703 containerd[1466]: 2025-02-13 20:02:33.347 [INFO][5200] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" Feb 13 20:02:33.377703 containerd[1466]: 2025-02-13 20:02:33.347 [INFO][5200] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" iface="eth0" netns="" Feb 13 20:02:33.377703 containerd[1466]: 2025-02-13 20:02:33.347 [INFO][5200] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" Feb 13 20:02:33.377703 containerd[1466]: 2025-02-13 20:02:33.347 [INFO][5200] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" Feb 13 20:02:33.377703 containerd[1466]: 2025-02-13 20:02:33.367 [INFO][5207] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" HandleID="k8s-pod-network.59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" Workload="localhost-k8s-calico--apiserver--7f5ffc687c--5bd4w-eth0" Feb 13 20:02:33.377703 containerd[1466]: 2025-02-13 20:02:33.367 [INFO][5207] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:02:33.377703 containerd[1466]: 2025-02-13 20:02:33.367 [INFO][5207] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:02:33.377703 containerd[1466]: 2025-02-13 20:02:33.372 [WARNING][5207] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" HandleID="k8s-pod-network.59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" Workload="localhost-k8s-calico--apiserver--7f5ffc687c--5bd4w-eth0" Feb 13 20:02:33.377703 containerd[1466]: 2025-02-13 20:02:33.372 [INFO][5207] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" HandleID="k8s-pod-network.59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" Workload="localhost-k8s-calico--apiserver--7f5ffc687c--5bd4w-eth0" Feb 13 20:02:33.377703 containerd[1466]: 2025-02-13 20:02:33.373 [INFO][5207] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:02:33.377703 containerd[1466]: 2025-02-13 20:02:33.375 [INFO][5200] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" Feb 13 20:02:33.378123 containerd[1466]: time="2025-02-13T20:02:33.377742319Z" level=info msg="TearDown network for sandbox \"59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd\" successfully" Feb 13 20:02:33.378123 containerd[1466]: time="2025-02-13T20:02:33.377767737Z" level=info msg="StopPodSandbox for \"59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd\" returns successfully" Feb 13 20:02:33.378310 containerd[1466]: time="2025-02-13T20:02:33.378278325Z" level=info msg="RemovePodSandbox for \"59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd\"" Feb 13 20:02:33.378310 containerd[1466]: time="2025-02-13T20:02:33.378313872Z" level=info msg="Forcibly stopping sandbox \"59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd\"" Feb 13 20:02:33.439592 containerd[1466]: 2025-02-13 20:02:33.409 [WARNING][5229] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5ffc687c--5bd4w-eth0", GenerateName:"calico-apiserver-7f5ffc687c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ae5d8c7e-8ef1-493b-98f0-c5400cc3d726", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5ffc687c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52", Pod:"calico-apiserver-7f5ffc687c-5bd4w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie23c97173c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:33.439592 containerd[1466]: 2025-02-13 20:02:33.410 [INFO][5229] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" Feb 13 20:02:33.439592 containerd[1466]: 2025-02-13 20:02:33.410 [INFO][5229] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" iface="eth0" netns="" Feb 13 20:02:33.439592 containerd[1466]: 2025-02-13 20:02:33.410 [INFO][5229] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" Feb 13 20:02:33.439592 containerd[1466]: 2025-02-13 20:02:33.410 [INFO][5229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" Feb 13 20:02:33.439592 containerd[1466]: 2025-02-13 20:02:33.428 [INFO][5237] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" HandleID="k8s-pod-network.59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" Workload="localhost-k8s-calico--apiserver--7f5ffc687c--5bd4w-eth0" Feb 13 20:02:33.439592 containerd[1466]: 2025-02-13 20:02:33.429 [INFO][5237] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:02:33.439592 containerd[1466]: 2025-02-13 20:02:33.429 [INFO][5237] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:02:33.439592 containerd[1466]: 2025-02-13 20:02:33.433 [WARNING][5237] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" HandleID="k8s-pod-network.59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" Workload="localhost-k8s-calico--apiserver--7f5ffc687c--5bd4w-eth0" Feb 13 20:02:33.439592 containerd[1466]: 2025-02-13 20:02:33.433 [INFO][5237] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" HandleID="k8s-pod-network.59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" Workload="localhost-k8s-calico--apiserver--7f5ffc687c--5bd4w-eth0" Feb 13 20:02:33.439592 containerd[1466]: 2025-02-13 20:02:33.435 [INFO][5237] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:02:33.439592 containerd[1466]: 2025-02-13 20:02:33.437 [INFO][5229] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd" Feb 13 20:02:33.440035 containerd[1466]: time="2025-02-13T20:02:33.439624483Z" level=info msg="TearDown network for sandbox \"59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd\" successfully" Feb 13 20:02:33.460402 containerd[1466]: time="2025-02-13T20:02:33.460354949Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:02:33.460402 containerd[1466]: time="2025-02-13T20:02:33.460413200Z" level=info msg="RemovePodSandbox \"59c7872c6bc4b8511259558ae5011f99c36f113ce76df19db3317d5aad2a68cd\" returns successfully" Feb 13 20:02:33.460919 containerd[1466]: time="2025-02-13T20:02:33.460886397Z" level=info msg="StopPodSandbox for \"cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869\"" Feb 13 20:02:33.524960 containerd[1466]: 2025-02-13 20:02:33.495 [WARNING][5260] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5ffc687c--lbc64-eth0", GenerateName:"calico-apiserver-7f5ffc687c-", Namespace:"calico-apiserver", SelfLink:"", UID:"1fd0a0c8-d966-43b0-a129-ee14e139b86b", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5ffc687c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242", Pod:"calico-apiserver-7f5ffc687c-lbc64", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali43ba2bdc287", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:33.524960 containerd[1466]: 2025-02-13 20:02:33.495 [INFO][5260] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" Feb 13 20:02:33.524960 containerd[1466]: 2025-02-13 20:02:33.495 [INFO][5260] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" iface="eth0" netns="" Feb 13 20:02:33.524960 containerd[1466]: 2025-02-13 20:02:33.495 [INFO][5260] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" Feb 13 20:02:33.524960 containerd[1466]: 2025-02-13 20:02:33.495 [INFO][5260] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" Feb 13 20:02:33.524960 containerd[1466]: 2025-02-13 20:02:33.514 [INFO][5267] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" HandleID="k8s-pod-network.cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" Workload="localhost-k8s-calico--apiserver--7f5ffc687c--lbc64-eth0" Feb 13 20:02:33.524960 containerd[1466]: 2025-02-13 20:02:33.514 [INFO][5267] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:02:33.524960 containerd[1466]: 2025-02-13 20:02:33.514 [INFO][5267] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:02:33.524960 containerd[1466]: 2025-02-13 20:02:33.519 [WARNING][5267] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" HandleID="k8s-pod-network.cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" Workload="localhost-k8s-calico--apiserver--7f5ffc687c--lbc64-eth0" Feb 13 20:02:33.524960 containerd[1466]: 2025-02-13 20:02:33.519 [INFO][5267] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" HandleID="k8s-pod-network.cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" Workload="localhost-k8s-calico--apiserver--7f5ffc687c--lbc64-eth0" Feb 13 20:02:33.524960 containerd[1466]: 2025-02-13 20:02:33.520 [INFO][5267] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:02:33.524960 containerd[1466]: 2025-02-13 20:02:33.522 [INFO][5260] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" Feb 13 20:02:33.525403 containerd[1466]: time="2025-02-13T20:02:33.524997258Z" level=info msg="TearDown network for sandbox \"cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869\" successfully" Feb 13 20:02:33.525403 containerd[1466]: time="2025-02-13T20:02:33.525025071Z" level=info msg="StopPodSandbox for \"cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869\" returns successfully" Feb 13 20:02:33.525606 containerd[1466]: time="2025-02-13T20:02:33.525574924Z" level=info msg="RemovePodSandbox for \"cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869\"" Feb 13 20:02:33.533434 containerd[1466]: time="2025-02-13T20:02:33.533399973Z" level=info msg="Forcibly stopping sandbox \"cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869\"" Feb 13 20:02:33.598134 containerd[1466]: 2025-02-13 20:02:33.565 [WARNING][5289] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f5ffc687c--lbc64-eth0", GenerateName:"calico-apiserver-7f5ffc687c-", Namespace:"calico-apiserver", SelfLink:"", UID:"1fd0a0c8-d966-43b0-a129-ee14e139b86b", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f5ffc687c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242", Pod:"calico-apiserver-7f5ffc687c-lbc64", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali43ba2bdc287", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:33.598134 containerd[1466]: 2025-02-13 20:02:33.565 [INFO][5289] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" Feb 13 20:02:33.598134 containerd[1466]: 2025-02-13 20:02:33.565 [INFO][5289] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" iface="eth0" netns="" Feb 13 20:02:33.598134 containerd[1466]: 2025-02-13 20:02:33.565 [INFO][5289] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" Feb 13 20:02:33.598134 containerd[1466]: 2025-02-13 20:02:33.565 [INFO][5289] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" Feb 13 20:02:33.598134 containerd[1466]: 2025-02-13 20:02:33.587 [INFO][5296] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" HandleID="k8s-pod-network.cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" Workload="localhost-k8s-calico--apiserver--7f5ffc687c--lbc64-eth0" Feb 13 20:02:33.598134 containerd[1466]: 2025-02-13 20:02:33.587 [INFO][5296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:02:33.598134 containerd[1466]: 2025-02-13 20:02:33.587 [INFO][5296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:02:33.598134 containerd[1466]: 2025-02-13 20:02:33.592 [WARNING][5296] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" HandleID="k8s-pod-network.cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" Workload="localhost-k8s-calico--apiserver--7f5ffc687c--lbc64-eth0" Feb 13 20:02:33.598134 containerd[1466]: 2025-02-13 20:02:33.592 [INFO][5296] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" HandleID="k8s-pod-network.cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" Workload="localhost-k8s-calico--apiserver--7f5ffc687c--lbc64-eth0" Feb 13 20:02:33.598134 containerd[1466]: 2025-02-13 20:02:33.593 [INFO][5296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:02:33.598134 containerd[1466]: 2025-02-13 20:02:33.595 [INFO][5289] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869" Feb 13 20:02:33.598134 containerd[1466]: time="2025-02-13T20:02:33.598082249Z" level=info msg="TearDown network for sandbox \"cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869\" successfully" Feb 13 20:02:33.724064 containerd[1466]: time="2025-02-13T20:02:33.724016863Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:02:33.724189 containerd[1466]: time="2025-02-13T20:02:33.724108809Z" level=info msg="RemovePodSandbox \"cf28440636f160df6519eb06282916d0d77881254e495db69b6c7dbdf037d869\" returns successfully" Feb 13 20:02:33.724638 containerd[1466]: time="2025-02-13T20:02:33.724584851Z" level=info msg="StopPodSandbox for \"0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa\"" Feb 13 20:02:33.725500 systemd[1]: Started sshd@17-10.0.0.119:22-10.0.0.1:48574.service - OpenSSH per-connection server daemon (10.0.0.1:48574). Feb 13 20:02:33.772310 sshd[5308]: Accepted publickey for core from 10.0.0.1 port 48574 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:02:33.774315 sshd[5308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:33.778734 systemd-logind[1454]: New session 18 of user core. Feb 13 20:02:33.785285 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 20:02:33.789864 containerd[1466]: 2025-02-13 20:02:33.758 [WARNING][5324] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gfdrh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2516eb1f-4a76-4950-92c4-3225425d63a6", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd", Pod:"csi-node-driver-gfdrh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia38fc8013d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:33.789864 containerd[1466]: 2025-02-13 20:02:33.759 [INFO][5324] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" Feb 13 20:02:33.789864 containerd[1466]: 2025-02-13 20:02:33.759 [INFO][5324] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" iface="eth0" netns="" Feb 13 20:02:33.789864 containerd[1466]: 2025-02-13 20:02:33.759 [INFO][5324] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" Feb 13 20:02:33.789864 containerd[1466]: 2025-02-13 20:02:33.759 [INFO][5324] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" Feb 13 20:02:33.789864 containerd[1466]: 2025-02-13 20:02:33.778 [INFO][5333] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" HandleID="k8s-pod-network.0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" Workload="localhost-k8s-csi--node--driver--gfdrh-eth0" Feb 13 20:02:33.789864 containerd[1466]: 2025-02-13 20:02:33.779 [INFO][5333] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:02:33.789864 containerd[1466]: 2025-02-13 20:02:33.779 [INFO][5333] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:02:33.789864 containerd[1466]: 2025-02-13 20:02:33.784 [WARNING][5333] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" HandleID="k8s-pod-network.0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" Workload="localhost-k8s-csi--node--driver--gfdrh-eth0" Feb 13 20:02:33.789864 containerd[1466]: 2025-02-13 20:02:33.784 [INFO][5333] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" HandleID="k8s-pod-network.0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" Workload="localhost-k8s-csi--node--driver--gfdrh-eth0" Feb 13 20:02:33.789864 containerd[1466]: 2025-02-13 20:02:33.785 [INFO][5333] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:02:33.789864 containerd[1466]: 2025-02-13 20:02:33.787 [INFO][5324] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" Feb 13 20:02:33.790263 containerd[1466]: time="2025-02-13T20:02:33.789898846Z" level=info msg="TearDown network for sandbox \"0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa\" successfully" Feb 13 20:02:33.790263 containerd[1466]: time="2025-02-13T20:02:33.789925817Z" level=info msg="StopPodSandbox for \"0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa\" returns successfully" Feb 13 20:02:33.790520 containerd[1466]: time="2025-02-13T20:02:33.790459018Z" level=info msg="RemovePodSandbox for \"0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa\"" Feb 13 20:02:33.790520 containerd[1466]: time="2025-02-13T20:02:33.790493924Z" level=info msg="Forcibly stopping sandbox \"0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa\"" Feb 13 20:02:33.862625 containerd[1466]: 2025-02-13 20:02:33.826 [WARNING][5358] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gfdrh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2516eb1f-4a76-4950-92c4-3225425d63a6", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd", Pod:"csi-node-driver-gfdrh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia38fc8013d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:33.862625 containerd[1466]: 2025-02-13 20:02:33.826 [INFO][5358] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" Feb 13 20:02:33.862625 containerd[1466]: 2025-02-13 20:02:33.826 [INFO][5358] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" iface="eth0" netns="" Feb 13 20:02:33.862625 containerd[1466]: 2025-02-13 20:02:33.826 [INFO][5358] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" Feb 13 20:02:33.862625 containerd[1466]: 2025-02-13 20:02:33.826 [INFO][5358] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" Feb 13 20:02:33.862625 containerd[1466]: 2025-02-13 20:02:33.850 [INFO][5365] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" HandleID="k8s-pod-network.0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" Workload="localhost-k8s-csi--node--driver--gfdrh-eth0" Feb 13 20:02:33.862625 containerd[1466]: 2025-02-13 20:02:33.850 [INFO][5365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:02:33.862625 containerd[1466]: 2025-02-13 20:02:33.850 [INFO][5365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:02:33.862625 containerd[1466]: 2025-02-13 20:02:33.855 [WARNING][5365] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" HandleID="k8s-pod-network.0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" Workload="localhost-k8s-csi--node--driver--gfdrh-eth0" Feb 13 20:02:33.862625 containerd[1466]: 2025-02-13 20:02:33.856 [INFO][5365] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" HandleID="k8s-pod-network.0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" Workload="localhost-k8s-csi--node--driver--gfdrh-eth0" Feb 13 20:02:33.862625 containerd[1466]: 2025-02-13 20:02:33.857 [INFO][5365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:02:33.862625 containerd[1466]: 2025-02-13 20:02:33.859 [INFO][5358] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa" Feb 13 20:02:33.862625 containerd[1466]: time="2025-02-13T20:02:33.862584532Z" level=info msg="TearDown network for sandbox \"0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa\" successfully" Feb 13 20:02:33.866872 containerd[1466]: time="2025-02-13T20:02:33.866593026Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:02:33.866872 containerd[1466]: time="2025-02-13T20:02:33.866654744Z" level=info msg="RemovePodSandbox \"0d781767e816eb80d392efbfba0668f42c99012be3fdb8bd60ad7abda155abaa\" returns successfully" Feb 13 20:02:33.867272 containerd[1466]: time="2025-02-13T20:02:33.867164941Z" level=info msg="StopPodSandbox for \"70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb\"" Feb 13 20:02:33.943873 sshd[5308]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:33.950343 systemd[1]: sshd@17-10.0.0.119:22-10.0.0.1:48574.service: Deactivated successfully. Feb 13 20:02:33.953795 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 20:02:33.956427 systemd-logind[1454]: Session 18 logged out. Waiting for processes to exit. Feb 13 20:02:33.957495 containerd[1466]: 2025-02-13 20:02:33.905 [WARNING][5396] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0", GenerateName:"calico-kube-controllers-7866456b95-", Namespace:"calico-system", SelfLink:"", UID:"49be02d3-1172-42eb-afb4-696e59a6f97d", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7866456b95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e", Pod:"calico-kube-controllers-7866456b95-h7zp8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali85b986f2579", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:33.957495 containerd[1466]: 2025-02-13 20:02:33.906 [INFO][5396] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" Feb 13 20:02:33.957495 containerd[1466]: 2025-02-13 20:02:33.906 [INFO][5396] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" iface="eth0" netns="" Feb 13 20:02:33.957495 containerd[1466]: 2025-02-13 20:02:33.906 [INFO][5396] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" Feb 13 20:02:33.957495 containerd[1466]: 2025-02-13 20:02:33.906 [INFO][5396] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" Feb 13 20:02:33.957495 containerd[1466]: 2025-02-13 20:02:33.942 [INFO][5403] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" HandleID="k8s-pod-network.70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" Workload="localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0" Feb 13 20:02:33.957495 containerd[1466]: 2025-02-13 20:02:33.942 [INFO][5403] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:02:33.957495 containerd[1466]: 2025-02-13 20:02:33.942 [INFO][5403] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:02:33.957495 containerd[1466]: 2025-02-13 20:02:33.948 [WARNING][5403] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" HandleID="k8s-pod-network.70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" Workload="localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0" Feb 13 20:02:33.957495 containerd[1466]: 2025-02-13 20:02:33.948 [INFO][5403] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" HandleID="k8s-pod-network.70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" Workload="localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0" Feb 13 20:02:33.957495 containerd[1466]: 2025-02-13 20:02:33.951 [INFO][5403] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:02:33.957495 containerd[1466]: 2025-02-13 20:02:33.954 [INFO][5396] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" Feb 13 20:02:33.957888 containerd[1466]: time="2025-02-13T20:02:33.957536044Z" level=info msg="TearDown network for sandbox \"70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb\" successfully" Feb 13 20:02:33.957888 containerd[1466]: time="2025-02-13T20:02:33.957559659Z" level=info msg="StopPodSandbox for \"70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb\" returns successfully" Feb 13 20:02:33.957904 systemd-logind[1454]: Removed session 18. Feb 13 20:02:33.958399 containerd[1466]: time="2025-02-13T20:02:33.958363648Z" level=info msg="RemovePodSandbox for \"70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb\"" Feb 13 20:02:33.958435 containerd[1466]: time="2025-02-13T20:02:33.958409405Z" level=info msg="Forcibly stopping sandbox \"70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb\"" Feb 13 20:02:34.031136 containerd[1466]: 2025-02-13 20:02:33.996 [WARNING][5427] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0", GenerateName:"calico-kube-controllers-7866456b95-", Namespace:"calico-system", SelfLink:"", UID:"49be02d3-1172-42eb-afb4-696e59a6f97d", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7866456b95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e", Pod:"calico-kube-controllers-7866456b95-h7zp8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali85b986f2579", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:34.031136 containerd[1466]: 2025-02-13 20:02:33.997 [INFO][5427] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" Feb 13 20:02:34.031136 containerd[1466]: 2025-02-13 20:02:33.997 [INFO][5427] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" iface="eth0" netns="" Feb 13 20:02:34.031136 containerd[1466]: 2025-02-13 20:02:33.997 [INFO][5427] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" Feb 13 20:02:34.031136 containerd[1466]: 2025-02-13 20:02:33.997 [INFO][5427] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" Feb 13 20:02:34.031136 containerd[1466]: 2025-02-13 20:02:34.019 [INFO][5435] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" HandleID="k8s-pod-network.70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" Workload="localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0" Feb 13 20:02:34.031136 containerd[1466]: 2025-02-13 20:02:34.019 [INFO][5435] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:02:34.031136 containerd[1466]: 2025-02-13 20:02:34.019 [INFO][5435] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:02:34.031136 containerd[1466]: 2025-02-13 20:02:34.024 [WARNING][5435] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" HandleID="k8s-pod-network.70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" Workload="localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0" Feb 13 20:02:34.031136 containerd[1466]: 2025-02-13 20:02:34.024 [INFO][5435] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" HandleID="k8s-pod-network.70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" Workload="localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0" Feb 13 20:02:34.031136 containerd[1466]: 2025-02-13 20:02:34.026 [INFO][5435] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:02:34.031136 containerd[1466]: 2025-02-13 20:02:34.028 [INFO][5427] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb" Feb 13 20:02:34.032361 containerd[1466]: time="2025-02-13T20:02:34.031180035Z" level=info msg="TearDown network for sandbox \"70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb\" successfully" Feb 13 20:02:34.389945 containerd[1466]: time="2025-02-13T20:02:34.389881538Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:02:34.390118 containerd[1466]: time="2025-02-13T20:02:34.389981670Z" level=info msg="RemovePodSandbox \"70c8fb1137714ebf9adb04483d5835d3b0a507d8492a073e9aa5e19f3c6babcb\" returns successfully" Feb 13 20:02:35.733065 containerd[1466]: time="2025-02-13T20:02:35.732998946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:02:35.734186 containerd[1466]: time="2025-02-13T20:02:35.734115621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 20:02:35.735496 containerd[1466]: time="2025-02-13T20:02:35.735452869Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:02:35.738365 containerd[1466]: time="2025-02-13T20:02:35.738292409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:02:35.739058 containerd[1466]: time="2025-02-13T20:02:35.739011213Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 5.230104304s" Feb 13 20:02:35.739058 containerd[1466]: time="2025-02-13T20:02:35.739047403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:02:35.740643 containerd[1466]: time="2025-02-13T20:02:35.740617134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 
20:02:35.741403 containerd[1466]: time="2025-02-13T20:02:35.741361869Z" level=info msg="CreateContainer within sandbox \"891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:02:35.759527 containerd[1466]: time="2025-02-13T20:02:35.759455100Z" level=info msg="CreateContainer within sandbox \"891974663d86be6abb0843a21b7e3e784fe0c1c4a17046689e118073b323ca52\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"94c0aec4167b9f59bdb3217cea7177cfd94990c4a658f3f1ce47e6544b5cbbe5\"" Feb 13 20:02:35.760564 containerd[1466]: time="2025-02-13T20:02:35.760520448Z" level=info msg="StartContainer for \"94c0aec4167b9f59bdb3217cea7177cfd94990c4a658f3f1ce47e6544b5cbbe5\"" Feb 13 20:02:35.791803 systemd[1]: run-containerd-runc-k8s.io-94c0aec4167b9f59bdb3217cea7177cfd94990c4a658f3f1ce47e6544b5cbbe5-runc.aRdSfi.mount: Deactivated successfully. Feb 13 20:02:35.811273 systemd[1]: Started cri-containerd-94c0aec4167b9f59bdb3217cea7177cfd94990c4a658f3f1ce47e6544b5cbbe5.scope - libcontainer container 94c0aec4167b9f59bdb3217cea7177cfd94990c4a658f3f1ce47e6544b5cbbe5. Feb 13 20:02:35.857150 containerd[1466]: time="2025-02-13T20:02:35.857050079Z" level=info msg="StartContainer for \"94c0aec4167b9f59bdb3217cea7177cfd94990c4a658f3f1ce47e6544b5cbbe5\" returns successfully" Feb 13 20:02:36.429157 kubelet[2639]: I0213 20:02:36.428819 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f5ffc687c-5bd4w" podStartSLOduration=34.023159212 podStartE2EDuration="42.428803905s" podCreationTimestamp="2025-02-13 20:01:54 +0000 UTC" firstStartedPulling="2025-02-13 20:02:27.334282853 +0000 UTC m=+54.393019764" lastFinishedPulling="2025-02-13 20:02:35.739927556 +0000 UTC m=+62.798664457" observedRunningTime="2025-02-13 20:02:36.428490025 +0000 UTC m=+63.487226936" watchObservedRunningTime="2025-02-13 20:02:36.428803905 +0000 UTC m=+63.487540816" Feb 13 20:02:38.534266 containerd[1466]: time="2025-02-13T20:02:38.534214455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:02:38.569888 containerd[1466]: time="2025-02-13T20:02:38.569776796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 20:02:38.623173 containerd[1466]: time="2025-02-13T20:02:38.623081067Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:02:38.695254 containerd[1466]: time="2025-02-13T20:02:38.695204106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:02:38.695900 containerd[1466]: time="2025-02-13T20:02:38.695861201Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.955185805s" Feb 13 20:02:38.695937 containerd[1466]: time="2025-02-13T20:02:38.695898112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference 
\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 20:02:38.696665 containerd[1466]: time="2025-02-13T20:02:38.696646681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:02:38.697808 containerd[1466]: time="2025-02-13T20:02:38.697761631Z" level=info msg="CreateContainer within sandbox \"91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 20:02:38.957335 systemd[1]: Started sshd@18-10.0.0.119:22-10.0.0.1:48578.service - OpenSSH per-connection server daemon (10.0.0.1:48578). Feb 13 20:02:38.998398 sshd[5509]: Accepted publickey for core from 10.0.0.1 port 48578 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:02:39.000202 sshd[5509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:39.004494 systemd-logind[1454]: New session 19 of user core. Feb 13 20:02:39.009300 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 20:02:39.162639 sshd[5509]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:39.166763 systemd[1]: sshd@18-10.0.0.119:22-10.0.0.1:48578.service: Deactivated successfully. Feb 13 20:02:39.168849 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 20:02:39.170463 systemd-logind[1454]: Session 19 logged out. Waiting for processes to exit. Feb 13 20:02:39.172328 systemd-logind[1454]: Removed session 19. Feb 13 20:02:39.434417 containerd[1466]: time="2025-02-13T20:02:39.434361934Z" level=info msg="CreateContainer within sandbox \"91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"43570631e8e187a0a85ebeecf0cb5fa62b982af320e3b3071e4269aaa3d1836a\"" Feb 13 20:02:39.435039 containerd[1466]: time="2025-02-13T20:02:39.435017044Z" level=info msg="StartContainer for \"43570631e8e187a0a85ebeecf0cb5fa62b982af320e3b3071e4269aaa3d1836a\"" Feb 13 20:02:39.467235 systemd[1]: Started cri-containerd-43570631e8e187a0a85ebeecf0cb5fa62b982af320e3b3071e4269aaa3d1836a.scope - libcontainer container 43570631e8e187a0a85ebeecf0cb5fa62b982af320e3b3071e4269aaa3d1836a. 
Feb 13 20:02:39.746127 containerd[1466]: time="2025-02-13T20:02:39.745983729Z" level=info msg="StartContainer for \"43570631e8e187a0a85ebeecf0cb5fa62b982af320e3b3071e4269aaa3d1836a\" returns successfully" Feb 13 20:02:40.957198 containerd[1466]: time="2025-02-13T20:02:40.957143582Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:02:41.017758 containerd[1466]: time="2025-02-13T20:02:41.017677185Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 20:02:41.020133 containerd[1466]: time="2025-02-13T20:02:41.020085788Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.323405674s" Feb 13 20:02:41.020201 containerd[1466]: time="2025-02-13T20:02:41.020137747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:02:41.021072 containerd[1466]: time="2025-02-13T20:02:41.021024548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 20:02:41.022481 containerd[1466]: time="2025-02-13T20:02:41.022437273Z" level=info msg="CreateContainer within sandbox \"de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:02:41.451224 containerd[1466]: time="2025-02-13T20:02:41.451169122Z" level=info msg="CreateContainer within sandbox \"de62f94a40fb54c55d213295980cd1c93f8f2b3b9b7aba70e13ea1637e0f9242\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"783fc5ead275ff2a45fb0aff58330b182672aa78e80775e1a2bb943d1e000969\"" Feb 13 20:02:41.451728 containerd[1466]: time="2025-02-13T20:02:41.451696117Z" level=info msg="StartContainer for \"783fc5ead275ff2a45fb0aff58330b182672aa78e80775e1a2bb943d1e000969\"" Feb 13 20:02:41.489251 systemd[1]: Started cri-containerd-783fc5ead275ff2a45fb0aff58330b182672aa78e80775e1a2bb943d1e000969.scope - libcontainer container 783fc5ead275ff2a45fb0aff58330b182672aa78e80775e1a2bb943d1e000969. 
Feb 13 20:02:41.802378 kubelet[2639]: E0213 20:02:41.802348 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:41.991957 containerd[1466]: time="2025-02-13T20:02:41.991888456Z" level=info msg="StartContainer for \"783fc5ead275ff2a45fb0aff58330b182672aa78e80775e1a2bb943d1e000969\" returns successfully" Feb 13 20:02:42.623957 kubelet[2639]: I0213 20:02:42.623873 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f5ffc687c-lbc64" podStartSLOduration=36.327200801 podStartE2EDuration="48.623830458s" podCreationTimestamp="2025-02-13 20:01:54 +0000 UTC" firstStartedPulling="2025-02-13 20:02:28.724292846 +0000 UTC m=+55.783029757" lastFinishedPulling="2025-02-13 20:02:41.020922503 +0000 UTC m=+68.079659414" observedRunningTime="2025-02-13 20:02:42.622155294 +0000 UTC m=+69.680892215" watchObservedRunningTime="2025-02-13 20:02:42.623830458 +0000 UTC m=+69.682567369" Feb 13 20:02:43.989366 containerd[1466]: time="2025-02-13T20:02:43.989304786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:02:43.990438 containerd[1466]: time="2025-02-13T20:02:43.990381788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 20:02:43.991854 containerd[1466]: time="2025-02-13T20:02:43.991813987Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:02:43.994515 containerd[1466]: time="2025-02-13T20:02:43.994454028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:02:43.995174 containerd[1466]: time="2025-02-13T20:02:43.995133233Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.974073117s" Feb 13 20:02:43.995174 containerd[1466]: time="2025-02-13T20:02:43.995164362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 20:02:43.997124 containerd[1466]: time="2025-02-13T20:02:43.997080844Z" level=info msg="CreateContainer within sandbox \"91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 20:02:44.012714 containerd[1466]: time="2025-02-13T20:02:44.012653290Z" level=info msg="CreateContainer within sandbox \"91d83ac0864330f0cb034ce1d2f313c5aac3124abfea1ec9845f189865c250bd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"076b168b6998cd00be3cb3d933f6ea052be2be7edfcc9cb5f5c5a45356db602c\"" Feb 13 20:02:44.013302 containerd[1466]: time="2025-02-13T20:02:44.013273121Z" level=info 
msg="StartContainer for \"076b168b6998cd00be3cb3d933f6ea052be2be7edfcc9cb5f5c5a45356db602c\"" Feb 13 20:02:44.023606 kubelet[2639]: E0213 20:02:44.023572 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:02:44.057349 systemd[1]: Started cri-containerd-076b168b6998cd00be3cb3d933f6ea052be2be7edfcc9cb5f5c5a45356db602c.scope - libcontainer container 076b168b6998cd00be3cb3d933f6ea052be2be7edfcc9cb5f5c5a45356db602c. Feb 13 20:02:44.101028 containerd[1466]: time="2025-02-13T20:02:44.100969400Z" level=info msg="StartContainer for \"076b168b6998cd00be3cb3d933f6ea052be2be7edfcc9cb5f5c5a45356db602c\" returns successfully" Feb 13 20:02:44.103813 kubelet[2639]: I0213 20:02:44.103765 2639 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 20:02:44.103813 kubelet[2639]: I0213 20:02:44.103795 2639 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 20:02:44.178171 systemd[1]: Started sshd@19-10.0.0.119:22-10.0.0.1:34670.service - OpenSSH per-connection server daemon (10.0.0.1:34670). Feb 13 20:02:44.219067 sshd[5685]: Accepted publickey for core from 10.0.0.1 port 34670 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:02:44.220840 sshd[5685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:44.224861 systemd-logind[1454]: New session 20 of user core. Feb 13 20:02:44.232237 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 20:02:44.367368 sshd[5685]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:44.372080 systemd[1]: sshd@19-10.0.0.119:22-10.0.0.1:34670.service: Deactivated successfully. Feb 13 20:02:44.374344 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 20:02:44.376355 systemd-logind[1454]: Session 20 logged out. Waiting for processes to exit. Feb 13 20:02:44.377920 systemd-logind[1454]: Removed session 20. Feb 13 20:02:44.406410 kubelet[2639]: I0213 20:02:44.406341 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gfdrh" podStartSLOduration=35.088283183 podStartE2EDuration="50.406321304s" podCreationTimestamp="2025-02-13 20:01:54 +0000 UTC" firstStartedPulling="2025-02-13 20:02:28.677833299 +0000 UTC m=+55.736570210" lastFinishedPulling="2025-02-13 20:02:43.99587142 +0000 UTC m=+71.054608331" observedRunningTime="2025-02-13 20:02:44.406071948 +0000 UTC m=+71.464808880" watchObservedRunningTime="2025-02-13 20:02:44.406321304 +0000 UTC m=+71.465058215" Feb 13 20:02:49.380670 systemd[1]: Started sshd@20-10.0.0.119:22-10.0.0.1:39804.service - OpenSSH per-connection server daemon (10.0.0.1:39804). Feb 13 20:02:49.424659 sshd[5710]: Accepted publickey for core from 10.0.0.1 port 39804 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:02:49.426523 sshd[5710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:49.430679 systemd-logind[1454]: New session 21 of user core. Feb 13 20:02:49.440318 systemd[1]: Started session-21.scope - Session 21 of User core. 
Feb 13 20:02:49.557756 sshd[5710]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:49.571042 systemd[1]: sshd@20-10.0.0.119:22-10.0.0.1:39804.service: Deactivated successfully. Feb 13 20:02:49.572873 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 20:02:49.574587 systemd-logind[1454]: Session 21 logged out. Waiting for processes to exit. Feb 13 20:02:49.583697 systemd[1]: Started sshd@21-10.0.0.119:22-10.0.0.1:39812.service - OpenSSH per-connection server daemon (10.0.0.1:39812). Feb 13 20:02:49.584729 systemd-logind[1454]: Removed session 21. Feb 13 20:02:49.614437 sshd[5724]: Accepted publickey for core from 10.0.0.1 port 39812 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:02:49.616366 sshd[5724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:49.620900 systemd-logind[1454]: New session 22 of user core. Feb 13 20:02:49.635324 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 20:02:49.898174 containerd[1466]: time="2025-02-13T20:02:49.896828275Z" level=info msg="StopContainer for \"dcfc0d7ca4f8a6bffc3df10ffbcdaeba6325c1fd63da200dbcc9550762e42ca9\" with timeout 300 (s)" Feb 13 20:02:49.899323 containerd[1466]: time="2025-02-13T20:02:49.899269546Z" level=info msg="Stop container \"dcfc0d7ca4f8a6bffc3df10ffbcdaeba6325c1fd63da200dbcc9550762e42ca9\" with signal terminated" Feb 13 20:02:50.032987 sshd[5724]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:50.042454 systemd[1]: sshd@21-10.0.0.119:22-10.0.0.1:39812.service: Deactivated successfully. Feb 13 20:02:50.044411 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 20:02:50.045151 systemd-logind[1454]: Session 22 logged out. Waiting for processes to exit. Feb 13 20:02:50.053333 systemd[1]: Started sshd@22-10.0.0.119:22-10.0.0.1:39816.service - OpenSSH per-connection server daemon (10.0.0.1:39816). Feb 13 20:02:50.053911 systemd-logind[1454]: Removed session 22. Feb 13 20:02:50.089440 sshd[5747]: Accepted publickey for core from 10.0.0.1 port 39816 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:02:50.091754 sshd[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:50.097539 systemd-logind[1454]: New session 23 of user core. Feb 13 20:02:50.102269 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 20:02:50.210287 containerd[1466]: time="2025-02-13T20:02:50.210146099Z" level=info msg="StopContainer for \"b27430fc5ae85338565b77060e7af14d3e03fb63c781246844838a4f9f60a97f\" with timeout 30 (s)" Feb 13 20:02:50.212128 containerd[1466]: time="2025-02-13T20:02:50.210867020Z" level=info msg="Stop container \"b27430fc5ae85338565b77060e7af14d3e03fb63c781246844838a4f9f60a97f\" with signal terminated" Feb 13 20:02:50.227547 systemd[1]: cri-containerd-b27430fc5ae85338565b77060e7af14d3e03fb63c781246844838a4f9f60a97f.scope: Deactivated successfully. 
Feb 13 20:02:50.257798 containerd[1466]: time="2025-02-13T20:02:50.257718852Z" level=info msg="shim disconnected" id=b27430fc5ae85338565b77060e7af14d3e03fb63c781246844838a4f9f60a97f namespace=k8s.io Feb 13 20:02:50.258255 containerd[1466]: time="2025-02-13T20:02:50.258120365Z" level=warning msg="cleaning up after shim disconnected" id=b27430fc5ae85338565b77060e7af14d3e03fb63c781246844838a4f9f60a97f namespace=k8s.io Feb 13 20:02:50.258255 containerd[1466]: time="2025-02-13T20:02:50.258142516Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:02:50.264725 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b27430fc5ae85338565b77060e7af14d3e03fb63c781246844838a4f9f60a97f-rootfs.mount: Deactivated successfully. Feb 13 20:02:50.349062 containerd[1466]: time="2025-02-13T20:02:50.349009046Z" level=info msg="StopContainer for \"b27430fc5ae85338565b77060e7af14d3e03fb63c781246844838a4f9f60a97f\" returns successfully" Feb 13 20:02:50.349904 containerd[1466]: time="2025-02-13T20:02:50.349627552Z" level=info msg="StopPodSandbox for \"a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e\"" Feb 13 20:02:50.349904 containerd[1466]: time="2025-02-13T20:02:50.349672619Z" level=info msg="Container to stop \"b27430fc5ae85338565b77060e7af14d3e03fb63c781246844838a4f9f60a97f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:02:50.354307 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e-shm.mount: Deactivated successfully. Feb 13 20:02:50.359672 systemd[1]: cri-containerd-a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e.scope: Deactivated successfully. Feb 13 20:02:50.378495 containerd[1466]: time="2025-02-13T20:02:50.378301249Z" level=info msg="shim disconnected" id=a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e namespace=k8s.io Feb 13 20:02:50.378495 containerd[1466]: time="2025-02-13T20:02:50.378363417Z" level=warning msg="cleaning up after shim disconnected" id=a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e namespace=k8s.io Feb 13 20:02:50.378495 containerd[1466]: time="2025-02-13T20:02:50.378371673Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:02:50.381464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e-rootfs.mount: Deactivated successfully. Feb 13 20:02:50.405963 kubelet[2639]: I0213 20:02:50.405922 2639 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" Feb 13 20:02:50.447025 systemd-networkd[1376]: cali85b986f2579: Link DOWN Feb 13 20:02:50.447035 systemd-networkd[1376]: cali85b986f2579: Lost carrier Feb 13 20:02:50.520940 containerd[1466]: 2025-02-13 20:02:50.444 [INFO][5833] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" Feb 13 20:02:50.520940 containerd[1466]: 2025-02-13 20:02:50.445 [INFO][5833] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" iface="eth0" netns="/var/run/netns/cni-b00c4bdd-e8c1-c452-f725-15ddcf2b305a" Feb 13 20:02:50.520940 containerd[1466]: 2025-02-13 20:02:50.445 [INFO][5833] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" iface="eth0" netns="/var/run/netns/cni-b00c4bdd-e8c1-c452-f725-15ddcf2b305a" Feb 13 20:02:50.520940 containerd[1466]: 2025-02-13 20:02:50.467 [INFO][5833] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" after=22.039356ms iface="eth0" netns="/var/run/netns/cni-b00c4bdd-e8c1-c452-f725-15ddcf2b305a" Feb 13 20:02:50.520940 containerd[1466]: 2025-02-13 20:02:50.467 [INFO][5833] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" Feb 13 20:02:50.520940 containerd[1466]: 2025-02-13 20:02:50.467 [INFO][5833] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" Feb 13 20:02:50.520940 containerd[1466]: 2025-02-13 20:02:50.485 [INFO][5847] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" HandleID="k8s-pod-network.a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" Workload="localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0" Feb 13 20:02:50.520940 containerd[1466]: 2025-02-13 20:02:50.486 [INFO][5847] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:02:50.520940 containerd[1466]: 2025-02-13 20:02:50.486 [INFO][5847] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:02:50.520940 containerd[1466]: 2025-02-13 20:02:50.513 [INFO][5847] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" HandleID="k8s-pod-network.a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" Workload="localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0" Feb 13 20:02:50.520940 containerd[1466]: 2025-02-13 20:02:50.514 [INFO][5847] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" HandleID="k8s-pod-network.a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" Workload="localhost-k8s-calico--kube--controllers--7866456b95--h7zp8-eth0" Feb 13 20:02:50.520940 containerd[1466]: 2025-02-13 20:02:50.515 [INFO][5847] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:02:50.520940 containerd[1466]: 2025-02-13 20:02:50.517 [INFO][5833] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e" Feb 13 20:02:50.521644 containerd[1466]: time="2025-02-13T20:02:50.521269761Z" level=info msg="TearDown network for sandbox \"a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e\" successfully" Feb 13 20:02:50.521644 containerd[1466]: time="2025-02-13T20:02:50.521299337Z" level=info msg="StopPodSandbox for \"a319d098a9821b50ff986285d6b4e4260bf24315e223effe1b307649200b3a2e\" returns successfully" Feb 13 20:02:50.524328 systemd[1]: run-netns-cni\x2db00c4bdd\x2de8c1\x2dc452\x2df725\x2d15ddcf2b305a.mount: Deactivated successfully. 
Feb 13 20:02:50.606541 kubelet[2639]: I0213 20:02:50.606491 2639 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9n54\" (UniqueName: \"kubernetes.io/projected/49be02d3-1172-42eb-afb4-696e59a6f97d-kube-api-access-p9n54\") pod \"49be02d3-1172-42eb-afb4-696e59a6f97d\" (UID: \"49be02d3-1172-42eb-afb4-696e59a6f97d\") " Feb 13 20:02:50.606541 kubelet[2639]: I0213 20:02:50.606543 2639 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49be02d3-1172-42eb-afb4-696e59a6f97d-tigera-ca-bundle\") pod \"49be02d3-1172-42eb-afb4-696e59a6f97d\" (UID: \"49be02d3-1172-42eb-afb4-696e59a6f97d\") " Feb 13 20:02:50.609873 kubelet[2639]: I0213 20:02:50.609819 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49be02d3-1172-42eb-afb4-696e59a6f97d-kube-api-access-p9n54" (OuterVolumeSpecName: "kube-api-access-p9n54") pod "49be02d3-1172-42eb-afb4-696e59a6f97d" (UID: "49be02d3-1172-42eb-afb4-696e59a6f97d"). InnerVolumeSpecName "kube-api-access-p9n54". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 20:02:50.612139 systemd[1]: var-lib-kubelet-pods-49be02d3\x2d1172\x2d42eb\x2dafb4\x2d696e59a6f97d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp9n54.mount: Deactivated successfully. Feb 13 20:02:50.625580 kubelet[2639]: I0213 20:02:50.625536 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49be02d3-1172-42eb-afb4-696e59a6f97d-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "49be02d3-1172-42eb-afb4-696e59a6f97d" (UID: "49be02d3-1172-42eb-afb4-696e59a6f97d"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 20:02:50.707245 kubelet[2639]: I0213 20:02:50.707199 2639 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-p9n54\" (UniqueName: \"kubernetes.io/projected/49be02d3-1172-42eb-afb4-696e59a6f97d-kube-api-access-p9n54\") on node \"localhost\" DevicePath \"\"" Feb 13 20:02:50.707245 kubelet[2639]: I0213 20:02:50.707232 2639 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49be02d3-1172-42eb-afb4-696e59a6f97d-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Feb 13 20:02:51.031901 systemd[1]: Removed slice kubepods-besteffort-pod49be02d3_1172_42eb_afb4_696e59a6f97d.slice - libcontainer container kubepods-besteffort-pod49be02d3_1172_42eb_afb4_696e59a6f97d.slice. Feb 13 20:02:51.262130 systemd[1]: var-lib-kubelet-pods-49be02d3\x2d1172\x2d42eb\x2dafb4\x2d696e59a6f97d-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. 
Feb 13 20:02:51.445872 kubelet[2639]: I0213 20:02:51.445554 2639 topology_manager.go:215] "Topology Admit Handler" podUID="a4439b49-3a29-4cdf-8e50-5d81eab18333" podNamespace="calico-system" podName="calico-kube-controllers-787f98ff6b-cwqqp" Feb 13 20:02:51.446635 kubelet[2639]: E0213 20:02:51.446572 2639 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="49be02d3-1172-42eb-afb4-696e59a6f97d" containerName="calico-kube-controllers" Feb 13 20:02:51.446635 kubelet[2639]: I0213 20:02:51.446626 2639 memory_manager.go:354] "RemoveStaleState removing state" podUID="49be02d3-1172-42eb-afb4-696e59a6f97d" containerName="calico-kube-controllers" Feb 13 20:02:51.460766 systemd[1]: Created slice kubepods-besteffort-poda4439b49_3a29_4cdf_8e50_5d81eab18333.slice - libcontainer container kubepods-besteffort-poda4439b49_3a29_4cdf_8e50_5d81eab18333.slice. Feb 13 20:02:51.511965 kubelet[2639]: I0213 20:02:51.511783 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4439b49-3a29-4cdf-8e50-5d81eab18333-tigera-ca-bundle\") pod \"calico-kube-controllers-787f98ff6b-cwqqp\" (UID: \"a4439b49-3a29-4cdf-8e50-5d81eab18333\") " pod="calico-system/calico-kube-controllers-787f98ff6b-cwqqp" Feb 13 20:02:51.511965 kubelet[2639]: I0213 20:02:51.511839 2639 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkm5j\" (UniqueName: \"kubernetes.io/projected/a4439b49-3a29-4cdf-8e50-5d81eab18333-kube-api-access-fkm5j\") pod \"calico-kube-controllers-787f98ff6b-cwqqp\" (UID: \"a4439b49-3a29-4cdf-8e50-5d81eab18333\") " pod="calico-system/calico-kube-controllers-787f98ff6b-cwqqp" Feb 13 20:02:51.764624 containerd[1466]: time="2025-02-13T20:02:51.764464457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-787f98ff6b-cwqqp,Uid:a4439b49-3a29-4cdf-8e50-5d81eab18333,Namespace:calico-system,Attempt:0,}" Feb 13 20:02:51.996153 systemd-networkd[1376]: cali4513f215065: Link UP Feb 13 20:02:51.996348 systemd-networkd[1376]: cali4513f215065: Gained carrier Feb 13 20:02:52.023534 containerd[1466]: 2025-02-13 20:02:51.831 [INFO][5871] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--787f98ff6b--cwqqp-eth0 calico-kube-controllers-787f98ff6b- calico-system a4439b49-3a29-4cdf-8e50-5d81eab18333 1263 0 2025-02-13 20:02:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:787f98ff6b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-787f98ff6b-cwqqp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4513f215065 [] []}} ContainerID="fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd" Namespace="calico-system" Pod="calico-kube-controllers-787f98ff6b-cwqqp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--787f98ff6b--cwqqp-" Feb 13 20:02:52.023534 containerd[1466]: 2025-02-13 20:02:51.831 [INFO][5871] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd" Namespace="calico-system" Pod="calico-kube-controllers-787f98ff6b-cwqqp" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--787f98ff6b--cwqqp-eth0" Feb 13 20:02:52.023534 containerd[1466]: 2025-02-13 20:02:51.860 [INFO][5884] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd" HandleID="k8s-pod-network.fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd" Workload="localhost-k8s-calico--kube--controllers--787f98ff6b--cwqqp-eth0" Feb 13 20:02:52.023534 containerd[1466]: 2025-02-13 20:02:51.913 [INFO][5884] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd" HandleID="k8s-pod-network.fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd" Workload="localhost-k8s-calico--kube--controllers--787f98ff6b--cwqqp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003aeba0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-787f98ff6b-cwqqp", "timestamp":"2025-02-13 20:02:51.860626687 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:02:52.023534 containerd[1466]: 2025-02-13 20:02:51.913 [INFO][5884] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:02:52.023534 containerd[1466]: 2025-02-13 20:02:51.913 [INFO][5884] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:02:52.023534 containerd[1466]: 2025-02-13 20:02:51.913 [INFO][5884] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 20:02:52.023534 containerd[1466]: 2025-02-13 20:02:51.935 [INFO][5884] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd" host="localhost" Feb 13 20:02:52.023534 containerd[1466]: 2025-02-13 20:02:51.945 [INFO][5884] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 20:02:52.023534 containerd[1466]: 2025-02-13 20:02:51.955 [INFO][5884] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 20:02:52.023534 containerd[1466]: 2025-02-13 20:02:51.959 [INFO][5884] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 20:02:52.023534 containerd[1466]: 2025-02-13 20:02:51.964 [INFO][5884] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 20:02:52.023534 containerd[1466]: 2025-02-13 20:02:51.964 [INFO][5884] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd" host="localhost" Feb 13 20:02:52.023534 containerd[1466]: 2025-02-13 20:02:51.965 [INFO][5884] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd Feb 13 20:02:52.023534 containerd[1466]: 2025-02-13 20:02:51.973 [INFO][5884] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd" host="localhost" Feb 13 20:02:52.023534 containerd[1466]: 2025-02-13 20:02:51.983 [INFO][5884] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd" host="localhost" Feb 13 20:02:52.023534 containerd[1466]: 2025-02-13 20:02:51.983 [INFO][5884] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd" host="localhost" Feb 13 20:02:52.023534 containerd[1466]: 2025-02-13 20:02:51.983 [INFO][5884] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:02:52.023534 containerd[1466]: 2025-02-13 20:02:51.983 [INFO][5884] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd" HandleID="k8s-pod-network.fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd" Workload="localhost-k8s-calico--kube--controllers--787f98ff6b--cwqqp-eth0" Feb 13 20:02:52.027568 containerd[1466]: 2025-02-13 20:02:51.992 [INFO][5871] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd" Namespace="calico-system" Pod="calico-kube-controllers-787f98ff6b-cwqqp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--787f98ff6b--cwqqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--787f98ff6b--cwqqp-eth0", GenerateName:"calico-kube-controllers-787f98ff6b-", Namespace:"calico-system", SelfLink:"", UID:"a4439b49-3a29-4cdf-8e50-5d81eab18333", ResourceVersion:"1263", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 2, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"787f98ff6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-787f98ff6b-cwqqp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4513f215065", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:52.027568 containerd[1466]: 2025-02-13 20:02:51.992 [INFO][5871] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.135/32] ContainerID="fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd" Namespace="calico-system" Pod="calico-kube-controllers-787f98ff6b-cwqqp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--787f98ff6b--cwqqp-eth0" Feb 13 20:02:52.027568 containerd[1466]: 2025-02-13 20:02:51.992 [INFO][5871] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4513f215065 ContainerID="fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd" Namespace="calico-system" Pod="calico-kube-controllers-787f98ff6b-cwqqp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--787f98ff6b--cwqqp-eth0" Feb 13 20:02:52.027568 
containerd[1466]: 2025-02-13 20:02:51.995 [INFO][5871] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd" Namespace="calico-system" Pod="calico-kube-controllers-787f98ff6b-cwqqp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--787f98ff6b--cwqqp-eth0" Feb 13 20:02:52.027568 containerd[1466]: 2025-02-13 20:02:51.995 [INFO][5871] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd" Namespace="calico-system" Pod="calico-kube-controllers-787f98ff6b-cwqqp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--787f98ff6b--cwqqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--787f98ff6b--cwqqp-eth0", GenerateName:"calico-kube-controllers-787f98ff6b-", Namespace:"calico-system", SelfLink:"", UID:"a4439b49-3a29-4cdf-8e50-5d81eab18333", ResourceVersion:"1263", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 2, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"787f98ff6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd", Pod:"calico-kube-controllers-787f98ff6b-cwqqp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4513f215065", MAC:"2a:83:3c:20:fc:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:02:52.027568 containerd[1466]: 2025-02-13 20:02:52.018 [INFO][5871] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd" Namespace="calico-system" Pod="calico-kube-controllers-787f98ff6b-cwqqp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--787f98ff6b--cwqqp-eth0" Feb 13 20:02:52.061185 sshd[5747]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:52.071827 systemd[1]: sshd@22-10.0.0.119:22-10.0.0.1:39816.service: Deactivated successfully. Feb 13 20:02:52.073617 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 20:02:52.074510 systemd-logind[1454]: Session 23 logged out. Waiting for processes to exit. Feb 13 20:02:52.083910 systemd[1]: Started sshd@23-10.0.0.119:22-10.0.0.1:39820.service - OpenSSH per-connection server daemon (10.0.0.1:39820). Feb 13 20:02:52.086829 systemd-logind[1454]: Removed session 23. Feb 13 20:02:52.087653 containerd[1466]: time="2025-02-13T20:02:52.087535020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:02:52.087960 containerd[1466]: time="2025-02-13T20:02:52.087868313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:02:52.089382 containerd[1466]: time="2025-02-13T20:02:52.089190885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:02:52.089788 containerd[1466]: time="2025-02-13T20:02:52.089373081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:02:52.126362 systemd[1]: Started cri-containerd-fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd.scope - libcontainer container fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd. Feb 13 20:02:52.135621 sshd[5936]: Accepted publickey for core from 10.0.0.1 port 39820 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:02:52.137663 sshd[5936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:52.142905 systemd-resolved[1361]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 20:02:52.142985 systemd-logind[1454]: New session 24 of user core. Feb 13 20:02:52.153229 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 20:02:52.169159 containerd[1466]: time="2025-02-13T20:02:52.169115684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-787f98ff6b-cwqqp,Uid:a4439b49-3a29-4cdf-8e50-5d81eab18333,Namespace:calico-system,Attempt:0,} returns sandbox id \"fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd\"" Feb 13 20:02:52.178879 containerd[1466]: time="2025-02-13T20:02:52.178831131Z" level=info msg="CreateContainer within sandbox \"fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 20:02:52.635698 sshd[5936]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:52.643211 systemd[1]: sshd@23-10.0.0.119:22-10.0.0.1:39820.service: Deactivated successfully. Feb 13 20:02:52.645887 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 20:02:52.648698 systemd-logind[1454]: Session 24 logged out. Waiting for processes to exit. Feb 13 20:02:52.653176 containerd[1466]: time="2025-02-13T20:02:52.653129686Z" level=info msg="CreateContainer within sandbox \"fabbb4d6fcc5091edecb2f9161ff90696039bc0a4534fa7ac8e77c8b714b8ecd\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d00114965fe78996d0bf371a1274c66d134df84118692fe3b7ec73d159f75a22\"" Feb 13 20:02:52.654482 containerd[1466]: time="2025-02-13T20:02:52.654061787Z" level=info msg="StartContainer for \"d00114965fe78996d0bf371a1274c66d134df84118692fe3b7ec73d159f75a22\"" Feb 13 20:02:52.656871 systemd[1]: Started sshd@24-10.0.0.119:22-10.0.0.1:39826.service - OpenSSH per-connection server daemon (10.0.0.1:39826). Feb 13 20:02:52.660445 systemd-logind[1454]: Removed session 24. Feb 13 20:02:52.688418 systemd[1]: Started cri-containerd-d00114965fe78996d0bf371a1274c66d134df84118692fe3b7ec73d159f75a22.scope - libcontainer container d00114965fe78996d0bf371a1274c66d134df84118692fe3b7ec73d159f75a22. 
Feb 13 20:02:52.698221 sshd[5986]: Accepted publickey for core from 10.0.0.1 port 39826 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:02:52.699793 sshd[5986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:52.706992 systemd-logind[1454]: New session 25 of user core. Feb 13 20:02:52.711234 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 20:02:52.735743 containerd[1466]: time="2025-02-13T20:02:52.735687376Z" level=info msg="StartContainer for \"d00114965fe78996d0bf371a1274c66d134df84118692fe3b7ec73d159f75a22\" returns successfully" Feb 13 20:02:52.847897 sshd[5986]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:52.853533 systemd[1]: sshd@24-10.0.0.119:22-10.0.0.1:39826.service: Deactivated successfully. Feb 13 20:02:52.855851 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 20:02:52.856722 systemd-logind[1454]: Session 25 logged out. Waiting for processes to exit. Feb 13 20:02:52.857980 systemd-logind[1454]: Removed session 25. Feb 13 20:02:53.026718 kubelet[2639]: I0213 20:02:53.026651 2639 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49be02d3-1172-42eb-afb4-696e59a6f97d" path="/var/lib/kubelet/pods/49be02d3-1172-42eb-afb4-696e59a6f97d/volumes" Feb 13 20:02:53.381296 systemd-networkd[1376]: cali4513f215065: Gained IPv6LL Feb 13 20:02:53.469111 kubelet[2639]: I0213 20:02:53.468535 2639 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-787f98ff6b-cwqqp" podStartSLOduration=2.468514077 podStartE2EDuration="2.468514077s" podCreationTimestamp="2025-02-13 20:02:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:02:53.424452204 +0000 UTC m=+80.483189115" watchObservedRunningTime="2025-02-13 20:02:53.468514077 +0000 UTC m=+80.527250998" Feb 13 20:02:53.938274 systemd[1]: cri-containerd-dcfc0d7ca4f8a6bffc3df10ffbcdaeba6325c1fd63da200dbcc9550762e42ca9.scope: Deactivated successfully. Feb 13 20:02:53.963763 containerd[1466]: time="2025-02-13T20:02:53.963696425Z" level=info msg="shim disconnected" id=dcfc0d7ca4f8a6bffc3df10ffbcdaeba6325c1fd63da200dbcc9550762e42ca9 namespace=k8s.io Feb 13 20:02:53.963763 containerd[1466]: time="2025-02-13T20:02:53.963758522Z" level=warning msg="cleaning up after shim disconnected" id=dcfc0d7ca4f8a6bffc3df10ffbcdaeba6325c1fd63da200dbcc9550762e42ca9 namespace=k8s.io Feb 13 20:02:53.963763 containerd[1466]: time="2025-02-13T20:02:53.963767019Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:02:53.966277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcfc0d7ca4f8a6bffc3df10ffbcdaeba6325c1fd63da200dbcc9550762e42ca9-rootfs.mount: Deactivated successfully. 
Feb 13 20:02:53.977793 containerd[1466]: time="2025-02-13T20:02:53.977726791Z" level=warning msg="cleanup warnings time=\"2025-02-13T20:02:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 20:02:53.996371 containerd[1466]: time="2025-02-13T20:02:53.996311577Z" level=info msg="StopContainer for \"dcfc0d7ca4f8a6bffc3df10ffbcdaeba6325c1fd63da200dbcc9550762e42ca9\" returns successfully" Feb 13 20:02:53.996926 containerd[1466]: time="2025-02-13T20:02:53.996819372Z" level=info msg="StopPodSandbox for \"79abb8fe1d9d5198fb00a3385e95c88d0d9f0db5f70ce290e287f60b150bbe35\"" Feb 13 20:02:53.996926 containerd[1466]: time="2025-02-13T20:02:53.996855330Z" level=info msg="Container to stop \"dcfc0d7ca4f8a6bffc3df10ffbcdaeba6325c1fd63da200dbcc9550762e42ca9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:02:54.002333 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-79abb8fe1d9d5198fb00a3385e95c88d0d9f0db5f70ce290e287f60b150bbe35-shm.mount: Deactivated successfully. Feb 13 20:02:54.008024 systemd[1]: cri-containerd-79abb8fe1d9d5198fb00a3385e95c88d0d9f0db5f70ce290e287f60b150bbe35.scope: Deactivated successfully. Feb 13 20:02:54.032154 containerd[1466]: time="2025-02-13T20:02:54.031068061Z" level=info msg="shim disconnected" id=79abb8fe1d9d5198fb00a3385e95c88d0d9f0db5f70ce290e287f60b150bbe35 namespace=k8s.io Feb 13 20:02:54.032154 containerd[1466]: time="2025-02-13T20:02:54.031133276Z" level=warning msg="cleaning up after shim disconnected" id=79abb8fe1d9d5198fb00a3385e95c88d0d9f0db5f70ce290e287f60b150bbe35 namespace=k8s.io Feb 13 20:02:54.032154 containerd[1466]: time="2025-02-13T20:02:54.031141762Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:02:54.033435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79abb8fe1d9d5198fb00a3385e95c88d0d9f0db5f70ce290e287f60b150bbe35-rootfs.mount: Deactivated successfully. 
Feb 13 20:02:54.051971 containerd[1466]: time="2025-02-13T20:02:54.051931094Z" level=info msg="TearDown network for sandbox \"79abb8fe1d9d5198fb00a3385e95c88d0d9f0db5f70ce290e287f60b150bbe35\" successfully" Feb 13 20:02:54.051971 containerd[1466]: time="2025-02-13T20:02:54.051964968Z" level=info msg="StopPodSandbox for \"79abb8fe1d9d5198fb00a3385e95c88d0d9f0db5f70ce290e287f60b150bbe35\" returns successfully" Feb 13 20:02:54.231661 kubelet[2639]: I0213 20:02:54.231529 2639 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhtv6\" (UniqueName: \"kubernetes.io/projected/7b5364b4-50fd-4ce1-b857-5ce18dacc684-kube-api-access-lhtv6\") pod \"7b5364b4-50fd-4ce1-b857-5ce18dacc684\" (UID: \"7b5364b4-50fd-4ce1-b857-5ce18dacc684\") " Feb 13 20:02:54.231661 kubelet[2639]: I0213 20:02:54.231573 2639 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b5364b4-50fd-4ce1-b857-5ce18dacc684-tigera-ca-bundle\") pod \"7b5364b4-50fd-4ce1-b857-5ce18dacc684\" (UID: \"7b5364b4-50fd-4ce1-b857-5ce18dacc684\") " Feb 13 20:02:54.231661 kubelet[2639]: I0213 20:02:54.231601 2639 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7b5364b4-50fd-4ce1-b857-5ce18dacc684-typha-certs\") pod \"7b5364b4-50fd-4ce1-b857-5ce18dacc684\" (UID: \"7b5364b4-50fd-4ce1-b857-5ce18dacc684\") " Feb 13 20:02:54.235888 kubelet[2639]: I0213 20:02:54.235851 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b5364b4-50fd-4ce1-b857-5ce18dacc684-kube-api-access-lhtv6" (OuterVolumeSpecName: "kube-api-access-lhtv6") pod "7b5364b4-50fd-4ce1-b857-5ce18dacc684" (UID: "7b5364b4-50fd-4ce1-b857-5ce18dacc684"). InnerVolumeSpecName "kube-api-access-lhtv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 20:02:54.235963 kubelet[2639]: I0213 20:02:54.235894 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b5364b4-50fd-4ce1-b857-5ce18dacc684-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "7b5364b4-50fd-4ce1-b857-5ce18dacc684" (UID: "7b5364b4-50fd-4ce1-b857-5ce18dacc684"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 20:02:54.236845 kubelet[2639]: I0213 20:02:54.236816 2639 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b5364b4-50fd-4ce1-b857-5ce18dacc684-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "7b5364b4-50fd-4ce1-b857-5ce18dacc684" (UID: "7b5364b4-50fd-4ce1-b857-5ce18dacc684"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 20:02:54.237907 systemd[1]: var-lib-kubelet-pods-7b5364b4\x2d50fd\x2d4ce1\x2db857\x2d5ce18dacc684-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlhtv6.mount: Deactivated successfully. Feb 13 20:02:54.238029 systemd[1]: var-lib-kubelet-pods-7b5364b4\x2d50fd\x2d4ce1\x2db857\x2d5ce18dacc684-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. 
Feb 13 20:02:54.332268 kubelet[2639]: I0213 20:02:54.332167 2639 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b5364b4-50fd-4ce1-b857-5ce18dacc684-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Feb 13 20:02:54.332268 kubelet[2639]: I0213 20:02:54.332246 2639 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7b5364b4-50fd-4ce1-b857-5ce18dacc684-typha-certs\") on node \"localhost\" DevicePath \"\"" Feb 13 20:02:54.332268 kubelet[2639]: I0213 20:02:54.332257 2639 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-lhtv6\" (UniqueName: \"kubernetes.io/projected/7b5364b4-50fd-4ce1-b857-5ce18dacc684-kube-api-access-lhtv6\") on node \"localhost\" DevicePath \"\"" Feb 13 20:02:54.416466 kubelet[2639]: I0213 20:02:54.416423 2639 scope.go:117] "RemoveContainer" containerID="dcfc0d7ca4f8a6bffc3df10ffbcdaeba6325c1fd63da200dbcc9550762e42ca9" Feb 13 20:02:54.418328 containerd[1466]: time="2025-02-13T20:02:54.418284462Z" level=info msg="RemoveContainer for \"dcfc0d7ca4f8a6bffc3df10ffbcdaeba6325c1fd63da200dbcc9550762e42ca9\"" Feb 13 20:02:54.422580 systemd[1]: Removed slice kubepods-besteffort-pod7b5364b4_50fd_4ce1_b857_5ce18dacc684.slice - libcontainer container kubepods-besteffort-pod7b5364b4_50fd_4ce1_b857_5ce18dacc684.slice. Feb 13 20:02:54.431441 systemd[1]: var-lib-kubelet-pods-7b5364b4\x2d50fd\x2d4ce1\x2db857\x2d5ce18dacc684-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Feb 13 20:02:54.485681 containerd[1466]: time="2025-02-13T20:02:54.485557001Z" level=info msg="RemoveContainer for \"dcfc0d7ca4f8a6bffc3df10ffbcdaeba6325c1fd63da200dbcc9550762e42ca9\" returns successfully" Feb 13 20:02:54.485905 kubelet[2639]: I0213 20:02:54.485881 2639 scope.go:117] "RemoveContainer" containerID="dcfc0d7ca4f8a6bffc3df10ffbcdaeba6325c1fd63da200dbcc9550762e42ca9" Feb 13 20:02:54.508216 containerd[1466]: time="2025-02-13T20:02:54.494440338Z" level=error msg="ContainerStatus for \"dcfc0d7ca4f8a6bffc3df10ffbcdaeba6325c1fd63da200dbcc9550762e42ca9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dcfc0d7ca4f8a6bffc3df10ffbcdaeba6325c1fd63da200dbcc9550762e42ca9\": not found" Feb 13 20:02:54.508432 kubelet[2639]: E0213 20:02:54.508389 2639 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dcfc0d7ca4f8a6bffc3df10ffbcdaeba6325c1fd63da200dbcc9550762e42ca9\": not found" containerID="dcfc0d7ca4f8a6bffc3df10ffbcdaeba6325c1fd63da200dbcc9550762e42ca9" Feb 13 20:02:54.508503 kubelet[2639]: I0213 20:02:54.508431 2639 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dcfc0d7ca4f8a6bffc3df10ffbcdaeba6325c1fd63da200dbcc9550762e42ca9"} err="failed to get container status \"dcfc0d7ca4f8a6bffc3df10ffbcdaeba6325c1fd63da200dbcc9550762e42ca9\": rpc error: code = NotFound desc = an error occurred when try to find container \"dcfc0d7ca4f8a6bffc3df10ffbcdaeba6325c1fd63da200dbcc9550762e42ca9\": not found" Feb 13 20:02:55.025841 kubelet[2639]: I0213 20:02:55.025796 2639 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b5364b4-50fd-4ce1-b857-5ce18dacc684" path="/var/lib/kubelet/pods/7b5364b4-50fd-4ce1-b857-5ce18dacc684/volumes" Feb 13 20:02:57.863227 systemd[1]: Started sshd@25-10.0.0.119:22-10.0.0.1:39842.service - OpenSSH 
per-connection server daemon (10.0.0.1:39842). Feb 13 20:02:57.896621 sshd[6265]: Accepted publickey for core from 10.0.0.1 port 39842 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:02:57.898311 sshd[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:02:57.902108 systemd-logind[1454]: New session 26 of user core. Feb 13 20:02:57.911214 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 20:02:58.014545 sshd[6265]: pam_unix(sshd:session): session closed for user core Feb 13 20:02:58.018215 systemd[1]: sshd@25-10.0.0.119:22-10.0.0.1:39842.service: Deactivated successfully. Feb 13 20:02:58.020594 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 20:02:58.021354 systemd-logind[1454]: Session 26 logged out. Waiting for processes to exit. Feb 13 20:02:58.022395 systemd-logind[1454]: Removed session 26. Feb 13 20:03:01.024444 kubelet[2639]: E0213 20:03:01.023966 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:03:03.023819 kubelet[2639]: E0213 20:03:03.023775 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:03:03.028834 systemd[1]: Started sshd@26-10.0.0.119:22-10.0.0.1:33120.service - OpenSSH per-connection server daemon (10.0.0.1:33120). Feb 13 20:03:03.063997 sshd[6369]: Accepted publickey for core from 10.0.0.1 port 33120 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:03:03.065941 sshd[6369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:03:03.071439 systemd-logind[1454]: New session 27 of user core. Feb 13 20:03:03.080367 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 20:03:03.196887 sshd[6369]: pam_unix(sshd:session): session closed for user core Feb 13 20:03:03.203863 systemd[1]: sshd@26-10.0.0.119:22-10.0.0.1:33120.service: Deactivated successfully. Feb 13 20:03:03.207144 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 20:03:03.208691 systemd-logind[1454]: Session 27 logged out. Waiting for processes to exit. Feb 13 20:03:03.210409 systemd-logind[1454]: Removed session 27. Feb 13 20:03:04.022995 kubelet[2639]: E0213 20:03:04.022947 2639 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 20:03:08.209733 systemd[1]: Started sshd@27-10.0.0.119:22-10.0.0.1:33136.service - OpenSSH per-connection server daemon (10.0.0.1:33136). Feb 13 20:03:08.245981 sshd[6493]: Accepted publickey for core from 10.0.0.1 port 33136 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:03:08.247675 sshd[6493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:03:08.251788 systemd-logind[1454]: New session 28 of user core. Feb 13 20:03:08.258230 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 20:03:08.367649 sshd[6493]: pam_unix(sshd:session): session closed for user core Feb 13 20:03:08.372197 systemd[1]: sshd@27-10.0.0.119:22-10.0.0.1:33136.service: Deactivated successfully. Feb 13 20:03:08.374422 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 20:03:08.375008 systemd-logind[1454]: Session 28 logged out. 
Waiting for processes to exit. Feb 13 20:03:08.375903 systemd-logind[1454]: Removed session 28. Feb 13 20:03:13.379243 systemd[1]: Started sshd@28-10.0.0.119:22-10.0.0.1:42542.service - OpenSSH per-connection server daemon (10.0.0.1:42542). Feb 13 20:03:13.416724 sshd[6621]: Accepted publickey for core from 10.0.0.1 port 42542 ssh2: RSA SHA256:1AKUQv4hMaRYqQWlpL9sCc1VFFYvBMLLM0QK6OFmV8g Feb 13 20:03:13.419371 sshd[6621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:03:13.425045 systemd-logind[1454]: New session 29 of user core. Feb 13 20:03:13.434268 systemd[1]: Started session-29.scope - Session 29 of User core. Feb 13 20:03:13.542306 sshd[6621]: pam_unix(sshd:session): session closed for user core Feb 13 20:03:13.546654 systemd[1]: sshd@28-10.0.0.119:22-10.0.0.1:42542.service: Deactivated successfully. Feb 13 20:03:13.548624 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 20:03:13.549387 systemd-logind[1454]: Session 29 logged out. Waiting for processes to exit. Feb 13 20:03:13.550238 systemd-logind[1454]: Removed session 29.