Jan 30 13:46:01.890465 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 30 13:46:01.890489 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:46:01.890503 kernel: BIOS-provided physical RAM map: Jan 30 13:46:01.890510 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 30 13:46:01.890518 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 30 13:46:01.890526 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 30 13:46:01.890535 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 30 13:46:01.890543 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 30 13:46:01.890551 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 30 13:46:01.890562 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 30 13:46:01.890570 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 30 13:46:01.890577 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 30 13:46:01.890585 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 30 13:46:01.890593 kernel: NX (Execute Disable) protection: active Jan 30 13:46:01.890603 kernel: APIC: Static calls initialized Jan 30 13:46:01.890615 kernel: SMBIOS 2.8 present. 
Jan 30 13:46:01.890623 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 30 13:46:01.890631 kernel: Hypervisor detected: KVM
Jan 30 13:46:01.890639 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 13:46:01.890648 kernel: kvm-clock: using sched offset of 2227921539 cycles
Jan 30 13:46:01.890656 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 13:46:01.890665 kernel: tsc: Detected 2794.748 MHz processor
Jan 30 13:46:01.890674 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:46:01.890684 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:46:01.890692 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 30 13:46:01.890705 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 13:46:01.890714 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:46:01.890723 kernel: Using GB pages for direct mapping
Jan 30 13:46:01.890732 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:46:01.890742 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 30 13:46:01.890752 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:46:01.890761 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:46:01.890771 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:46:01.890783 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 30 13:46:01.890793 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:46:01.890802 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:46:01.890812 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:46:01.890822 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:46:01.890831 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 30 13:46:01.890840 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 30 13:46:01.890855 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 30 13:46:01.890867 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 30 13:46:01.890876 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 30 13:46:01.890886 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 30 13:46:01.890895 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 30 13:46:01.890903 kernel: No NUMA configuration found
Jan 30 13:46:01.890913 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 30 13:46:01.890923 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 30 13:46:01.890936 kernel: Zone ranges:
Jan 30 13:46:01.890945 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:46:01.890955 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 30 13:46:01.890965 kernel: Normal empty
Jan 30 13:46:01.890975 kernel: Movable zone start for each node
Jan 30 13:46:01.890984 kernel: Early memory node ranges
Jan 30 13:46:01.890994 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 13:46:01.891004 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 30 13:46:01.891014 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 30 13:46:01.891027 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:46:01.891037 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 13:46:01.891047 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 30 13:46:01.891056 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 13:46:01.891066 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 13:46:01.891076 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 13:46:01.891086 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 13:46:01.891096 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 13:46:01.891106 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:46:01.891119 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 13:46:01.891145 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 13:46:01.891154 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:46:01.891163 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 13:46:01.891172 kernel: TSC deadline timer available
Jan 30 13:46:01.891180 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 30 13:46:01.891189 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 13:46:01.891199 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 30 13:46:01.891209 kernel: kvm-guest: setup PV sched yield
Jan 30 13:46:01.891219 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 30 13:46:01.891232 kernel: Booting paravirtualized kernel on KVM
Jan 30 13:46:01.891241 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:46:01.891251 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 30 13:46:01.891260 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 30 13:46:01.891269 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 30 13:46:01.891278 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 30 13:46:01.891286 kernel: kvm-guest: PV spinlocks enabled
Jan 30 13:46:01.891295 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 13:46:01.891305 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:46:01.891317 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:46:01.891326 kernel: random: crng init done
Jan 30 13:46:01.891334 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:46:01.891343 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:46:01.891353 kernel: Fallback order for Node 0: 0
Jan 30 13:46:01.891362 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 30 13:46:01.891372 kernel: Policy zone: DMA32
Jan 30 13:46:01.891381 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:46:01.891395 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved)
Jan 30 13:46:01.891404 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 30 13:46:01.891414 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 13:46:01.891424 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:46:01.891433 kernel: Dynamic Preempt: voluntary
Jan 30 13:46:01.891443 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:46:01.891465 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:46:01.891475 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 30 13:46:01.891485 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:46:01.891499 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:46:01.891509 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:46:01.891519 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:46:01.891529 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 30 13:46:01.891539 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 30 13:46:01.891549 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:46:01.891558 kernel: Console: colour VGA+ 80x25
Jan 30 13:46:01.891568 kernel: printk: console [ttyS0] enabled
Jan 30 13:46:01.891578 kernel: ACPI: Core revision 20230628
Jan 30 13:46:01.891591 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 13:46:01.891602 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:46:01.891611 kernel: x2apic enabled
Jan 30 13:46:01.891621 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 13:46:01.891631 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 30 13:46:01.891641 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 30 13:46:01.891652 kernel: kvm-guest: setup PV IPIs
Jan 30 13:46:01.891675 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 13:46:01.891686 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 30 13:46:01.891697 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 30 13:46:01.891709 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 30 13:46:01.891719 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 30 13:46:01.891733 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 30 13:46:01.891743 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:46:01.891753 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 13:46:01.891763 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:46:01.891777 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:46:01.891788 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 30 13:46:01.891798 kernel: RETBleed: Mitigation: untrained return thunk
Jan 30 13:46:01.891808 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 13:46:01.891818 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 13:46:01.891828 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 30 13:46:01.891838 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 30 13:46:01.891848 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 30 13:46:01.891857 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:46:01.891870 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:46:01.891879 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:46:01.891889 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:46:01.891899 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 30 13:46:01.891909 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:46:01.891919 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:46:01.891929 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:46:01.891939 kernel: landlock: Up and running.
Jan 30 13:46:01.891948 kernel: SELinux: Initializing.
Jan 30 13:46:01.891961 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:46:01.891971 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:46:01.891981 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 30 13:46:01.891991 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:46:01.892001 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:46:01.892011 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:46:01.892021 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 30 13:46:01.892031 kernel: ... version: 0
Jan 30 13:46:01.892044 kernel: ... bit width: 48
Jan 30 13:46:01.892053 kernel: ... generic registers: 6
Jan 30 13:46:01.892063 kernel: ... value mask: 0000ffffffffffff
Jan 30 13:46:01.892074 kernel: ... max period: 00007fffffffffff
Jan 30 13:46:01.892083 kernel: ... fixed-purpose events: 0
Jan 30 13:46:01.892092 kernel: ... event mask: 000000000000003f
Jan 30 13:46:01.892101 kernel: signal: max sigframe size: 1776
Jan 30 13:46:01.892110 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:46:01.892159 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:46:01.892169 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:46:01.892182 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:46:01.892191 kernel: .... node #0, CPUs: #1 #2 #3
Jan 30 13:46:01.892202 kernel: smp: Brought up 1 node, 4 CPUs
Jan 30 13:46:01.892212 kernel: smpboot: Max logical packages: 1
Jan 30 13:46:01.892223 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 30 13:46:01.892233 kernel: devtmpfs: initialized
Jan 30 13:46:01.892243 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:46:01.892254 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:46:01.892264 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 30 13:46:01.892278 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:46:01.892289 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:46:01.892299 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:46:01.892310 kernel: audit: type=2000 audit(1738244761.439:1): state=initialized audit_enabled=0 res=1
Jan 30 13:46:01.892320 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:46:01.892331 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:46:01.892342 kernel: cpuidle: using governor menu
Jan 30 13:46:01.892352 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:46:01.892362 kernel: dca service started, version 1.12.1
Jan 30 13:46:01.892376 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 30 13:46:01.892387 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 30 13:46:01.892398 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:46:01.892408 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:46:01.892419 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 13:46:01.892429 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 13:46:01.892439 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:46:01.892458 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:46:01.892467 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:46:01.892480 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:46:01.892490 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:46:01.892499 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:46:01.892508 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 13:46:01.892518 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 13:46:01.892527 kernel: ACPI: Interpreter enabled Jan 30 13:46:01.892536 kernel: ACPI: PM: (supports S0 S3 S5) Jan 30 13:46:01.892545 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:46:01.892555 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:46:01.892567 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 13:46:01.892576 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 30 13:46:01.892586 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 13:46:01.892815 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 30 13:46:01.892977 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 30 13:46:01.893118 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 30 13:46:01.893159 kernel: PCI host bridge to bus 0000:00 Jan 30 13:46:01.893326 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 13:46:01.893466 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 13:46:01.893595 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 13:46:01.893724 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 30 13:46:01.893870 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 30 13:46:01.894016 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 30 13:46:01.894171 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 13:46:01.894356 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 30 13:46:01.894517 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 30 13:46:01.894654 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 30 13:46:01.894811 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 30 13:46:01.894972 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 30 13:46:01.895162 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 13:46:01.895333 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 30 13:46:01.895498 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 30 13:46:01.895649 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 30 13:46:01.895782 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 30 13:46:01.895913 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 30 13:46:01.896037 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 30 13:46:01.896178 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 30 
13:46:01.896306 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 30 13:46:01.896438 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 30 13:46:01.896569 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 30 13:46:01.896689 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 30 13:46:01.896811 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 30 13:46:01.896932 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 30 13:46:01.897060 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 30 13:46:01.897209 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 30 13:46:01.897337 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 30 13:46:01.897464 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 30 13:46:01.897582 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 30 13:46:01.897709 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 30 13:46:01.897828 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 30 13:46:01.897839 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 30 13:46:01.897851 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 30 13:46:01.897859 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 13:46:01.897867 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 30 13:46:01.897874 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 30 13:46:01.897882 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 30 13:46:01.897889 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 30 13:46:01.897897 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 30 13:46:01.897904 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 30 13:46:01.897912 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 30 13:46:01.897922 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 30 13:46:01.897929 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 30 13:46:01.897936 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 30 13:46:01.897944 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 30 13:46:01.897952 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 30 13:46:01.897959 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 30 13:46:01.897967 kernel: iommu: Default domain type: Translated Jan 30 13:46:01.897975 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:46:01.897982 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:46:01.897992 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 13:46:01.898000 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 30 13:46:01.898007 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 30 13:46:01.898142 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 30 13:46:01.898265 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 30 13:46:01.898385 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 13:46:01.898395 kernel: vgaarb: loaded Jan 30 13:46:01.898403 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 30 13:46:01.898415 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 30 13:46:01.898423 kernel: clocksource: Switched to clocksource kvm-clock Jan 30 13:46:01.898431 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 
13:46:01.898438 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:46:01.898446 kernel: pnp: PnP ACPI init Jan 30 13:46:01.898587 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 30 13:46:01.898598 kernel: pnp: PnP ACPI: found 6 devices Jan 30 13:46:01.898608 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:46:01.898630 kernel: NET: Registered PF_INET protocol family Jan 30 13:46:01.898642 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 13:46:01.898653 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 30 13:46:01.898664 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:46:01.898675 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 13:46:01.898686 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 30 13:46:01.898697 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 30 13:46:01.898708 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 13:46:01.898719 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 13:46:01.898731 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:46:01.898738 kernel: NET: Registered PF_XDP protocol family Jan 30 13:46:01.898864 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 13:46:01.899036 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 13:46:01.899177 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 13:46:01.899287 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 30 13:46:01.899406 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 30 13:46:01.899541 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 30 13:46:01.899557 kernel: PCI: CLS 0 bytes, default 64 Jan 30 13:46:01.899566 kernel: Initialise system trusted keyrings Jan 30 13:46:01.899574 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 30 13:46:01.899581 kernel: Key type asymmetric registered Jan 30 13:46:01.899589 kernel: Asymmetric key parser 'x509' registered Jan 30 13:46:01.899597 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:46:01.899604 kernel: io scheduler mq-deadline registered Jan 30 13:46:01.899612 kernel: io scheduler kyber registered Jan 30 13:46:01.899619 kernel: io scheduler bfq registered Jan 30 13:46:01.899630 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:46:01.899638 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 30 13:46:01.899646 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 30 13:46:01.899653 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 30 13:46:01.899661 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:46:01.899669 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:46:01.899677 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 13:46:01.899684 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 13:46:01.899692 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 13:46:01.899831 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 30 13:46:01.899849 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 13:46:01.899989 kernel: 
rtc_cmos 00:04: registered as rtc0 Jan 30 13:46:01.900139 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T13:46:01 UTC (1738244761) Jan 30 13:46:01.900329 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 30 13:46:01.900340 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 30 13:46:01.900348 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:46:01.900355 kernel: Segment Routing with IPv6 Jan 30 13:46:01.900369 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:46:01.900380 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:46:01.900390 kernel: Key type dns_resolver registered Jan 30 13:46:01.900400 kernel: IPI shorthand broadcast: enabled Jan 30 13:46:01.900411 kernel: sched_clock: Marking stable (595002324, 105255274)->(715126387, -14868789) Jan 30 13:46:01.900421 kernel: registered taskstats version 1 Jan 30 13:46:01.900430 kernel: Loading compiled-in X.509 certificates Jan 30 13:46:01.900440 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 13:46:01.900463 kernel: Key type .fscrypt registered Jan 30 13:46:01.900476 kernel: Key type fscrypt-provisioning registered Jan 30 13:46:01.900486 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 13:46:01.900494 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:46:01.900502 kernel: ima: No architecture policies found Jan 30 13:46:01.900510 kernel: clk: Disabling unused clocks Jan 30 13:46:01.900517 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 13:46:01.900525 kernel: Write protecting the kernel read-only data: 36864k Jan 30 13:46:01.900543 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 13:46:01.900551 kernel: Run /init as init process Jan 30 13:46:01.900568 kernel: with arguments: Jan 30 13:46:01.900576 kernel: /init Jan 30 13:46:01.900583 kernel: with environment: Jan 30 13:46:01.900591 kernel: HOME=/ Jan 30 13:46:01.900598 kernel: TERM=linux Jan 30 13:46:01.900606 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:46:01.900616 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:46:01.900626 systemd[1]: Detected virtualization kvm. Jan 30 13:46:01.900637 systemd[1]: Detected architecture x86-64. Jan 30 13:46:01.900645 systemd[1]: Running in initrd. Jan 30 13:46:01.900652 systemd[1]: No hostname configured, using default hostname. Jan 30 13:46:01.900660 systemd[1]: Hostname set to . Jan 30 13:46:01.900668 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:46:01.900676 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:46:01.900684 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:46:01.900692 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:46:01.900704 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:46:01.900723 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 30 13:46:01.900734 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:46:01.900743 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:46:01.900752 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:46:01.900763 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:46:01.900771 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:46:01.900779 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:46:01.900788 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:46:01.900796 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:46:01.900804 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:46:01.900812 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:46:01.900821 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:46:01.900831 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:46:01.900842 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:46:01.900850 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:46:01.900858 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:46:01.900867 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:46:01.900875 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:46:01.900883 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:46:01.900891 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:46:01.900902 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:46:01.900910 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:46:01.900918 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:46:01.900926 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:46:01.900934 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:46:01.900942 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:46:01.900951 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:46:01.900959 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:46:01.900967 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:46:01.901001 systemd-journald[192]: Collecting audit messages is disabled. Jan 30 13:46:01.901023 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:46:01.901034 systemd-journald[192]: Journal started Jan 30 13:46:01.901055 systemd-journald[192]: Runtime Journal (/run/log/journal/1518a8d8f5d24fce98989b8e82624d05) is 6.0M, max 48.4M, 42.3M free. Jan 30 13:46:01.900595 systemd-modules-load[194]: Inserted module 'overlay' Jan 30 13:46:01.939198 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:46:01.939225 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jan 30 13:46:01.939241 kernel: Bridge firewalling registered Jan 30 13:46:01.936975 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 30 13:46:01.950540 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:46:01.951161 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:46:01.955858 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:46:01.957096 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:46:01.961625 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:46:01.967007 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:46:01.979306 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:46:01.980981 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:46:01.983756 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:46:01.986326 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:46:01.989916 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:46:01.992144 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:46:01.992819 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:46:02.006085 dracut-cmdline[225]: dracut-dracut-053 Jan 30 13:46:02.009177 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:46:02.032171 systemd-resolved[226]: Positive Trust Anchors: Jan 30 13:46:02.032187 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:46:02.032218 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:46:02.034697 systemd-resolved[226]: Defaulting to hostname 'linux'. Jan 30 13:46:02.035728 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:46:02.041771 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:46:02.101164 kernel: SCSI subsystem initialized Jan 30 13:46:02.110151 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:46:02.120143 kernel: iscsi: registered transport (tcp) Jan 30 13:46:02.143452 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:46:02.143484 kernel: QLogic iSCSI HBA Driver Jan 30 13:46:02.186586 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 30 13:46:02.219263 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:46:02.242150 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:46:02.242181 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:46:02.242192 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:46:02.282148 kernel: raid6: avx2x4 gen() 30421 MB/s Jan 30 13:46:02.299142 kernel: raid6: avx2x2 gen() 30930 MB/s Jan 30 13:46:02.316210 kernel: raid6: avx2x1 gen() 25989 MB/s Jan 30 13:46:02.316228 kernel: raid6: using algorithm avx2x2 gen() 30930 MB/s Jan 30 13:46:02.335552 kernel: raid6: .... xor() 20002 MB/s, rmw enabled Jan 30 13:46:02.335582 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:46:02.355144 kernel: xor: automatically using best checksumming function avx Jan 30 13:46:02.503155 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:46:02.513487 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:46:02.519279 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:46:02.534807 systemd-udevd[412]: Using default interface naming scheme 'v255'. Jan 30 13:46:02.539268 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:46:02.550269 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:46:02.561952 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Jan 30 13:46:02.589869 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:46:02.599286 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:46:02.658886 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:46:02.667313 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:46:02.677406 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:46:02.681244 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:46:02.684236 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:46:02.687042 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:46:02.696317 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 30 13:46:02.703887 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 30 13:46:02.704207 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 13:46:02.704224 kernel: GPT:9289727 != 19775487 Jan 30 13:46:02.704238 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:46:02.704259 kernel: GPT:9289727 != 19775487 Jan 30 13:46:02.704272 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:46:02.704286 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:46:02.697285 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:46:02.707553 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:46:02.709769 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:46:02.722469 kernel: libata version 3.00 loaded. Jan 30 13:46:02.726393 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:46:02.733516 kernel: AVX2 version of gcm_enc/dec engaged. 
Jan 30 13:46:02.726468 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:46:02.737843 kernel: ahci 0000:00:1f.2: version 3.0 Jan 30 13:46:02.763813 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 30 13:46:02.763834 kernel: AES CTR mode by8 optimization enabled Jan 30 13:46:02.763846 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 30 13:46:02.763995 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 30 13:46:02.764202 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (463) Jan 30 13:46:02.764219 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (468) Jan 30 13:46:02.764231 kernel: scsi host0: ahci Jan 30 13:46:02.764381 kernel: scsi host1: ahci Jan 30 13:46:02.764531 kernel: scsi host2: ahci Jan 30 13:46:02.764670 kernel: scsi host3: ahci Jan 30 13:46:02.764813 kernel: scsi host4: ahci Jan 30 13:46:02.764949 kernel: scsi host5: ahci Jan 30 13:46:02.765090 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 30 13:46:02.765102 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 30 13:46:02.765112 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 30 13:46:02.765134 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 30 13:46:02.765145 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 30 13:46:02.765156 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 30 13:46:02.728191 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:46:02.729312 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:46:02.729366 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:46:02.730644 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:46:02.746305 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:46:02.766495 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 13:46:02.805709 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:46:02.812481 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 13:46:02.826805 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:46:02.832729 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 13:46:02.833984 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 13:46:02.848251 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:46:02.850048 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:46:02.894618 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 13:46:03.071273 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 13:46:03.071352 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 30 13:46:03.071366 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 30 13:46:03.073044 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 30 13:46:03.073150 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 13:46:03.074146 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 13:46:03.075162 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 30 13:46:03.076349 kernel: ata3.00: applying bridge limits Jan 30 13:46:03.076361 kernel: ata3.00: configured for UDMA/100 Jan 30 13:46:03.077153 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 30 13:46:03.124728 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 30 13:46:03.136799 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 13:46:03.136813 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 30 13:46:03.159480 disk-uuid[554]: Primary Header is updated. Jan 30 13:46:03.159480 disk-uuid[554]: Secondary Entries is updated. Jan 30 13:46:03.159480 disk-uuid[554]: Secondary Header is updated. Jan 30 13:46:03.164154 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:46:03.168153 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:46:04.169896 disk-uuid[575]: The operation has completed successfully. Jan 30 13:46:04.171251 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:46:04.198244 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:46:04.198382 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:46:04.223326 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:46:04.227283 sh[591]: Success Jan 30 13:46:04.246224 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 30 13:46:04.285756 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:46:04.305685 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:46:04.311232 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 13:46:04.320484 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 13:46:04.320512 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:46:04.320523 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:46:04.321520 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:46:04.322253 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:46:04.327196 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:46:04.329603 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:46:04.344314 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:46:04.346394 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 30 13:46:04.355711 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:46:04.355746 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:46:04.355758 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:46:04.359157 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:46:04.368674 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:46:04.371181 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:46:04.380810 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:46:04.390666 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:46:04.446029 ignition[680]: Ignition 2.19.0 Jan 30 13:46:04.446041 ignition[680]: Stage: fetch-offline Jan 30 13:46:04.446075 ignition[680]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:46:04.446085 ignition[680]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:46:04.446207 ignition[680]: parsed url from cmdline: "" Jan 30 13:46:04.446212 ignition[680]: no config URL provided Jan 30 13:46:04.446218 ignition[680]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:46:04.446228 ignition[680]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:46:04.446255 ignition[680]: op(1): [started] loading QEMU firmware config module Jan 30 13:46:04.446261 ignition[680]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 30 13:46:04.457231 ignition[680]: op(1): [finished] loading QEMU firmware config module Jan 30 13:46:04.475322 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:46:04.488281 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:46:04.509937 ignition[680]: parsing config with SHA512: 73745b517022915a319393fc9b04d036aae46f3a8ffa6910ee5089898c4dbebcb7c07e1e32b91879fb23e029dee72d03f7342e68142b1a037004a9f369d4c5bb Jan 30 13:46:04.509940 systemd-networkd[780]: lo: Link UP Jan 30 13:46:04.509946 systemd-networkd[780]: lo: Gained carrier Jan 30 13:46:04.512902 systemd-networkd[780]: Enumeration completed Jan 30 13:46:04.513281 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:46:04.513606 systemd[1]: Reached target network.target - Network. Jan 30 13:46:04.513967 unknown[680]: fetched base config from "system" Jan 30 13:46:04.515258 ignition[680]: fetch-offline: fetch-offline passed Jan 30 13:46:04.514175 unknown[680]: fetched user config from "qemu" Jan 30 13:46:04.515347 ignition[680]: Ignition finished successfully Jan 30 13:46:04.517803 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:46:04.518715 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 13:46:04.522267 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:46:04.526244 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:46:04.526249 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 30 13:46:04.530241 systemd-networkd[780]: eth0: Link UP Jan 30 13:46:04.530250 systemd-networkd[780]: eth0: Gained carrier Jan 30 13:46:04.530256 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:46:04.542207 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.69/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:46:04.546060 ignition[784]: Ignition 2.19.0 Jan 30 13:46:04.546075 ignition[784]: Stage: kargs Jan 30 13:46:04.546310 ignition[784]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:46:04.546325 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:46:04.547399 ignition[784]: kargs: kargs passed Jan 30 13:46:04.547446 ignition[784]: Ignition finished successfully Jan 30 13:46:04.554789 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:46:04.566257 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:46:04.578754 ignition[792]: Ignition 2.19.0 Jan 30 13:46:04.578766 ignition[792]: Stage: disks Jan 30 13:46:04.578945 ignition[792]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:46:04.578958 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:46:04.579917 ignition[792]: disks: disks passed Jan 30 13:46:04.582209 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:46:04.579972 ignition[792]: Ignition finished successfully Jan 30 13:46:04.584103 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:46:04.585974 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:46:04.587270 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:46:04.589005 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:46:04.590060 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:46:04.601363 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:46:04.613397 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 13:46:04.620715 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:46:04.639203 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:46:04.722158 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 13:46:04.723068 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:46:04.724119 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:46:04.737204 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:46:04.739234 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:46:04.740624 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 13:46:04.740671 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Jan 30 13:46:04.751650 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (810) Jan 30 13:46:04.751673 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:46:04.751684 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:46:04.751695 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:46:04.740697 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:46:04.754800 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:46:04.747662 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:46:04.752880 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:46:04.756677 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:46:04.788521 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:46:04.792790 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:46:04.797521 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:46:04.800860 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:46:04.879620 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:46:04.892214 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:46:04.893332 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:46:04.901144 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:46:04.918087 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:46:04.921536 ignition[924]: INFO : Ignition 2.19.0 Jan 30 13:46:04.921536 ignition[924]: INFO : Stage: mount Jan 30 13:46:04.923090 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:46:04.923090 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:46:04.923090 ignition[924]: INFO : mount: mount passed Jan 30 13:46:04.923090 ignition[924]: INFO : Ignition finished successfully Jan 30 13:46:04.928299 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:46:04.938239 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:46:05.319841 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:46:05.331279 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:46:05.337168 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (937) Jan 30 13:46:05.339200 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:46:05.339222 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:46:05.339233 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:46:05.342149 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:46:05.343598 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:46:05.362498 ignition[954]: INFO : Ignition 2.19.0 Jan 30 13:46:05.362498 ignition[954]: INFO : Stage: files Jan 30 13:46:05.364210 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:46:05.364210 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:46:05.364210 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:46:05.368026 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:46:05.368026 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:46:05.372436 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:46:05.373853 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:46:05.375491 unknown[954]: wrote ssh authorized keys file for user: core Jan 30 13:46:05.376596 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:46:05.378631 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 13:46:05.380547 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 30 13:46:05.423444 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:46:05.514141 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 13:46:05.516361 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:46:05.516361 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:46:05.516361 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:46:05.516361 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:46:05.516361 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:46:05.516361 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:46:05.516361 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:46:05.516361 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:46:05.516361 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:46:05.516361 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:46:05.516361 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:46:05.516361 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:46:05.516361 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:46:05.516361 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 30 13:46:06.013793 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 13:46:06.168402 systemd-networkd[780]: eth0: Gained IPv6LL Jan 30 13:46:06.397344 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:46:06.397344 ignition[954]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 13:46:06.401476 ignition[954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:46:06.404059 ignition[954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:46:06.404059 ignition[954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 13:46:06.404059 ignition[954]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 30 13:46:06.409009 ignition[954]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:46:06.411281 ignition[954]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:46:06.411281 ignition[954]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 30 13:46:06.414876 ignition[954]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 30 13:46:06.439250 ignition[954]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:46:06.444287 ignition[954]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:46:06.446241 ignition[954]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 30 13:46:06.446241 ignition[954]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:46:06.446241 ignition[954]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:46:06.446241 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:46:06.446241 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:46:06.446241 ignition[954]: INFO : files: files passed Jan 30 13:46:06.446241 ignition[954]: INFO : Ignition finished successfully Jan 30 13:46:06.459688 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:46:06.467268 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:46:06.469105 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 30 13:46:06.471264 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:46:06.471379 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:46:06.477627 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory Jan 30 13:46:06.480176 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:46:06.480176 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:46:06.483470 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:46:06.487242 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:46:06.487676 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:46:06.495303 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:46:06.519650 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:46:06.520729 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:46:06.523361 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:46:06.525397 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:46:06.527396 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:46:06.535241 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:46:06.550343 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:46:06.569285 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:46:06.577644 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:46:06.580105 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:46:06.582514 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:46:06.584335 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:46:06.585347 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:46:06.587870 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:46:06.589908 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:46:06.591747 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:46:06.593917 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:46:06.596247 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:46:06.598495 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:46:06.600559 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:46:06.603022 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:46:06.605090 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:46:06.607144 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:46:06.608773 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:46:06.609791 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:46:06.612051 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
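
The files stage that finished above wrote the core user's ssh key, the helm tarball, several small YAML files, /etc/flatcar/update.conf, the kubernetes sysext image plus its /etc/extensions link, and set unit presets (prepare-helm.service enabled, coreos-metadata.service disabled). The Ignition config that drove this is not shown in the log; below is only a hypothetical Python sketch of a config fragment describing roughly the same operations (a subset of the files), with the spec version, ssh key and inline contents as placeholders and only the paths and URLs taken from the log:

    # Hypothetical Ignition-style config mirroring the files-stage operations above.
    # Spec version, ssh key, unit bodies and inline contents are placeholders;
    # the paths and download URLs are the ones that appear in the log.
    import json

    BAKERY = "https://github.com/flatcar/sysext-bakery/releases/download/latest"
    config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {"users": [
            {"name": "core",
             "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder)"]},
        ]},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw",
                 "contents": {"source": f"{BAKERY}/kubernetes-v1.32.0-x86-64.raw"}},
                {"path": "/etc/flatcar/update.conf",
                 "contents": {"source": "data:,"}},  # contents elided
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"},
            ],
        },
        "systemd": {"units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"},  # body elided
            {"name": "coreos-metadata.service", "enabled": False},
        ]},
    }
    print(json.dumps(config, indent=2))
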
Jan 30 13:46:06.614235 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:46:06.616593 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:46:06.617571 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:46:06.620724 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:46:06.621748 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:46:06.623981 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:46:06.625079 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:46:06.627471 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:46:06.629227 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:46:06.630305 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:46:06.633018 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:46:06.634853 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:46:06.636719 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:46:06.637627 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:46:06.639587 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:46:06.640504 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:46:06.642576 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:46:06.643765 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:46:06.646285 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:46:06.647281 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:46:06.656264 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:46:06.658146 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:46:06.659235 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:46:06.662481 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:46:06.663386 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:46:06.664375 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:46:06.669785 ignition[1008]: INFO : Ignition 2.19.0 Jan 30 13:46:06.669785 ignition[1008]: INFO : Stage: umount Jan 30 13:46:06.669785 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:46:06.669785 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:46:06.669785 ignition[1008]: INFO : umount: umount passed Jan 30 13:46:06.669785 ignition[1008]: INFO : Ignition finished successfully Jan 30 13:46:06.667350 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:46:06.667451 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:46:06.678754 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:46:06.679818 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:46:06.683564 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:46:06.684599 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:46:06.687593 systemd[1]: Stopped target network.target - Network. 
Jan 30 13:46:06.689330 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:46:06.689398 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:46:06.692430 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:46:06.693333 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:46:06.695396 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:46:06.696292 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:46:06.698178 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:46:06.699172 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:46:06.701496 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:46:06.703937 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:46:06.705177 systemd-networkd[780]: eth0: DHCPv6 lease lost Jan 30 13:46:06.708196 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:46:06.709887 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:46:06.711134 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:46:06.714153 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:46:06.715398 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:46:06.720415 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:46:06.720482 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:46:06.731295 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:46:06.731606 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:46:06.731682 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:46:06.733952 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:46:06.734001 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:46:06.736539 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:46:06.736589 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:46:06.738893 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:46:06.738941 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:46:06.741156 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:46:06.758081 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:46:06.759187 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:46:06.761694 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:46:06.762785 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:46:06.766677 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:46:06.767797 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:46:06.770039 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:46:06.770088 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:46:06.773305 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:46:06.773368 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jan 30 13:46:06.776372 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:46:06.777286 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:46:06.779450 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:46:06.779519 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:46:06.796346 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:46:06.796851 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:46:06.796929 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:46:06.797429 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:46:06.797492 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:46:06.804181 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:46:06.804336 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:46:06.910874 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:46:06.911016 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:46:06.913377 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:46:06.914350 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:46:06.914402 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:46:06.925256 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:46:06.932364 systemd[1]: Switching root. Jan 30 13:46:06.959389 systemd-journald[192]: Journal stopped Jan 30 13:46:08.200845 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Jan 30 13:46:08.200915 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:46:08.200932 kernel: SELinux: policy capability open_perms=1 Jan 30 13:46:08.200948 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:46:08.200960 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:46:08.200976 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:46:08.200987 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:46:08.200998 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:46:08.201009 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:46:08.201023 kernel: audit: type=1403 audit(1738244767.474:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:46:08.201036 systemd[1]: Successfully loaded SELinux policy in 43.731ms. Jan 30 13:46:08.201055 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.917ms. Jan 30 13:46:08.201068 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:46:08.201081 systemd[1]: Detected virtualization kvm. Jan 30 13:46:08.201093 systemd[1]: Detected architecture x86-64. Jan 30 13:46:08.201105 systemd[1]: Detected first boot. Jan 30 13:46:08.201117 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:46:08.201145 zram_generator::config[1052]: No configuration found. Jan 30 13:46:08.201169 systemd[1]: Populated /etc with preset unit settings. 
Jan 30 13:46:08.201185 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:46:08.201199 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:46:08.201212 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:46:08.201225 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:46:08.201237 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:46:08.201249 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:46:08.201260 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:46:08.201275 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:46:08.201295 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:46:08.201308 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:46:08.201320 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:46:08.201332 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:46:08.201344 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:46:08.201356 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:46:08.201368 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:46:08.201381 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:46:08.201400 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:46:08.201415 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:46:08.201431 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:46:08.201445 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:46:08.201456 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:46:08.201468 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:46:08.201480 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:46:08.201495 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:46:08.201507 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:46:08.201519 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:46:08.201531 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:46:08.201543 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:46:08.201554 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:46:08.201566 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:46:08.201579 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:46:08.201595 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:46:08.201610 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:46:08.201628 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 30 13:46:08.201644 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:46:08.201657 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:46:08.201671 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:46:08.201683 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:46:08.201695 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:46:08.201711 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:46:08.201723 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:46:08.201737 systemd[1]: Reached target machines.target - Containers. Jan 30 13:46:08.201750 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:46:08.201763 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:46:08.201778 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:46:08.201794 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:46:08.201807 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:46:08.201818 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:46:08.201830 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:46:08.201842 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:46:08.201856 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:46:08.201868 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:46:08.201880 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:46:08.201892 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:46:08.201904 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:46:08.201916 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:46:08.201928 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:46:08.201940 kernel: fuse: init (API version 7.39) Jan 30 13:46:08.201952 kernel: loop: module loaded Jan 30 13:46:08.201966 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:46:08.201978 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:46:08.201990 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:46:08.202001 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:46:08.202031 systemd-journald[1119]: Collecting audit messages is disabled. Jan 30 13:46:08.202057 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:46:08.202069 systemd[1]: Stopped verity-setup.service. Jan 30 13:46:08.202083 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 30 13:46:08.202095 systemd-journald[1119]: Journal started Jan 30 13:46:08.202116 systemd-journald[1119]: Runtime Journal (/run/log/journal/1518a8d8f5d24fce98989b8e82624d05) is 6.0M, max 48.4M, 42.3M free. Jan 30 13:46:07.982931 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:46:07.998770 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 13:46:07.999240 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:46:08.205745 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:46:08.206832 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:46:08.208146 kernel: ACPI: bus type drm_connector registered Jan 30 13:46:08.208769 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:46:08.209980 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:46:08.211073 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:46:08.212335 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:46:08.213598 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:46:08.214836 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:46:08.216328 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:46:08.217958 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:46:08.218141 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:46:08.219677 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:46:08.219845 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:46:08.221388 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:46:08.221553 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:46:08.222909 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:46:08.223072 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:46:08.224575 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:46:08.224741 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:46:08.226161 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:46:08.226336 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:46:08.227705 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:46:08.229073 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:46:08.230581 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:46:08.245643 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:46:08.255247 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:46:08.257574 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:46:08.258721 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:46:08.258752 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:46:08.261016 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Jan 30 13:46:08.263452 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:46:08.265654 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:46:08.266800 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:46:08.269249 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:46:08.272469 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:46:08.273765 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:46:08.274993 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:46:08.276290 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:46:08.281047 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:46:08.284540 systemd-journald[1119]: Time spent on flushing to /var/log/journal/1518a8d8f5d24fce98989b8e82624d05 is 27.659ms for 948 entries. Jan 30 13:46:08.284540 systemd-journald[1119]: System Journal (/var/log/journal/1518a8d8f5d24fce98989b8e82624d05) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:46:08.476101 systemd-journald[1119]: Received client request to flush runtime journal. Jan 30 13:46:08.476191 kernel: loop0: detected capacity change from 0 to 142488 Jan 30 13:46:08.476226 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:46:08.476248 kernel: loop1: detected capacity change from 0 to 140768 Jan 30 13:46:08.285835 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:46:08.288113 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:46:08.292946 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:46:08.294568 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:46:08.296062 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:46:08.324548 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:46:08.329860 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:46:08.343252 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 13:46:08.356094 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:46:08.364643 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:46:08.374136 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:46:08.393337 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jan 30 13:46:08.393351 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jan 30 13:46:08.394029 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:46:08.395669 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:46:08.407742 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Jan 30 13:46:08.410383 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:46:08.478003 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:46:08.482883 kernel: loop2: detected capacity change from 0 to 218376 Jan 30 13:46:08.492565 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:46:08.493413 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:46:08.521177 kernel: loop3: detected capacity change from 0 to 142488 Jan 30 13:46:08.534166 kernel: loop4: detected capacity change from 0 to 140768 Jan 30 13:46:08.544164 kernel: loop5: detected capacity change from 0 to 218376 Jan 30 13:46:08.549299 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 30 13:46:08.549968 (sd-merge)[1192]: Merged extensions into '/usr'. Jan 30 13:46:08.555517 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:46:08.555659 systemd[1]: Reloading... Jan 30 13:46:08.617197 zram_generator::config[1215]: No configuration found. Jan 30 13:46:08.711221 ldconfig[1161]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:46:08.759712 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:46:08.808718 systemd[1]: Reloading finished in 252 ms. Jan 30 13:46:08.848058 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:46:08.849533 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:46:08.861347 systemd[1]: Starting ensure-sysext.service... Jan 30 13:46:08.863564 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:46:08.871861 systemd[1]: Reloading requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:46:08.871881 systemd[1]: Reloading... Jan 30 13:46:08.888854 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:46:08.889282 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:46:08.890297 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:46:08.890596 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Jan 30 13:46:08.890695 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Jan 30 13:46:08.894497 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:46:08.894510 systemd-tmpfiles[1256]: Skipping /boot Jan 30 13:46:08.912061 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:46:08.912933 systemd-tmpfiles[1256]: Skipping /boot Jan 30 13:46:08.936160 zram_generator::config[1287]: No configuration found. Jan 30 13:46:09.029089 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:46:09.078767 systemd[1]: Reloading finished in 206 ms. 
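
The sd-merge step above picked up the containerd-flatcar, docker-flatcar and kubernetes extensions and merged them into /usr. A small sketch, assuming the default systemd-sysext search paths and no custom configuration, of how such images are discovered on disk (for example the kubernetes.raw link Ignition placed under /etc/extensions):

    # Sketch: list system extension images where systemd-sysext looks for them.
    from pathlib import Path

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in SEARCH_DIRS:
        path = Path(d)
        if not path.is_dir():
            continue
        for entry in sorted(path.iterdir()):
            # *.raw images and plain directory trees are both accepted.
            kind = "image" if entry.suffix == ".raw" else "directory"
            print(f"{d}/{entry.name} ({kind})")

On a booted system, the systemd-sysext status command reports the same discovery along with which hierarchies were actually merged.
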
Jan 30 13:46:09.097651 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:46:09.107689 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:46:09.117792 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:46:09.120403 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:46:09.123278 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:46:09.127358 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:46:09.134247 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:46:09.137324 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:46:09.141786 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:46:09.141954 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:46:09.144843 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:46:09.152745 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:46:09.158164 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:46:09.159684 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:46:09.164221 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:46:09.165478 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:46:09.166639 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:46:09.166867 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:46:09.168921 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:46:09.170951 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:46:09.171163 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:46:09.173064 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Jan 30 13:46:09.173405 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:46:09.173600 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:46:09.183340 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:46:09.184571 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:46:09.192434 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:46:09.193805 augenrules[1352]: No rules Jan 30 13:46:09.196303 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:46:09.203212 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:46:09.204842 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:46:09.206809 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jan 30 13:46:09.208169 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:46:09.208945 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:46:09.213386 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:46:09.215724 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:46:09.217578 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:46:09.217764 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:46:09.219629 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:46:09.219871 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:46:09.222491 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:46:09.230825 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:46:09.234460 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:46:09.234740 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:46:09.250409 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:46:09.264011 systemd[1]: Finished ensure-sysext.service. Jan 30 13:46:09.272494 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:46:09.272787 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:46:09.273332 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:46:09.280352 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:46:09.286827 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:46:09.291278 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:46:09.294196 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1367) Jan 30 13:46:09.296324 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:46:09.298564 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:46:09.305335 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:46:09.310697 systemd-resolved[1326]: Positive Trust Anchors: Jan 30 13:46:09.310733 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:46:09.311090 systemd-resolved[1326]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:46:09.311180 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:46:09.312778 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:46:09.312818 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:46:09.313651 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:46:09.313881 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:46:09.315978 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:46:09.316219 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:46:09.316523 systemd-resolved[1326]: Defaulting to hostname 'linux'. Jan 30 13:46:09.318379 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:46:09.318620 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:46:09.320314 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:46:09.321984 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:46:09.322164 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:46:09.340154 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 13:46:09.345154 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:46:09.355336 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:46:09.356807 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:46:09.370636 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 30 13:46:09.371901 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 30 13:46:09.372081 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 30 13:46:09.369818 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:46:09.373222 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:46:09.373309 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:46:09.377168 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 30 13:46:09.401576 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 30 13:46:09.409997 systemd-networkd[1402]: lo: Link UP Jan 30 13:46:09.410320 systemd-networkd[1402]: lo: Gained carrier Jan 30 13:46:09.411897 systemd-networkd[1402]: Enumeration completed Jan 30 13:46:09.412025 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:46:09.413073 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:46:09.413154 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:46:09.414396 systemd-networkd[1402]: eth0: Link UP Jan 30 13:46:09.414449 systemd-networkd[1402]: eth0: Gained carrier Jan 30 13:46:09.414495 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:46:09.415340 systemd[1]: Reached target network.target - Network. Jan 30 13:46:09.425176 systemd-networkd[1402]: eth0: DHCPv4 address 10.0.0.69/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:46:09.425307 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:46:09.459782 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:46:09.480149 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:46:09.487885 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:46:09.489470 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:46:10.028820 systemd-resolved[1326]: Clock change detected. Flushing caches. Jan 30 13:46:10.030161 systemd-timesyncd[1403]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 30 13:46:10.030234 systemd-timesyncd[1403]: Initial clock synchronization to Thu 2025-01-30 13:46:10.028577 UTC. Jan 30 13:46:10.031854 kernel: kvm_amd: TSC scaling supported Jan 30 13:46:10.031902 kernel: kvm_amd: Nested Virtualization enabled Jan 30 13:46:10.031915 kernel: kvm_amd: Nested Paging enabled Jan 30 13:46:10.032637 kernel: kvm_amd: LBR virtualization supported Jan 30 13:46:10.032680 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 30 13:46:10.033302 kernel: kvm_amd: Virtual GIF supported Jan 30 13:46:10.055020 kernel: EDAC MC: Ver: 3.0.0 Jan 30 13:46:10.083242 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:46:10.105228 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:46:10.122301 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:46:10.130659 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:46:10.174395 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:46:10.175979 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:46:10.177142 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:46:10.178328 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:46:10.179609 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:46:10.181084 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:46:10.182329 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
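
The DHCPv4 lease above puts eth0 at 10.0.0.69/16 with gateway 10.0.0.1; a quick worked example of what that /16 prefix implies, using only the values from the log:

    # The network implied by the DHCPv4 lease logged above.
    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.69/16")
    net = iface.network
    print(net)                                      # 10.0.0.0/16
    print(net.broadcast_address)                    # 10.0.255.255
    print(net.num_addresses)                        # 65536
    print(ipaddress.ip_address("10.0.0.1") in net)  # True: the gateway is on-link
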
Jan 30 13:46:10.183751 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:46:10.185012 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:46:10.185040 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:46:10.185958 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:46:10.187485 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:46:10.190210 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:46:10.199866 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:46:10.202781 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:46:10.204522 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:46:10.205732 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:46:10.206747 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:46:10.207778 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:46:10.207815 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:46:10.208923 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:46:10.211444 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:46:10.216132 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:46:10.219201 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:46:10.222161 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:46:10.221531 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:46:10.224551 jq[1433]: false Jan 30 13:46:10.226723 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:46:10.232905 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:46:10.238058 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:46:10.238776 extend-filesystems[1434]: Found loop3 Jan 30 13:46:10.238776 extend-filesystems[1434]: Found loop4 Jan 30 13:46:10.238776 extend-filesystems[1434]: Found loop5 Jan 30 13:46:10.238776 extend-filesystems[1434]: Found sr0 Jan 30 13:46:10.238776 extend-filesystems[1434]: Found vda Jan 30 13:46:10.238776 extend-filesystems[1434]: Found vda1 Jan 30 13:46:10.238776 extend-filesystems[1434]: Found vda2 Jan 30 13:46:10.238776 extend-filesystems[1434]: Found vda3 Jan 30 13:46:10.238776 extend-filesystems[1434]: Found usr Jan 30 13:46:10.238776 extend-filesystems[1434]: Found vda4 Jan 30 13:46:10.238776 extend-filesystems[1434]: Found vda6 Jan 30 13:46:10.238776 extend-filesystems[1434]: Found vda7 Jan 30 13:46:10.238776 extend-filesystems[1434]: Found vda9 Jan 30 13:46:10.252768 dbus-daemon[1432]: [system] SELinux support is enabled Jan 30 13:46:10.244081 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
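
extend-filesystems above walks the available block devices (loop3..loop5, sr0, vda and its partitions) before deciding what to grow. A rough analogue of that enumeration via sysfs, assuming the same virtio disk naming seen in the log:

    # Sketch: enumerate vda partitions via sysfs, as a rough analogue of the
    # "Found vda1 ... Found vda9" lines above. Sizes in /sys are 512-byte sectors.
    from pathlib import Path

    disk = Path("/sys/block/vda")
    for part in sorted(p for p in disk.glob("vda*") if p.is_dir()):
        sectors = int((part / "size").read_text())
        print(f"{part.name}: {sectors * 512} bytes")
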
Jan 30 13:46:10.259491 extend-filesystems[1434]: Checking size of /dev/vda9 Jan 30 13:46:10.259491 extend-filesystems[1434]: Resized partition /dev/vda9 Jan 30 13:46:10.262594 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 13:46:10.250579 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:46:10.262762 extend-filesystems[1450]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:46:10.271756 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1368) Jan 30 13:46:10.257946 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:46:10.258529 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:46:10.267959 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:46:10.275746 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:46:10.278683 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:46:10.282022 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:46:10.289511 jq[1456]: true Jan 30 13:46:10.295714 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:46:10.322693 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 13:46:10.322725 update_engine[1452]: I20250130 13:46:10.307628 1452 main.cc:92] Flatcar Update Engine starting Jan 30 13:46:10.322725 update_engine[1452]: I20250130 13:46:10.313900 1452 update_check_scheduler.cc:74] Next update check in 10m56s Jan 30 13:46:10.296018 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:46:10.296473 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:46:10.296730 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:46:10.323345 jq[1459]: true Jan 30 13:46:10.301944 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:46:10.302237 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:46:10.320269 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:46:10.328506 extend-filesystems[1450]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:46:10.328506 extend-filesystems[1450]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:46:10.328506 extend-filesystems[1450]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 13:46:10.332467 extend-filesystems[1434]: Resized filesystem in /dev/vda9 Jan 30 13:46:10.329885 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:46:10.331990 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:46:10.342870 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:46:10.342900 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:46:10.345870 systemd-logind[1449]: New seat seat0. Jan 30 13:46:10.350310 tar[1458]: linux-amd64/LICENSE Jan 30 13:46:10.350584 tar[1458]: linux-amd64/helm Jan 30 13:46:10.353810 systemd[1]: Started systemd-logind.service - User Login Management. 
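
The on-line resize above grows the root ext4 filesystem on /dev/vda9 from 553472 to 1864699 blocks of 4 KiB; in plain bytes that works out to:

    # Size change implied by the resize2fs output above (4 KiB blocks).
    BLOCK = 4096
    old_blocks, new_blocks = 553_472, 1_864_699

    for label, blocks in (("before", old_blocks), ("after", new_blocks)):
        size = blocks * BLOCK
        print(f"{label}: {size} bytes ({size / 2**30:.2f} GiB)")
    # before: 2267021312 bytes (2.11 GiB); after: 7637807104 bytes (7.11 GiB)
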
Jan 30 13:46:10.359517 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:46:10.361517 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:46:10.361673 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:46:10.363297 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:46:10.363430 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:46:10.372240 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:46:10.406932 locksmithd[1481]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:46:10.410559 bash[1488]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:46:10.410849 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:46:10.413656 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 13:46:10.523427 containerd[1460]: time="2025-01-30T13:46:10.523337502Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:46:10.549421 containerd[1460]: time="2025-01-30T13:46:10.549292968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:46:10.551130 containerd[1460]: time="2025-01-30T13:46:10.551079418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:46:10.551625 containerd[1460]: time="2025-01-30T13:46:10.551189274Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:46:10.551625 containerd[1460]: time="2025-01-30T13:46:10.551225432Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:46:10.551625 containerd[1460]: time="2025-01-30T13:46:10.551436057Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:46:10.551625 containerd[1460]: time="2025-01-30T13:46:10.551451235Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:46:10.551625 containerd[1460]: time="2025-01-30T13:46:10.551534191Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:46:10.551625 containerd[1460]: time="2025-01-30T13:46:10.551554128Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:46:10.551939 containerd[1460]: time="2025-01-30T13:46:10.551919754Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:46:10.552013 containerd[1460]: time="2025-01-30T13:46:10.551984656Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:46:10.552064 containerd[1460]: time="2025-01-30T13:46:10.552050830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:46:10.552114 containerd[1460]: time="2025-01-30T13:46:10.552101575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:46:10.552265 containerd[1460]: time="2025-01-30T13:46:10.552249472Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:46:10.552553 containerd[1460]: time="2025-01-30T13:46:10.552536951Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:46:10.552722 containerd[1460]: time="2025-01-30T13:46:10.552705748Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:46:10.552769 containerd[1460]: time="2025-01-30T13:46:10.552758146Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:46:10.552909 containerd[1460]: time="2025-01-30T13:46:10.552894822Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:46:10.553025 containerd[1460]: time="2025-01-30T13:46:10.553010940Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:46:10.559787 containerd[1460]: time="2025-01-30T13:46:10.559767912Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:46:10.559870 containerd[1460]: time="2025-01-30T13:46:10.559857189Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:46:10.559948 containerd[1460]: time="2025-01-30T13:46:10.559934635Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:46:10.560025 containerd[1460]: time="2025-01-30T13:46:10.560011589Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:46:10.560086 containerd[1460]: time="2025-01-30T13:46:10.560072984Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:46:10.560306 containerd[1460]: time="2025-01-30T13:46:10.560288959Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:46:10.561344 containerd[1460]: time="2025-01-30T13:46:10.560587769Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:46:10.561344 containerd[1460]: time="2025-01-30T13:46:10.560699629Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 30 13:46:10.561344 containerd[1460]: time="2025-01-30T13:46:10.560713986Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:46:10.561344 containerd[1460]: time="2025-01-30T13:46:10.560726580Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:46:10.561344 containerd[1460]: time="2025-01-30T13:46:10.560739664Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:46:10.561344 containerd[1460]: time="2025-01-30T13:46:10.560751136Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:46:10.561344 containerd[1460]: time="2025-01-30T13:46:10.560763018Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:46:10.561344 containerd[1460]: time="2025-01-30T13:46:10.560775642Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:46:10.561344 containerd[1460]: time="2025-01-30T13:46:10.560789227Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:46:10.561344 containerd[1460]: time="2025-01-30T13:46:10.560811138Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:46:10.561344 containerd[1460]: time="2025-01-30T13:46:10.560823532Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:46:10.561344 containerd[1460]: time="2025-01-30T13:46:10.560837027Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:46:10.561344 containerd[1460]: time="2025-01-30T13:46:10.560855902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:46:10.561344 containerd[1460]: time="2025-01-30T13:46:10.560868997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:46:10.561596 containerd[1460]: time="2025-01-30T13:46:10.560880829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:46:10.561596 containerd[1460]: time="2025-01-30T13:46:10.560893072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:46:10.561596 containerd[1460]: time="2025-01-30T13:46:10.560914953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:46:10.561596 containerd[1460]: time="2025-01-30T13:46:10.560927186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:46:10.561596 containerd[1460]: time="2025-01-30T13:46:10.560937826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:46:10.561596 containerd[1460]: time="2025-01-30T13:46:10.560949778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:46:10.561596 containerd[1460]: time="2025-01-30T13:46:10.560962031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 30 13:46:10.561596 containerd[1460]: time="2025-01-30T13:46:10.560980095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:46:10.561596 containerd[1460]: time="2025-01-30T13:46:10.561005723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:46:10.561596 containerd[1460]: time="2025-01-30T13:46:10.561017235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:46:10.561596 containerd[1460]: time="2025-01-30T13:46:10.561029388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:46:10.561596 containerd[1460]: time="2025-01-30T13:46:10.561043895Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:46:10.561596 containerd[1460]: time="2025-01-30T13:46:10.561061718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:46:10.561596 containerd[1460]: time="2025-01-30T13:46:10.561074172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:46:10.561596 containerd[1460]: time="2025-01-30T13:46:10.561085052Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:46:10.561857 containerd[1460]: time="2025-01-30T13:46:10.561135196Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:46:10.561857 containerd[1460]: time="2025-01-30T13:46:10.561151026Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:46:10.561857 containerd[1460]: time="2025-01-30T13:46:10.561161275Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:46:10.561857 containerd[1460]: time="2025-01-30T13:46:10.561172185Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:46:10.561857 containerd[1460]: time="2025-01-30T13:46:10.561181813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:46:10.561857 containerd[1460]: time="2025-01-30T13:46:10.561193145Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:46:10.561857 containerd[1460]: time="2025-01-30T13:46:10.561203033Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:46:10.561857 containerd[1460]: time="2025-01-30T13:46:10.561212331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 13:46:10.562029 containerd[1460]: time="2025-01-30T13:46:10.561561295Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:46:10.562029 containerd[1460]: time="2025-01-30T13:46:10.561656103Z" level=info msg="Connect containerd service" Jan 30 13:46:10.562029 containerd[1460]: time="2025-01-30T13:46:10.561718109Z" level=info msg="using legacy CRI server" Jan 30 13:46:10.562029 containerd[1460]: time="2025-01-30T13:46:10.561727306Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:46:10.562029 containerd[1460]: time="2025-01-30T13:46:10.561828707Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:46:10.562500 containerd[1460]: time="2025-01-30T13:46:10.562466032Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:46:10.562700 
containerd[1460]: time="2025-01-30T13:46:10.562633917Z" level=info msg="Start subscribing containerd event" Jan 30 13:46:10.562841 containerd[1460]: time="2025-01-30T13:46:10.562685443Z" level=info msg="Start recovering state" Jan 30 13:46:10.562841 containerd[1460]: time="2025-01-30T13:46:10.562810358Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:46:10.562886 containerd[1460]: time="2025-01-30T13:46:10.562869659Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:46:10.565740 containerd[1460]: time="2025-01-30T13:46:10.564039994Z" level=info msg="Start event monitor" Jan 30 13:46:10.565740 containerd[1460]: time="2025-01-30T13:46:10.564135653Z" level=info msg="Start snapshots syncer" Jan 30 13:46:10.565740 containerd[1460]: time="2025-01-30T13:46:10.564169877Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:46:10.565740 containerd[1460]: time="2025-01-30T13:46:10.564184695Z" level=info msg="Start streaming server" Jan 30 13:46:10.565740 containerd[1460]: time="2025-01-30T13:46:10.564257982Z" level=info msg="containerd successfully booted in 0.041882s" Jan 30 13:46:10.564504 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:46:10.616884 sshd_keygen[1454]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:46:10.640673 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:46:10.649507 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:46:10.656584 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:46:10.656790 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:46:10.660738 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:46:10.687641 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:46:10.696365 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:46:10.698720 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:46:10.699962 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:46:10.789132 tar[1458]: linux-amd64/README.md Jan 30 13:46:10.805505 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:46:11.569151 systemd-networkd[1402]: eth0: Gained IPv6LL Jan 30 13:46:11.572935 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:46:11.575099 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:46:11.586274 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:46:11.589300 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:46:11.591934 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:46:11.614709 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:46:11.614944 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:46:11.616667 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:46:11.619292 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:46:12.296441 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:46:12.298301 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 30 13:46:12.299811 systemd[1]: Startup finished in 727ms (kernel) + 5.778s (initrd) + 4.330s (userspace) = 10.836s. Jan 30 13:46:12.319409 (kubelet)[1545]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:46:12.754624 kubelet[1545]: E0130 13:46:12.754482 1545 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:46:12.758594 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:46:12.758831 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:46:12.759186 systemd[1]: kubelet.service: Consumed 1.032s CPU time. Jan 30 13:46:20.125103 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:46:20.126313 systemd[1]: Started sshd@0-10.0.0.69:22-10.0.0.1:56306.service - OpenSSH per-connection server daemon (10.0.0.1:56306). Jan 30 13:46:20.181469 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 56306 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:46:20.183717 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:20.193591 systemd-logind[1449]: New session 1 of user core. Jan 30 13:46:20.195064 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:46:20.211322 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:46:20.225774 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:46:20.228585 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:46:20.238679 (systemd)[1563]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:46:20.359493 systemd[1563]: Queued start job for default target default.target. Jan 30 13:46:20.369361 systemd[1563]: Created slice app.slice - User Application Slice. Jan 30 13:46:20.369386 systemd[1563]: Reached target paths.target - Paths. Jan 30 13:46:20.369399 systemd[1563]: Reached target timers.target - Timers. Jan 30 13:46:20.370885 systemd[1563]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:46:20.382840 systemd[1563]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:46:20.383028 systemd[1563]: Reached target sockets.target - Sockets. Jan 30 13:46:20.383053 systemd[1563]: Reached target basic.target - Basic System. Jan 30 13:46:20.383100 systemd[1563]: Reached target default.target - Main User Target. Jan 30 13:46:20.383144 systemd[1563]: Startup finished in 136ms. Jan 30 13:46:20.383455 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:46:20.385117 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:46:20.446793 systemd[1]: Started sshd@1-10.0.0.69:22-10.0.0.1:56318.service - OpenSSH per-connection server daemon (10.0.0.1:56318). Jan 30 13:46:20.490009 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 56318 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:46:20.491599 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:20.495664 systemd-logind[1449]: New session 2 of user core. 
Jan 30 13:46:20.505125 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:46:20.559034 sshd[1574]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:20.574781 systemd[1]: sshd@1-10.0.0.69:22-10.0.0.1:56318.service: Deactivated successfully. Jan 30 13:46:20.576523 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:46:20.578095 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:46:20.590212 systemd[1]: Started sshd@2-10.0.0.69:22-10.0.0.1:56326.service - OpenSSH per-connection server daemon (10.0.0.1:56326). Jan 30 13:46:20.591204 systemd-logind[1449]: Removed session 2. Jan 30 13:46:20.623499 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 56326 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:46:20.624930 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:20.629258 systemd-logind[1449]: New session 3 of user core. Jan 30 13:46:20.640154 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:46:20.690602 sshd[1581]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:20.709960 systemd[1]: sshd@2-10.0.0.69:22-10.0.0.1:56326.service: Deactivated successfully. Jan 30 13:46:20.711862 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:46:20.713652 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:46:20.725379 systemd[1]: Started sshd@3-10.0.0.69:22-10.0.0.1:56330.service - OpenSSH per-connection server daemon (10.0.0.1:56330). Jan 30 13:46:20.726334 systemd-logind[1449]: Removed session 3. Jan 30 13:46:20.758641 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 56330 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:46:20.760408 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:20.764462 systemd-logind[1449]: New session 4 of user core. Jan 30 13:46:20.782343 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:46:20.837735 sshd[1588]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:20.856864 systemd[1]: sshd@3-10.0.0.69:22-10.0.0.1:56330.service: Deactivated successfully. Jan 30 13:46:20.858466 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:46:20.860083 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:46:20.861277 systemd[1]: Started sshd@4-10.0.0.69:22-10.0.0.1:56342.service - OpenSSH per-connection server daemon (10.0.0.1:56342). Jan 30 13:46:20.862098 systemd-logind[1449]: Removed session 4. Jan 30 13:46:20.899169 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 56342 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:46:20.900725 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:20.904862 systemd-logind[1449]: New session 5 of user core. Jan 30 13:46:20.919201 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 30 13:46:21.211066 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:46:21.211448 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:46:21.228724 sudo[1598]: pam_unix(sudo:session): session closed for user root Jan 30 13:46:21.230925 sshd[1595]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:21.245275 systemd[1]: sshd@4-10.0.0.69:22-10.0.0.1:56342.service: Deactivated successfully. Jan 30 13:46:21.247328 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:46:21.248874 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:46:21.258517 systemd[1]: Started sshd@5-10.0.0.69:22-10.0.0.1:56348.service - OpenSSH per-connection server daemon (10.0.0.1:56348). Jan 30 13:46:21.259552 systemd-logind[1449]: Removed session 5. Jan 30 13:46:21.291734 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 56348 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:46:21.293401 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:21.297397 systemd-logind[1449]: New session 6 of user core. Jan 30 13:46:21.314239 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:46:21.368213 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:46:21.368612 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:46:21.372451 sudo[1608]: pam_unix(sudo:session): session closed for user root Jan 30 13:46:21.379048 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:46:21.379395 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:46:21.402335 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:46:21.404398 auditctl[1611]: No rules Jan 30 13:46:21.405759 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:46:21.406068 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:46:21.407958 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:46:21.438287 augenrules[1629]: No rules Jan 30 13:46:21.440502 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:46:21.441867 sudo[1607]: pam_unix(sudo:session): session closed for user root Jan 30 13:46:21.443619 sshd[1603]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:21.456469 systemd[1]: sshd@5-10.0.0.69:22-10.0.0.1:56348.service: Deactivated successfully. Jan 30 13:46:21.458823 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:46:21.460974 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:46:21.472401 systemd[1]: Started sshd@6-10.0.0.69:22-10.0.0.1:56356.service - OpenSSH per-connection server daemon (10.0.0.1:56356). Jan 30 13:46:21.473765 systemd-logind[1449]: Removed session 6. Jan 30 13:46:21.507905 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 56356 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:46:21.509737 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:46:21.514284 systemd-logind[1449]: New session 7 of user core. Jan 30 13:46:21.522185 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 30 13:46:21.575969 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:46:21.576325 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:46:21.873225 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:46:21.874520 (dockerd)[1658]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:46:22.147459 dockerd[1658]: time="2025-01-30T13:46:22.147306423Z" level=info msg="Starting up" Jan 30 13:46:22.257897 dockerd[1658]: time="2025-01-30T13:46:22.257838480Z" level=info msg="Loading containers: start." Jan 30 13:46:22.356016 kernel: Initializing XFRM netlink socket Jan 30 13:46:22.434091 systemd-networkd[1402]: docker0: Link UP Jan 30 13:46:22.456735 dockerd[1658]: time="2025-01-30T13:46:22.456673156Z" level=info msg="Loading containers: done." Jan 30 13:46:22.471075 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2469457201-merged.mount: Deactivated successfully. Jan 30 13:46:22.473108 dockerd[1658]: time="2025-01-30T13:46:22.473066499Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:46:22.473208 dockerd[1658]: time="2025-01-30T13:46:22.473186504Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:46:22.473335 dockerd[1658]: time="2025-01-30T13:46:22.473308052Z" level=info msg="Daemon has completed initialization" Jan 30 13:46:22.511453 dockerd[1658]: time="2025-01-30T13:46:22.511357048Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:46:22.511737 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:46:22.999817 containerd[1460]: time="2025-01-30T13:46:22.999776989Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 30 13:46:23.008959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:46:23.020159 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:46:23.176544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:46:23.180800 (kubelet)[1815]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:46:23.218081 kubelet[1815]: E0130 13:46:23.218037 1815 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:46:23.224237 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:46:23.224478 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:46:24.160507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3097749283.mount: Deactivated successfully. 
Jan 30 13:46:25.025118 containerd[1460]: time="2025-01-30T13:46:25.025057744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:25.025832 containerd[1460]: time="2025-01-30T13:46:25.025766573Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=28674824" Jan 30 13:46:25.026952 containerd[1460]: time="2025-01-30T13:46:25.026914065Z" level=info msg="ImageCreate event name:\"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:25.029667 containerd[1460]: time="2025-01-30T13:46:25.029613027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:25.030752 containerd[1460]: time="2025-01-30T13:46:25.030700486Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"28671624\" in 2.030882259s" Jan 30 13:46:25.030814 containerd[1460]: time="2025-01-30T13:46:25.030755489Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\"" Jan 30 13:46:25.031384 containerd[1460]: time="2025-01-30T13:46:25.031351336Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 30 13:46:26.253410 containerd[1460]: time="2025-01-30T13:46:26.253344350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:26.254316 containerd[1460]: time="2025-01-30T13:46:26.254250369Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=24770711" Jan 30 13:46:26.255704 containerd[1460]: time="2025-01-30T13:46:26.255626710Z" level=info msg="ImageCreate event name:\"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:26.258367 containerd[1460]: time="2025-01-30T13:46:26.258336411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:26.259412 containerd[1460]: time="2025-01-30T13:46:26.259356905Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"26258470\" in 1.227970333s" Jan 30 13:46:26.259412 containerd[1460]: time="2025-01-30T13:46:26.259407751Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\"" Jan 30 13:46:26.259905 
containerd[1460]: time="2025-01-30T13:46:26.259879215Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 30 13:46:28.154851 containerd[1460]: time="2025-01-30T13:46:28.154785290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:28.156755 containerd[1460]: time="2025-01-30T13:46:28.156664053Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=19169759" Jan 30 13:46:28.158453 containerd[1460]: time="2025-01-30T13:46:28.158344044Z" level=info msg="ImageCreate event name:\"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:28.163310 containerd[1460]: time="2025-01-30T13:46:28.163259432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:28.164104 containerd[1460]: time="2025-01-30T13:46:28.164075743Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"20657536\" in 1.904167234s" Jan 30 13:46:28.164146 containerd[1460]: time="2025-01-30T13:46:28.164105439Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\"" Jan 30 13:46:28.164605 containerd[1460]: time="2025-01-30T13:46:28.164581682Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 30 13:46:29.170200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount959854219.mount: Deactivated successfully. 
Jan 30 13:46:29.695088 containerd[1460]: time="2025-01-30T13:46:29.695020441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:29.741338 containerd[1460]: time="2025-01-30T13:46:29.741288480Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909466" Jan 30 13:46:29.781794 containerd[1460]: time="2025-01-30T13:46:29.781749889Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:29.811071 containerd[1460]: time="2025-01-30T13:46:29.809129646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:29.811071 containerd[1460]: time="2025-01-30T13:46:29.810200054Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 1.64559072s" Jan 30 13:46:29.811071 containerd[1460]: time="2025-01-30T13:46:29.810229519Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 30 13:46:29.811585 containerd[1460]: time="2025-01-30T13:46:29.811540007Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 30 13:46:30.311693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1557439407.mount: Deactivated successfully. 
Jan 30 13:46:31.398049 containerd[1460]: time="2025-01-30T13:46:31.397966172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:31.398820 containerd[1460]: time="2025-01-30T13:46:31.398787592Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 30 13:46:31.399830 containerd[1460]: time="2025-01-30T13:46:31.399802025Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:31.402390 containerd[1460]: time="2025-01-30T13:46:31.402359952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:31.403377 containerd[1460]: time="2025-01-30T13:46:31.403345219Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.591749198s" Jan 30 13:46:31.403377 containerd[1460]: time="2025-01-30T13:46:31.403375466Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 30 13:46:31.403787 containerd[1460]: time="2025-01-30T13:46:31.403754898Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 13:46:31.919967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3587604612.mount: Deactivated successfully. 
Jan 30 13:46:31.927249 containerd[1460]: time="2025-01-30T13:46:31.927216966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:31.928084 containerd[1460]: time="2025-01-30T13:46:31.928038036Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 30 13:46:31.929125 containerd[1460]: time="2025-01-30T13:46:31.929104245Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:31.931325 containerd[1460]: time="2025-01-30T13:46:31.931298080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:31.931961 containerd[1460]: time="2025-01-30T13:46:31.931923302Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 528.130273ms" Jan 30 13:46:31.932012 containerd[1460]: time="2025-01-30T13:46:31.931960041Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 13:46:31.932414 containerd[1460]: time="2025-01-30T13:46:31.932385639Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 30 13:46:32.398231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3439461417.mount: Deactivated successfully. Jan 30 13:46:33.474713 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:46:33.487214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:46:33.645055 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:46:33.650633 (kubelet)[2009]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:46:34.241158 kubelet[2009]: E0130 13:46:34.241097 2009 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:46:34.245331 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:46:34.245556 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 13:46:34.797492 containerd[1460]: time="2025-01-30T13:46:34.797391558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:34.852904 containerd[1460]: time="2025-01-30T13:46:34.852833521Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Jan 30 13:46:34.854638 containerd[1460]: time="2025-01-30T13:46:34.854590737Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:34.858311 containerd[1460]: time="2025-01-30T13:46:34.858258365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:34.859576 containerd[1460]: time="2025-01-30T13:46:34.859536702Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.927125225s" Jan 30 13:46:34.859576 containerd[1460]: time="2025-01-30T13:46:34.859573481Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 30 13:46:37.360051 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:46:37.371234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:46:37.394595 systemd[1]: Reloading requested from client PID 2050 ('systemctl') (unit session-7.scope)... Jan 30 13:46:37.394610 systemd[1]: Reloading... Jan 30 13:46:37.485187 zram_generator::config[2090]: No configuration found. Jan 30 13:46:37.710693 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:46:37.785725 systemd[1]: Reloading finished in 390 ms. Jan 30 13:46:37.837953 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:46:37.840894 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:46:37.841154 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:46:37.851403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:46:38.002827 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:46:38.007224 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:46:38.048255 kubelet[2139]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:46:38.048255 kubelet[2139]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 30 13:46:38.048255 kubelet[2139]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:46:38.048732 kubelet[2139]: I0130 13:46:38.048319 2139 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:46:38.325696 kubelet[2139]: I0130 13:46:38.325597 2139 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 13:46:38.325696 kubelet[2139]: I0130 13:46:38.325625 2139 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:46:38.325878 kubelet[2139]: I0130 13:46:38.325870 2139 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 13:46:38.345982 kubelet[2139]: E0130 13:46:38.345935 2139 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.69:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:46:38.346891 kubelet[2139]: I0130 13:46:38.346859 2139 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:46:38.354837 kubelet[2139]: E0130 13:46:38.354796 2139 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:46:38.354837 kubelet[2139]: I0130 13:46:38.354835 2139 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:46:38.359825 kubelet[2139]: I0130 13:46:38.359789 2139 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:46:38.360108 kubelet[2139]: I0130 13:46:38.360070 2139 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:46:38.360265 kubelet[2139]: I0130 13:46:38.360100 2139 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:46:38.360265 kubelet[2139]: I0130 13:46:38.360265 2139 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:46:38.360383 kubelet[2139]: I0130 13:46:38.360274 2139 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 13:46:38.360424 kubelet[2139]: I0130 13:46:38.360410 2139 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:46:38.362872 kubelet[2139]: I0130 13:46:38.362846 2139 kubelet.go:446] "Attempting to sync node with API server" Jan 30 13:46:38.362872 kubelet[2139]: I0130 13:46:38.362863 2139 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:46:38.362927 kubelet[2139]: I0130 13:46:38.362879 2139 kubelet.go:352] "Adding apiserver pod source" Jan 30 13:46:38.362927 kubelet[2139]: I0130 13:46:38.362888 2139 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:46:38.365786 kubelet[2139]: I0130 13:46:38.365709 2139 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:46:38.366263 kubelet[2139]: I0130 13:46:38.366244 2139 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:46:38.367286 kubelet[2139]: W0130 13:46:38.367258 2139 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 30 13:46:38.368167 kubelet[2139]: W0130 13:46:38.368114 2139 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused Jan 30 13:46:38.368222 kubelet[2139]: E0130 13:46:38.368167 2139 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:46:38.369294 kubelet[2139]: I0130 13:46:38.369093 2139 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 13:46:38.369294 kubelet[2139]: I0130 13:46:38.369122 2139 server.go:1287] "Started kubelet" Jan 30 13:46:38.369720 kubelet[2139]: W0130 13:46:38.369682 2139 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused Jan 30 13:46:38.369773 kubelet[2139]: E0130 13:46:38.369729 2139 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:46:38.372743 kubelet[2139]: I0130 13:46:38.371549 2139 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:46:38.372743 kubelet[2139]: I0130 13:46:38.372451 2139 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:46:38.372743 kubelet[2139]: I0130 13:46:38.372622 2139 server.go:490] "Adding debug handlers to kubelet server" Jan 30 13:46:38.372854 kubelet[2139]: I0130 13:46:38.372751 2139 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:46:38.372854 kubelet[2139]: I0130 13:46:38.372800 2139 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:46:38.372854 kubelet[2139]: I0130 13:46:38.372831 2139 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:46:38.374153 kubelet[2139]: I0130 13:46:38.374131 2139 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 13:46:38.374459 kubelet[2139]: E0130 13:46:38.374444 2139 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:46:38.376377 kubelet[2139]: I0130 13:46:38.376337 2139 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:46:38.376428 kubelet[2139]: I0130 13:46:38.376424 2139 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:46:38.376744 kubelet[2139]: E0130 13:46:38.374079 2139 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.69:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.69:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f7c6f3b760d14 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:46:38.36910722 +0000 UTC m=+0.358202789,LastTimestamp:2025-01-30 13:46:38.36910722 +0000 UTC m=+0.358202789,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:46:38.376901 kubelet[2139]: W0130 13:46:38.376813 2139 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused Jan 30 13:46:38.376901 kubelet[2139]: E0130 13:46:38.376855 2139 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:46:38.377128 kubelet[2139]: E0130 13:46:38.377108 2139 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:46:38.377946 kubelet[2139]: E0130 13:46:38.377503 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="200ms" Jan 30 13:46:38.379008 kubelet[2139]: I0130 13:46:38.378981 2139 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:46:38.379072 kubelet[2139]: I0130 13:46:38.379063 2139 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:46:38.379248 kubelet[2139]: I0130 13:46:38.379233 2139 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:46:38.391197 kubelet[2139]: I0130 13:46:38.391171 2139 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 13:46:38.391197 kubelet[2139]: I0130 13:46:38.391189 2139 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 13:46:38.391197 kubelet[2139]: I0130 13:46:38.391204 2139 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:46:38.391922 kubelet[2139]: I0130 13:46:38.391894 2139 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:46:38.393251 kubelet[2139]: I0130 13:46:38.393236 2139 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:46:38.393675 kubelet[2139]: I0130 13:46:38.393317 2139 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 13:46:38.393675 kubelet[2139]: I0130 13:46:38.393350 2139 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
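Annotation: every reflector list/watch and event post above fails the same way, a refused TCP connection to 10.0.0.69:6443, because the kube-apiserver static pod has not started yet. The check those calls are effectively failing can be reproduced with a raw socket probe; a sketch, with the address taken from the log and the timeout an arbitrary choice.

```python
import socket

API_SERVER = ("10.0.0.69", 6443)  # endpoint from the reflector errors above

def probe(addr, timeout=2.0):
    """Return True if a TCP connection can be established, False otherwise."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError as exc:  # ConnectionRefusedError, timeouts, unreachable networks, ...
        print(f"connect {addr[0]}:{addr[1]}: {exc}")
        return False

if __name__ == "__main__":
    print("api server reachable:", probe(API_SERVER))
```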
Jan 30 13:46:38.393675 kubelet[2139]: I0130 13:46:38.393358 2139 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 13:46:38.393675 kubelet[2139]: E0130 13:46:38.393404 2139 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:46:38.474670 kubelet[2139]: E0130 13:46:38.474612 2139 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:46:38.494132 kubelet[2139]: E0130 13:46:38.494072 2139 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:46:38.574740 kubelet[2139]: E0130 13:46:38.574695 2139 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:46:38.578328 kubelet[2139]: E0130 13:46:38.578235 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="400ms" Jan 30 13:46:38.675506 kubelet[2139]: E0130 13:46:38.675444 2139 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:46:38.694848 kubelet[2139]: E0130 13:46:38.694813 2139 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:46:38.736315 kubelet[2139]: I0130 13:46:38.736263 2139 policy_none.go:49] "None policy: Start" Jan 30 13:46:38.736315 kubelet[2139]: I0130 13:46:38.736303 2139 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 13:46:38.736315 kubelet[2139]: I0130 13:46:38.736319 2139 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:46:38.736525 kubelet[2139]: W0130 13:46:38.736393 2139 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused Jan 30 13:46:38.736525 kubelet[2139]: E0130 13:46:38.736444 2139 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:46:38.742007 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:46:38.754887 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:46:38.757800 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
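Annotation: systemd has just created the kubepods, kubepods-burstable, and kubepods-besteffort slices that back the kubelet's QoS cgroup tree. With "CgroupVersion":2 and the systemd cgroup driver (both from the nodeConfig above), those slices should appear as nested directories under the unified hierarchy. A small check, assuming the default /sys/fs/cgroup mount point and the usual systemd slice nesting:

```python
from pathlib import Path

# Default cgroup v2 mount point; adjust if the unified hierarchy is mounted elsewhere.
CGROUP_ROOT = Path("/sys/fs/cgroup")

# systemd nests "kubepods-burstable.slice" under its parent "kubepods.slice".
SLICES = [
    "kubepods.slice",
    "kubepods.slice/kubepods-burstable.slice",
    "kubepods.slice/kubepods-besteffort.slice",
]

for slice_path in SLICES:
    path = CGROUP_ROOT / slice_path
    print(f"{slice_path:<45} {'present' if path.is_dir() else 'missing'}")
```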
Jan 30 13:46:38.773078 kubelet[2139]: I0130 13:46:38.773040 2139 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:46:38.773360 kubelet[2139]: I0130 13:46:38.773297 2139 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:46:38.773360 kubelet[2139]: I0130 13:46:38.773319 2139 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:46:38.773569 kubelet[2139]: I0130 13:46:38.773550 2139 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:46:38.774416 kubelet[2139]: E0130 13:46:38.774380 2139 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 30 13:46:38.774416 kubelet[2139]: E0130 13:46:38.774411 2139 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 30 13:46:38.874875 kubelet[2139]: I0130 13:46:38.874744 2139 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:46:38.875166 kubelet[2139]: E0130 13:46:38.875121 2139 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost" Jan 30 13:46:38.969244 kubelet[2139]: E0130 13:46:38.969116 2139 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.69:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.69:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f7c6f3b760d14 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:46:38.36910722 +0000 UTC m=+0.358202789,LastTimestamp:2025-01-30 13:46:38.36910722 +0000 UTC m=+0.358202789,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:46:38.978669 kubelet[2139]: E0130 13:46:38.978637 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="800ms" Jan 30 13:46:39.076872 kubelet[2139]: I0130 13:46:39.076845 2139 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:46:39.077301 kubelet[2139]: E0130 13:46:39.077130 2139 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost" Jan 30 13:46:39.102286 systemd[1]: Created slice kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice - libcontainer container kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice. 
Jan 30 13:46:39.119761 kubelet[2139]: E0130 13:46:39.119739 2139 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:46:39.122876 systemd[1]: Created slice kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice - libcontainer container kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice. Jan 30 13:46:39.124310 kubelet[2139]: E0130 13:46:39.124293 2139 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:46:39.126490 systemd[1]: Created slice kubepods-burstable-podc98edd630725169b054af692ca042f83.slice - libcontainer container kubepods-burstable-podc98edd630725169b054af692ca042f83.slice. Jan 30 13:46:39.127779 kubelet[2139]: E0130 13:46:39.127760 2139 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:46:39.180113 kubelet[2139]: I0130 13:46:39.180092 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:39.180172 kubelet[2139]: I0130 13:46:39.180124 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:39.180172 kubelet[2139]: I0130 13:46:39.180154 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c98edd630725169b054af692ca042f83-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c98edd630725169b054af692ca042f83\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:39.180172 kubelet[2139]: I0130 13:46:39.180170 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c98edd630725169b054af692ca042f83-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c98edd630725169b054af692ca042f83\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:39.180266 kubelet[2139]: I0130 13:46:39.180186 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:39.180266 kubelet[2139]: I0130 13:46:39.180204 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:39.180266 kubelet[2139]: I0130 13:46:39.180223 2139 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:39.180266 kubelet[2139]: I0130 13:46:39.180250 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:46:39.180266 kubelet[2139]: I0130 13:46:39.180264 2139 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c98edd630725169b054af692ca042f83-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c98edd630725169b054af692ca042f83\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:39.322949 kubelet[2139]: W0130 13:46:39.322909 2139 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused Jan 30 13:46:39.323056 kubelet[2139]: E0130 13:46:39.322957 2139 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:46:39.420761 kubelet[2139]: E0130 13:46:39.420682 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:39.421204 containerd[1460]: time="2025-01-30T13:46:39.421170023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,}" Jan 30 13:46:39.425378 kubelet[2139]: E0130 13:46:39.425361 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:39.427235 containerd[1460]: time="2025-01-30T13:46:39.427205762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,}" Jan 30 13:46:39.428437 kubelet[2139]: E0130 13:46:39.428411 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:39.428805 containerd[1460]: time="2025-01-30T13:46:39.428686138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c98edd630725169b054af692ca042f83,Namespace:kube-system,Attempt:0,}" Jan 30 13:46:39.478843 kubelet[2139]: I0130 13:46:39.478822 2139 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:46:39.479108 kubelet[2139]: E0130 13:46:39.479081 2139 kubelet_node_status.go:108] "Unable to register node with API server" err="Post 
\"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost" Jan 30 13:46:39.482466 kubelet[2139]: W0130 13:46:39.482416 2139 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused Jan 30 13:46:39.482533 kubelet[2139]: E0130 13:46:39.482470 2139 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:46:39.779633 kubelet[2139]: E0130 13:46:39.779491 2139 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.69:6443: connect: connection refused" interval="1.6s" Jan 30 13:46:39.920130 kubelet[2139]: W0130 13:46:39.920079 2139 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused Jan 30 13:46:39.920130 kubelet[2139]: E0130 13:46:39.920130 2139 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:46:40.106147 kubelet[2139]: W0130 13:46:40.105979 2139 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.69:6443: connect: connection refused Jan 30 13:46:40.106147 kubelet[2139]: E0130 13:46:40.106071 2139 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:46:40.280611 kubelet[2139]: I0130 13:46:40.280575 2139 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:46:40.281026 kubelet[2139]: E0130 13:46:40.280974 2139 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.69:6443/api/v1/nodes\": dial tcp 10.0.0.69:6443: connect: connection refused" node="localhost" Jan 30 13:46:40.373392 kubelet[2139]: E0130 13:46:40.373209 2139 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.69:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.69:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:46:40.411822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3480646828.mount: Deactivated successfully. 
Jan 30 13:46:40.420598 containerd[1460]: time="2025-01-30T13:46:40.420516454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:46:40.423036 containerd[1460]: time="2025-01-30T13:46:40.422961339Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:46:40.424482 containerd[1460]: time="2025-01-30T13:46:40.424420636Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:46:40.425567 containerd[1460]: time="2025-01-30T13:46:40.425536158Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:46:40.426632 containerd[1460]: time="2025-01-30T13:46:40.426575597Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:46:40.427820 containerd[1460]: time="2025-01-30T13:46:40.427739650Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:46:40.428970 containerd[1460]: time="2025-01-30T13:46:40.428921587Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:46:40.431159 containerd[1460]: time="2025-01-30T13:46:40.431110432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:46:40.434146 containerd[1460]: time="2025-01-30T13:46:40.434100469Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.005369977s" Jan 30 13:46:40.435095 containerd[1460]: time="2025-01-30T13:46:40.435057283Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.013809525s" Jan 30 13:46:40.438861 containerd[1460]: time="2025-01-30T13:46:40.438792619Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.011508249s" Jan 30 13:46:40.609903 containerd[1460]: time="2025-01-30T13:46:40.609790771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:46:40.611229 containerd[1460]: time="2025-01-30T13:46:40.609858899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:46:40.611229 containerd[1460]: time="2025-01-30T13:46:40.610907295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:40.611229 containerd[1460]: time="2025-01-30T13:46:40.611024936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:40.611229 containerd[1460]: time="2025-01-30T13:46:40.610746503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:46:40.611229 containerd[1460]: time="2025-01-30T13:46:40.610848495Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:46:40.611229 containerd[1460]: time="2025-01-30T13:46:40.610870396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:40.611229 containerd[1460]: time="2025-01-30T13:46:40.610948202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:40.614086 containerd[1460]: time="2025-01-30T13:46:40.613454863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:46:40.614086 containerd[1460]: time="2025-01-30T13:46:40.614044338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:46:40.614209 containerd[1460]: time="2025-01-30T13:46:40.614063073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:40.614209 containerd[1460]: time="2025-01-30T13:46:40.614149766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:40.634518 systemd[1]: Started cri-containerd-8d2e678a60685d5f2abed930e7ef3c163b3fc57ff8b669fa806ba76665841407.scope - libcontainer container 8d2e678a60685d5f2abed930e7ef3c163b3fc57ff8b669fa806ba76665841407. Jan 30 13:46:40.641055 systemd[1]: Started cri-containerd-24b3d926e7621fffd2602cc9084b385a8cc46e9addaaa1ad32c42dde5478e56b.scope - libcontainer container 24b3d926e7621fffd2602cc9084b385a8cc46e9addaaa1ad32c42dde5478e56b. Jan 30 13:46:40.643649 systemd[1]: Started cri-containerd-718be77ed26f0020985778b8d0c01904c078998d1b80abc34c570c8d52b8d24f.scope - libcontainer container 718be77ed26f0020985778b8d0c01904c078998d1b80abc34c570c8d52b8d24f. 
Jan 30 13:46:40.675491 containerd[1460]: time="2025-01-30T13:46:40.675444054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d2e678a60685d5f2abed930e7ef3c163b3fc57ff8b669fa806ba76665841407\"" Jan 30 13:46:40.676470 kubelet[2139]: E0130 13:46:40.676439 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:40.678261 containerd[1460]: time="2025-01-30T13:46:40.678234508Z" level=info msg="CreateContainer within sandbox \"8d2e678a60685d5f2abed930e7ef3c163b3fc57ff8b669fa806ba76665841407\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:46:40.687400 containerd[1460]: time="2025-01-30T13:46:40.687350724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c98edd630725169b054af692ca042f83,Namespace:kube-system,Attempt:0,} returns sandbox id \"718be77ed26f0020985778b8d0c01904c078998d1b80abc34c570c8d52b8d24f\"" Jan 30 13:46:40.687893 kubelet[2139]: E0130 13:46:40.687870 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:40.689748 containerd[1460]: time="2025-01-30T13:46:40.689702033Z" level=info msg="CreateContainer within sandbox \"718be77ed26f0020985778b8d0c01904c078998d1b80abc34c570c8d52b8d24f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:46:40.694364 containerd[1460]: time="2025-01-30T13:46:40.694292532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,} returns sandbox id \"24b3d926e7621fffd2602cc9084b385a8cc46e9addaaa1ad32c42dde5478e56b\"" Jan 30 13:46:40.695053 kubelet[2139]: E0130 13:46:40.695029 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:40.696673 containerd[1460]: time="2025-01-30T13:46:40.696635767Z" level=info msg="CreateContainer within sandbox \"24b3d926e7621fffd2602cc9084b385a8cc46e9addaaa1ad32c42dde5478e56b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:46:40.701080 containerd[1460]: time="2025-01-30T13:46:40.701051938Z" level=info msg="CreateContainer within sandbox \"8d2e678a60685d5f2abed930e7ef3c163b3fc57ff8b669fa806ba76665841407\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b69759a74f0527373a32af912a335dcfecc990bf9c87acd43a5bd8b117d16f64\"" Jan 30 13:46:40.701700 containerd[1460]: time="2025-01-30T13:46:40.701680267Z" level=info msg="StartContainer for \"b69759a74f0527373a32af912a335dcfecc990bf9c87acd43a5bd8b117d16f64\"" Jan 30 13:46:40.712894 containerd[1460]: time="2025-01-30T13:46:40.712840837Z" level=info msg="CreateContainer within sandbox \"718be77ed26f0020985778b8d0c01904c078998d1b80abc34c570c8d52b8d24f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ba6ab209c92c50e0b639452626720377d1d4df91c51b3a6189d18637f3130d08\"" Jan 30 13:46:40.713479 containerd[1460]: time="2025-01-30T13:46:40.713448076Z" level=info msg="StartContainer for \"ba6ab209c92c50e0b639452626720377d1d4df91c51b3a6189d18637f3130d08\"" Jan 30 13:46:40.724237 
containerd[1460]: time="2025-01-30T13:46:40.724145998Z" level=info msg="CreateContainer within sandbox \"24b3d926e7621fffd2602cc9084b385a8cc46e9addaaa1ad32c42dde5478e56b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c75ef7fc89e24e088e5b8ac08d7dc96f5f19ee69fbcf048ef835e4f325f03b30\"" Jan 30 13:46:40.724646 containerd[1460]: time="2025-01-30T13:46:40.724618003Z" level=info msg="StartContainer for \"c75ef7fc89e24e088e5b8ac08d7dc96f5f19ee69fbcf048ef835e4f325f03b30\"" Jan 30 13:46:40.729188 systemd[1]: Started cri-containerd-b69759a74f0527373a32af912a335dcfecc990bf9c87acd43a5bd8b117d16f64.scope - libcontainer container b69759a74f0527373a32af912a335dcfecc990bf9c87acd43a5bd8b117d16f64. Jan 30 13:46:40.744167 systemd[1]: Started cri-containerd-ba6ab209c92c50e0b639452626720377d1d4df91c51b3a6189d18637f3130d08.scope - libcontainer container ba6ab209c92c50e0b639452626720377d1d4df91c51b3a6189d18637f3130d08. Jan 30 13:46:40.755127 systemd[1]: Started cri-containerd-c75ef7fc89e24e088e5b8ac08d7dc96f5f19ee69fbcf048ef835e4f325f03b30.scope - libcontainer container c75ef7fc89e24e088e5b8ac08d7dc96f5f19ee69fbcf048ef835e4f325f03b30. Jan 30 13:46:40.783240 containerd[1460]: time="2025-01-30T13:46:40.783097483Z" level=info msg="StartContainer for \"b69759a74f0527373a32af912a335dcfecc990bf9c87acd43a5bd8b117d16f64\" returns successfully" Jan 30 13:46:40.796549 containerd[1460]: time="2025-01-30T13:46:40.796425137Z" level=info msg="StartContainer for \"ba6ab209c92c50e0b639452626720377d1d4df91c51b3a6189d18637f3130d08\" returns successfully" Jan 30 13:46:40.800752 containerd[1460]: time="2025-01-30T13:46:40.800719340Z" level=info msg="StartContainer for \"c75ef7fc89e24e088e5b8ac08d7dc96f5f19ee69fbcf048ef835e4f325f03b30\" returns successfully" Jan 30 13:46:41.414030 kubelet[2139]: E0130 13:46:41.409767 2139 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:46:41.414030 kubelet[2139]: E0130 13:46:41.409898 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:41.414030 kubelet[2139]: E0130 13:46:41.412723 2139 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:46:41.414030 kubelet[2139]: E0130 13:46:41.412816 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:41.420021 kubelet[2139]: E0130 13:46:41.417366 2139 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:46:41.420021 kubelet[2139]: E0130 13:46:41.417466 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:41.697108 kubelet[2139]: E0130 13:46:41.696980 2139 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 30 13:46:41.882118 kubelet[2139]: I0130 13:46:41.882078 2139 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:46:41.886941 kubelet[2139]: I0130 13:46:41.886918 2139 
kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 30 13:46:41.887051 kubelet[2139]: E0130 13:46:41.886949 2139 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 30 13:46:41.977179 kubelet[2139]: I0130 13:46:41.977052 2139 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:41.980940 kubelet[2139]: E0130 13:46:41.980919 2139 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:41.980940 kubelet[2139]: I0130 13:46:41.980937 2139 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 13:46:41.982516 kubelet[2139]: E0130 13:46:41.982496 2139 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 30 13:46:41.982516 kubelet[2139]: I0130 13:46:41.982512 2139 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:41.983608 kubelet[2139]: E0130 13:46:41.983581 2139 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:42.365366 kubelet[2139]: I0130 13:46:42.365247 2139 apiserver.go:52] "Watching apiserver" Jan 30 13:46:42.377492 kubelet[2139]: I0130 13:46:42.377454 2139 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:46:42.416886 kubelet[2139]: I0130 13:46:42.416856 2139 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:42.417198 kubelet[2139]: I0130 13:46:42.416942 2139 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 13:46:42.419380 kubelet[2139]: E0130 13:46:42.418973 2139 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:42.419380 kubelet[2139]: E0130 13:46:42.419137 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:42.421238 kubelet[2139]: E0130 13:46:42.421190 2139 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 30 13:46:42.421464 kubelet[2139]: E0130 13:46:42.421393 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:43.418420 kubelet[2139]: I0130 13:46:43.418385 2139 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 13:46:43.418835 kubelet[2139]: I0130 13:46:43.418496 2139 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:43.424147 
kubelet[2139]: E0130 13:46:43.424107 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:43.424414 kubelet[2139]: E0130 13:46:43.424393 2139 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:43.946076 systemd[1]: Reloading requested from client PID 2415 ('systemctl') (unit session-7.scope)... Jan 30 13:46:43.946093 systemd[1]: Reloading... Jan 30 13:46:44.042051 zram_generator::config[2460]: No configuration found. Jan 30 13:46:44.149639 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:46:44.239587 systemd[1]: Reloading finished in 293 ms. Jan 30 13:46:44.292286 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:46:44.313500 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:46:44.313827 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:46:44.324520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:46:44.490800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:46:44.500419 (kubelet)[2499]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:46:44.544462 kubelet[2499]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:46:44.544462 kubelet[2499]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 13:46:44.544462 kubelet[2499]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:46:44.544909 kubelet[2499]: I0130 13:46:44.544545 2499 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:46:44.551474 kubelet[2499]: I0130 13:46:44.551444 2499 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 13:46:44.551474 kubelet[2499]: I0130 13:46:44.551467 2499 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:46:44.551737 kubelet[2499]: I0130 13:46:44.551714 2499 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 13:46:44.553074 kubelet[2499]: I0130 13:46:44.553055 2499 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 30 13:46:44.556025 kubelet[2499]: I0130 13:46:44.555827 2499 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:46:44.560591 kubelet[2499]: E0130 13:46:44.560401 2499 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:46:44.560591 kubelet[2499]: I0130 13:46:44.560426 2499 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:46:44.565272 kubelet[2499]: I0130 13:46:44.565240 2499 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:46:44.565574 kubelet[2499]: I0130 13:46:44.565531 2499 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:46:44.565774 kubelet[2499]: I0130 13:46:44.565569 2499 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:46:44.565854 kubelet[2499]: I0130 13:46:44.565778 2499 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:46:44.565854 kubelet[2499]: I0130 13:46:44.565791 2499 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 13:46:44.565854 kubelet[2499]: I0130 13:46:44.565840 2499 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:46:44.566050 kubelet[2499]: I0130 13:46:44.566035 2499 kubelet.go:446] "Attempting to sync node with API server" Jan 30 13:46:44.566084 kubelet[2499]: I0130 13:46:44.566053 2499 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:46:44.566084 kubelet[2499]: I0130 13:46:44.566072 2499 kubelet.go:352] "Adding apiserver pod source" Jan 30 13:46:44.566128 kubelet[2499]: I0130 13:46:44.566084 2499 apiserver.go:42] "Waiting for node sync before 
watching apiserver pods" Jan 30 13:46:44.566951 kubelet[2499]: I0130 13:46:44.566875 2499 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:46:44.567491 kubelet[2499]: I0130 13:46:44.567449 2499 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:46:44.568262 kubelet[2499]: I0130 13:46:44.568229 2499 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 13:46:44.568298 kubelet[2499]: I0130 13:46:44.568273 2499 server.go:1287] "Started kubelet" Jan 30 13:46:44.569273 kubelet[2499]: I0130 13:46:44.569117 2499 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:46:44.569533 kubelet[2499]: I0130 13:46:44.569502 2499 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:46:44.570688 kubelet[2499]: I0130 13:46:44.570668 2499 server.go:490] "Adding debug handlers to kubelet server" Jan 30 13:46:44.570688 kubelet[2499]: I0130 13:46:44.570683 2499 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:46:44.571813 kubelet[2499]: I0130 13:46:44.571773 2499 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:46:44.577071 kubelet[2499]: E0130 13:46:44.576964 2499 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:46:44.577071 kubelet[2499]: I0130 13:46:44.577011 2499 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:46:44.577174 kubelet[2499]: I0130 13:46:44.577162 2499 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 13:46:44.577423 kubelet[2499]: I0130 13:46:44.577409 2499 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:46:44.577607 kubelet[2499]: I0130 13:46:44.577596 2499 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:46:44.585466 kubelet[2499]: I0130 13:46:44.584143 2499 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:46:44.585466 kubelet[2499]: I0130 13:46:44.584168 2499 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:46:44.585466 kubelet[2499]: I0130 13:46:44.584466 2499 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:46:44.585466 kubelet[2499]: I0130 13:46:44.585214 2499 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:46:44.590588 kubelet[2499]: E0130 13:46:44.589300 2499 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:46:44.590588 kubelet[2499]: I0130 13:46:44.590294 2499 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:46:44.590588 kubelet[2499]: I0130 13:46:44.590324 2499 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 13:46:44.590588 kubelet[2499]: I0130 13:46:44.590347 2499 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 30 13:46:44.590588 kubelet[2499]: I0130 13:46:44.590356 2499 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 13:46:44.590588 kubelet[2499]: E0130 13:46:44.590410 2499 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:46:44.616606 kubelet[2499]: I0130 13:46:44.616578 2499 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 13:46:44.616606 kubelet[2499]: I0130 13:46:44.616598 2499 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 13:46:44.616769 kubelet[2499]: I0130 13:46:44.616620 2499 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:46:44.616797 kubelet[2499]: I0130 13:46:44.616769 2499 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:46:44.616797 kubelet[2499]: I0130 13:46:44.616778 2499 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:46:44.616797 kubelet[2499]: I0130 13:46:44.616795 2499 policy_none.go:49] "None policy: Start" Jan 30 13:46:44.616891 kubelet[2499]: I0130 13:46:44.616804 2499 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 13:46:44.616891 kubelet[2499]: I0130 13:46:44.616814 2499 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:46:44.616948 kubelet[2499]: I0130 13:46:44.616894 2499 state_mem.go:75] "Updated machine memory state" Jan 30 13:46:44.621144 kubelet[2499]: I0130 13:46:44.621124 2499 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:46:44.621452 kubelet[2499]: I0130 13:46:44.621321 2499 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:46:44.621452 kubelet[2499]: I0130 13:46:44.621338 2499 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:46:44.621546 kubelet[2499]: I0130 13:46:44.621476 2499 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:46:44.622369 kubelet[2499]: E0130 13:46:44.622346 2499 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 30 13:46:44.691806 kubelet[2499]: I0130 13:46:44.691746 2499 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:44.692069 kubelet[2499]: I0130 13:46:44.691763 2499 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:44.692224 kubelet[2499]: I0130 13:46:44.691796 2499 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 13:46:44.698701 kubelet[2499]: E0130 13:46:44.698678 2499 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 30 13:46:44.699349 kubelet[2499]: E0130 13:46:44.699321 2499 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:44.726389 kubelet[2499]: I0130 13:46:44.726366 2499 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:46:44.732062 kubelet[2499]: I0130 13:46:44.732019 2499 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Jan 30 13:46:44.732208 kubelet[2499]: I0130 13:46:44.732097 2499 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 30 13:46:44.879512 kubelet[2499]: I0130 13:46:44.879387 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:44.879512 kubelet[2499]: I0130 13:46:44.879421 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:44.879512 kubelet[2499]: I0130 13:46:44.879444 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:44.879512 kubelet[2499]: I0130 13:46:44.879463 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:46:44.879512 kubelet[2499]: I0130 13:46:44.879478 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:44.879721 kubelet[2499]: I0130 13:46:44.879494 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c98edd630725169b054af692ca042f83-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c98edd630725169b054af692ca042f83\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:44.879721 kubelet[2499]: I0130 13:46:44.879510 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:46:44.879721 kubelet[2499]: I0130 13:46:44.879524 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c98edd630725169b054af692ca042f83-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c98edd630725169b054af692ca042f83\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:44.879721 kubelet[2499]: I0130 13:46:44.879539 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c98edd630725169b054af692ca042f83-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c98edd630725169b054af692ca042f83\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:44.999881 kubelet[2499]: E0130 13:46:44.999831 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:44.999881 kubelet[2499]: E0130 13:46:44.999892 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:45.000095 kubelet[2499]: E0130 13:46:44.999929 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:45.567224 kubelet[2499]: I0130 13:46:45.567177 2499 apiserver.go:52] "Watching apiserver" Jan 30 13:46:45.578448 kubelet[2499]: I0130 13:46:45.578403 2499 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:46:45.600357 kubelet[2499]: I0130 13:46:45.600195 2499 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:45.601029 kubelet[2499]: I0130 13:46:45.600580 2499 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 13:46:45.601029 kubelet[2499]: E0130 13:46:45.600585 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:45.606315 kubelet[2499]: E0130 13:46:45.606275 2499 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 30 13:46:45.606468 kubelet[2499]: E0130 13:46:45.606445 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:45.606537 kubelet[2499]: E0130 13:46:45.606517 2499 kubelet.go:3202] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 13:46:45.606612 kubelet[2499]: E0130 13:46:45.606590 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:45.631648 kubelet[2499]: I0130 13:46:45.630347 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.630326448 podStartE2EDuration="2.630326448s" podCreationTimestamp="2025-01-30 13:46:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:46:45.619373088 +0000 UTC m=+1.114294669" watchObservedRunningTime="2025-01-30 13:46:45.630326448 +0000 UTC m=+1.125248019" Jan 30 13:46:45.631648 kubelet[2499]: I0130 13:46:45.631055 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.6310469300000001 podStartE2EDuration="1.63104693s" podCreationTimestamp="2025-01-30 13:46:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:46:45.630223471 +0000 UTC m=+1.125145052" watchObservedRunningTime="2025-01-30 13:46:45.63104693 +0000 UTC m=+1.125968521" Jan 30 13:46:45.640536 kubelet[2499]: I0130 13:46:45.640400 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.640383331 podStartE2EDuration="2.640383331s" podCreationTimestamp="2025-01-30 13:46:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:46:45.640295703 +0000 UTC m=+1.135217284" watchObservedRunningTime="2025-01-30 13:46:45.640383331 +0000 UTC m=+1.135304912" Jan 30 13:46:46.601168 kubelet[2499]: E0130 13:46:46.601116 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:46.601168 kubelet[2499]: E0130 13:46:46.601171 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:48.962406 sudo[1640]: pam_unix(sudo:session): session closed for user root Jan 30 13:46:48.964277 sshd[1637]: pam_unix(sshd:session): session closed for user core Jan 30 13:46:48.967769 systemd[1]: sshd@6-10.0.0.69:22-10.0.0.1:56356.service: Deactivated successfully. Jan 30 13:46:48.969729 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:46:48.969903 systemd[1]: session-7.scope: Consumed 4.557s CPU time, 162.1M memory peak, 0B memory swap peak. Jan 30 13:46:48.970353 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:46:48.971301 systemd-logind[1449]: Removed session 7. Jan 30 13:46:49.655965 kubelet[2499]: I0130 13:46:49.655935 2499 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:46:49.656466 containerd[1460]: time="2025-01-30T13:46:49.656239624Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 30 13:46:49.656762 kubelet[2499]: I0130 13:46:49.656744 2499 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:46:50.395641 systemd[1]: Created slice kubepods-besteffort-podd9d7c273_d46f_43e4_bea4_90d5ceae348f.slice - libcontainer container kubepods-besteffort-podd9d7c273_d46f_43e4_bea4_90d5ceae348f.slice. Jan 30 13:46:50.415192 kubelet[2499]: I0130 13:46:50.415144 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9d7c273-d46f-43e4-bea4-90d5ceae348f-lib-modules\") pod \"kube-proxy-mxdbf\" (UID: \"d9d7c273-d46f-43e4-bea4-90d5ceae348f\") " pod="kube-system/kube-proxy-mxdbf" Jan 30 13:46:50.415192 kubelet[2499]: I0130 13:46:50.415187 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d9d7c273-d46f-43e4-bea4-90d5ceae348f-kube-proxy\") pod \"kube-proxy-mxdbf\" (UID: \"d9d7c273-d46f-43e4-bea4-90d5ceae348f\") " pod="kube-system/kube-proxy-mxdbf" Jan 30 13:46:50.415342 kubelet[2499]: I0130 13:46:50.415203 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9d7c273-d46f-43e4-bea4-90d5ceae348f-xtables-lock\") pod \"kube-proxy-mxdbf\" (UID: \"d9d7c273-d46f-43e4-bea4-90d5ceae348f\") " pod="kube-system/kube-proxy-mxdbf" Jan 30 13:46:50.415342 kubelet[2499]: I0130 13:46:50.415221 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z229p\" (UniqueName: \"kubernetes.io/projected/d9d7c273-d46f-43e4-bea4-90d5ceae348f-kube-api-access-z229p\") pod \"kube-proxy-mxdbf\" (UID: \"d9d7c273-d46f-43e4-bea4-90d5ceae348f\") " pod="kube-system/kube-proxy-mxdbf" Jan 30 13:46:50.458980 kubelet[2499]: E0130 13:46:50.458912 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:50.520147 kubelet[2499]: E0130 13:46:50.520096 2499 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 30 13:46:50.520147 kubelet[2499]: E0130 13:46:50.520133 2499 projected.go:194] Error preparing data for projected volume kube-api-access-z229p for pod kube-system/kube-proxy-mxdbf: configmap "kube-root-ca.crt" not found Jan 30 13:46:50.520324 kubelet[2499]: E0130 13:46:50.520190 2499 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9d7c273-d46f-43e4-bea4-90d5ceae348f-kube-api-access-z229p podName:d9d7c273-d46f-43e4-bea4-90d5ceae348f nodeName:}" failed. No retries permitted until 2025-01-30 13:46:51.020169943 +0000 UTC m=+6.515091524 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-z229p" (UniqueName: "kubernetes.io/projected/d9d7c273-d46f-43e4-bea4-90d5ceae348f-kube-api-access-z229p") pod "kube-proxy-mxdbf" (UID: "d9d7c273-d46f-43e4-bea4-90d5ceae348f") : configmap "kube-root-ca.crt" not found Jan 30 13:46:50.606202 kubelet[2499]: E0130 13:46:50.606168 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:50.709669 systemd[1]: Created slice kubepods-besteffort-pod8a85c304_e0df_4ff1_9478_7a403e08b6e3.slice - libcontainer container kubepods-besteffort-pod8a85c304_e0df_4ff1_9478_7a403e08b6e3.slice. Jan 30 13:46:50.717343 kubelet[2499]: I0130 13:46:50.717283 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8a85c304-e0df-4ff1-9478-7a403e08b6e3-var-lib-calico\") pod \"tigera-operator-7d68577dc5-dq8lw\" (UID: \"8a85c304-e0df-4ff1-9478-7a403e08b6e3\") " pod="tigera-operator/tigera-operator-7d68577dc5-dq8lw" Jan 30 13:46:50.717343 kubelet[2499]: I0130 13:46:50.717340 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlpmc\" (UniqueName: \"kubernetes.io/projected/8a85c304-e0df-4ff1-9478-7a403e08b6e3-kube-api-access-wlpmc\") pod \"tigera-operator-7d68577dc5-dq8lw\" (UID: \"8a85c304-e0df-4ff1-9478-7a403e08b6e3\") " pod="tigera-operator/tigera-operator-7d68577dc5-dq8lw" Jan 30 13:46:51.013763 containerd[1460]: time="2025-01-30T13:46:51.013719422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-dq8lw,Uid:8a85c304-e0df-4ff1-9478-7a403e08b6e3,Namespace:tigera-operator,Attempt:0,}" Jan 30 13:46:51.057883 containerd[1460]: time="2025-01-30T13:46:51.057753497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:46:51.057883 containerd[1460]: time="2025-01-30T13:46:51.057843178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:46:51.057883 containerd[1460]: time="2025-01-30T13:46:51.057858727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:51.058129 containerd[1460]: time="2025-01-30T13:46:51.057980879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:51.081219 systemd[1]: Started cri-containerd-bf8a6a8680ea5450c302aa9be86940b80af5c77ec439b0c70bc34f2cdeaf79c7.scope - libcontainer container bf8a6a8680ea5450c302aa9be86940b80af5c77ec439b0c70bc34f2cdeaf79c7. 
Jan 30 13:46:51.119187 containerd[1460]: time="2025-01-30T13:46:51.119129908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-dq8lw,Uid:8a85c304-e0df-4ff1-9478-7a403e08b6e3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bf8a6a8680ea5450c302aa9be86940b80af5c77ec439b0c70bc34f2cdeaf79c7\"" Jan 30 13:46:51.121525 containerd[1460]: time="2025-01-30T13:46:51.121252760Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 13:46:51.309585 kubelet[2499]: E0130 13:46:51.309463 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:51.310283 containerd[1460]: time="2025-01-30T13:46:51.309913031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mxdbf,Uid:d9d7c273-d46f-43e4-bea4-90d5ceae348f,Namespace:kube-system,Attempt:0,}" Jan 30 13:46:51.430294 containerd[1460]: time="2025-01-30T13:46:51.430186725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:46:51.430294 containerd[1460]: time="2025-01-30T13:46:51.430257650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:46:51.430294 containerd[1460]: time="2025-01-30T13:46:51.430268912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:51.430517 containerd[1460]: time="2025-01-30T13:46:51.430443164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:46:51.453156 systemd[1]: Started cri-containerd-34b337b45ffbd62fe6d935610dd4ca63096571b60c5036917071a6f04d9769fa.scope - libcontainer container 34b337b45ffbd62fe6d935610dd4ca63096571b60c5036917071a6f04d9769fa. Jan 30 13:46:51.482488 containerd[1460]: time="2025-01-30T13:46:51.482428847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mxdbf,Uid:d9d7c273-d46f-43e4-bea4-90d5ceae348f,Namespace:kube-system,Attempt:0,} returns sandbox id \"34b337b45ffbd62fe6d935610dd4ca63096571b60c5036917071a6f04d9769fa\"" Jan 30 13:46:51.483290 kubelet[2499]: E0130 13:46:51.483265 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:51.488227 containerd[1460]: time="2025-01-30T13:46:51.488181649Z" level=info msg="CreateContainer within sandbox \"34b337b45ffbd62fe6d935610dd4ca63096571b60c5036917071a6f04d9769fa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:46:51.729704 containerd[1460]: time="2025-01-30T13:46:51.729650791Z" level=info msg="CreateContainer within sandbox \"34b337b45ffbd62fe6d935610dd4ca63096571b60c5036917071a6f04d9769fa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7ad5f4e6aeba8186cfbdac551561359519cac31a517b8882103c1f05568c03ec\"" Jan 30 13:46:51.730436 containerd[1460]: time="2025-01-30T13:46:51.730390078Z" level=info msg="StartContainer for \"7ad5f4e6aeba8186cfbdac551561359519cac31a517b8882103c1f05568c03ec\"" Jan 30 13:46:51.756141 systemd[1]: Started cri-containerd-7ad5f4e6aeba8186cfbdac551561359519cac31a517b8882103c1f05568c03ec.scope - libcontainer container 7ad5f4e6aeba8186cfbdac551561359519cac31a517b8882103c1f05568c03ec. 
Jan 30 13:46:51.800222 containerd[1460]: time="2025-01-30T13:46:51.800171633Z" level=info msg="StartContainer for \"7ad5f4e6aeba8186cfbdac551561359519cac31a517b8882103c1f05568c03ec\" returns successfully" Jan 30 13:46:52.612206 kubelet[2499]: E0130 13:46:52.612173 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:52.749896 kubelet[2499]: I0130 13:46:52.749762 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mxdbf" podStartSLOduration=2.749743981 podStartE2EDuration="2.749743981s" podCreationTimestamp="2025-01-30 13:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:46:52.749726318 +0000 UTC m=+8.244647899" watchObservedRunningTime="2025-01-30 13:46:52.749743981 +0000 UTC m=+8.244665562" Jan 30 13:46:53.266373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2733797228.mount: Deactivated successfully. Jan 30 13:46:53.333956 kubelet[2499]: E0130 13:46:53.333922 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:53.615079 kubelet[2499]: E0130 13:46:53.614715 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:53.615692 kubelet[2499]: E0130 13:46:53.615574 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:53.662679 kubelet[2499]: E0130 13:46:53.662630 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:53.671518 containerd[1460]: time="2025-01-30T13:46:53.671467551Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:53.673051 containerd[1460]: time="2025-01-30T13:46:53.672971269Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 13:46:53.700959 containerd[1460]: time="2025-01-30T13:46:53.700912739Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:53.755284 containerd[1460]: time="2025-01-30T13:46:53.755232098Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:46:53.756385 containerd[1460]: time="2025-01-30T13:46:53.756318412Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.634994968s" Jan 30 13:46:53.756385 containerd[1460]: time="2025-01-30T13:46:53.756368969Z" level=info msg="PullImage 
\"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 13:46:53.758535 containerd[1460]: time="2025-01-30T13:46:53.758507602Z" level=info msg="CreateContainer within sandbox \"bf8a6a8680ea5450c302aa9be86940b80af5c77ec439b0c70bc34f2cdeaf79c7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 13:46:54.046801 containerd[1460]: time="2025-01-30T13:46:54.046732108Z" level=info msg="CreateContainer within sandbox \"bf8a6a8680ea5450c302aa9be86940b80af5c77ec439b0c70bc34f2cdeaf79c7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bf8cf43d182fa8f658bee7ceb635dd7e4ab7ffd4c37069660b414e91128c2b76\"" Jan 30 13:46:54.047351 containerd[1460]: time="2025-01-30T13:46:54.047301328Z" level=info msg="StartContainer for \"bf8cf43d182fa8f658bee7ceb635dd7e4ab7ffd4c37069660b414e91128c2b76\"" Jan 30 13:46:54.078183 systemd[1]: Started cri-containerd-bf8cf43d182fa8f658bee7ceb635dd7e4ab7ffd4c37069660b414e91128c2b76.scope - libcontainer container bf8cf43d182fa8f658bee7ceb635dd7e4ab7ffd4c37069660b414e91128c2b76. Jan 30 13:46:54.103946 containerd[1460]: time="2025-01-30T13:46:54.103886023Z" level=info msg="StartContainer for \"bf8cf43d182fa8f658bee7ceb635dd7e4ab7ffd4c37069660b414e91128c2b76\" returns successfully" Jan 30 13:46:54.618693 kubelet[2499]: E0130 13:46:54.618523 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:54.618693 kubelet[2499]: E0130 13:46:54.618604 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:55.464802 update_engine[1452]: I20250130 13:46:55.464699 1452 update_attempter.cc:509] Updating boot flags... 
Jan 30 13:46:55.492024 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2895) Jan 30 13:46:55.525095 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2897) Jan 30 13:46:55.561041 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2897) Jan 30 13:46:56.923283 kubelet[2499]: I0130 13:46:56.923215 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-dq8lw" podStartSLOduration=4.286730723 podStartE2EDuration="6.923194093s" podCreationTimestamp="2025-01-30 13:46:50 +0000 UTC" firstStartedPulling="2025-01-30 13:46:51.120769079 +0000 UTC m=+6.615690660" lastFinishedPulling="2025-01-30 13:46:53.757232449 +0000 UTC m=+9.252154030" observedRunningTime="2025-01-30 13:46:54.669789612 +0000 UTC m=+10.164711193" watchObservedRunningTime="2025-01-30 13:46:56.923194093 +0000 UTC m=+12.418115674" Jan 30 13:46:56.925015 kubelet[2499]: W0130 13:46:56.924855 2499 reflector.go:569] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jan 30 13:46:56.925015 kubelet[2499]: E0130 13:46:56.924926 2499 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 30 13:46:56.926800 kubelet[2499]: W0130 13:46:56.926381 2499 reflector.go:569] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jan 30 13:46:56.926800 kubelet[2499]: E0130 13:46:56.926419 2499 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 30 13:46:56.928662 kubelet[2499]: W0130 13:46:56.928604 2499 reflector.go:569] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jan 30 13:46:56.928822 kubelet[2499]: E0130 13:46:56.928798 2499 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 30 13:46:56.935280 systemd[1]: Created slice 
kubepods-besteffort-pode2f201ee_2d39_4859_bee4_64580b5b48af.slice - libcontainer container kubepods-besteffort-pode2f201ee_2d39_4859_bee4_64580b5b48af.slice. Jan 30 13:46:56.957387 kubelet[2499]: I0130 13:46:56.957333 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e2f201ee-2d39-4859-bee4-64580b5b48af-typha-certs\") pod \"calico-typha-5bfcf4cb7b-j5hsw\" (UID: \"e2f201ee-2d39-4859-bee4-64580b5b48af\") " pod="calico-system/calico-typha-5bfcf4cb7b-j5hsw" Jan 30 13:46:56.957387 kubelet[2499]: I0130 13:46:56.957389 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2f201ee-2d39-4859-bee4-64580b5b48af-tigera-ca-bundle\") pod \"calico-typha-5bfcf4cb7b-j5hsw\" (UID: \"e2f201ee-2d39-4859-bee4-64580b5b48af\") " pod="calico-system/calico-typha-5bfcf4cb7b-j5hsw" Jan 30 13:46:56.957567 kubelet[2499]: I0130 13:46:56.957415 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf266\" (UniqueName: \"kubernetes.io/projected/e2f201ee-2d39-4859-bee4-64580b5b48af-kube-api-access-xf266\") pod \"calico-typha-5bfcf4cb7b-j5hsw\" (UID: \"e2f201ee-2d39-4859-bee4-64580b5b48af\") " pod="calico-system/calico-typha-5bfcf4cb7b-j5hsw" Jan 30 13:46:56.991712 systemd[1]: Created slice kubepods-besteffort-pod67298bc6_2e04_4196_92d4_fb4cf92e3223.slice - libcontainer container kubepods-besteffort-pod67298bc6_2e04_4196_92d4_fb4cf92e3223.slice. Jan 30 13:46:57.057666 kubelet[2499]: I0130 13:46:57.057625 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67298bc6-2e04-4196-92d4-fb4cf92e3223-tigera-ca-bundle\") pod \"calico-node-6rls9\" (UID: \"67298bc6-2e04-4196-92d4-fb4cf92e3223\") " pod="calico-system/calico-node-6rls9" Jan 30 13:46:57.057666 kubelet[2499]: I0130 13:46:57.057665 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/67298bc6-2e04-4196-92d4-fb4cf92e3223-cni-log-dir\") pod \"calico-node-6rls9\" (UID: \"67298bc6-2e04-4196-92d4-fb4cf92e3223\") " pod="calico-system/calico-node-6rls9" Jan 30 13:46:57.057801 kubelet[2499]: I0130 13:46:57.057682 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/67298bc6-2e04-4196-92d4-fb4cf92e3223-node-certs\") pod \"calico-node-6rls9\" (UID: \"67298bc6-2e04-4196-92d4-fb4cf92e3223\") " pod="calico-system/calico-node-6rls9" Jan 30 13:46:57.057801 kubelet[2499]: I0130 13:46:57.057705 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67298bc6-2e04-4196-92d4-fb4cf92e3223-xtables-lock\") pod \"calico-node-6rls9\" (UID: \"67298bc6-2e04-4196-92d4-fb4cf92e3223\") " pod="calico-system/calico-node-6rls9" Jan 30 13:46:57.057801 kubelet[2499]: I0130 13:46:57.057718 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/67298bc6-2e04-4196-92d4-fb4cf92e3223-cni-net-dir\") pod \"calico-node-6rls9\" (UID: \"67298bc6-2e04-4196-92d4-fb4cf92e3223\") " pod="calico-system/calico-node-6rls9" Jan 30 13:46:57.057801 
kubelet[2499]: I0130 13:46:57.057731 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/67298bc6-2e04-4196-92d4-fb4cf92e3223-var-run-calico\") pod \"calico-node-6rls9\" (UID: \"67298bc6-2e04-4196-92d4-fb4cf92e3223\") " pod="calico-system/calico-node-6rls9" Jan 30 13:46:57.057801 kubelet[2499]: I0130 13:46:57.057745 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/67298bc6-2e04-4196-92d4-fb4cf92e3223-var-lib-calico\") pod \"calico-node-6rls9\" (UID: \"67298bc6-2e04-4196-92d4-fb4cf92e3223\") " pod="calico-system/calico-node-6rls9" Jan 30 13:46:57.057932 kubelet[2499]: I0130 13:46:57.057760 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/67298bc6-2e04-4196-92d4-fb4cf92e3223-flexvol-driver-host\") pod \"calico-node-6rls9\" (UID: \"67298bc6-2e04-4196-92d4-fb4cf92e3223\") " pod="calico-system/calico-node-6rls9" Jan 30 13:46:57.057932 kubelet[2499]: I0130 13:46:57.057777 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wqt5\" (UniqueName: \"kubernetes.io/projected/67298bc6-2e04-4196-92d4-fb4cf92e3223-kube-api-access-7wqt5\") pod \"calico-node-6rls9\" (UID: \"67298bc6-2e04-4196-92d4-fb4cf92e3223\") " pod="calico-system/calico-node-6rls9" Jan 30 13:46:57.057932 kubelet[2499]: I0130 13:46:57.057791 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/67298bc6-2e04-4196-92d4-fb4cf92e3223-policysync\") pod \"calico-node-6rls9\" (UID: \"67298bc6-2e04-4196-92d4-fb4cf92e3223\") " pod="calico-system/calico-node-6rls9" Jan 30 13:46:57.057932 kubelet[2499]: I0130 13:46:57.057805 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/67298bc6-2e04-4196-92d4-fb4cf92e3223-cni-bin-dir\") pod \"calico-node-6rls9\" (UID: \"67298bc6-2e04-4196-92d4-fb4cf92e3223\") " pod="calico-system/calico-node-6rls9" Jan 30 13:46:57.057932 kubelet[2499]: I0130 13:46:57.057831 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67298bc6-2e04-4196-92d4-fb4cf92e3223-lib-modules\") pod \"calico-node-6rls9\" (UID: \"67298bc6-2e04-4196-92d4-fb4cf92e3223\") " pod="calico-system/calico-node-6rls9" Jan 30 13:46:57.082892 kubelet[2499]: E0130 13:46:57.082836 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t9h8r" podUID="3feaebfa-27ef-455c-82db-977542f57659" Jan 30 13:46:57.158541 kubelet[2499]: I0130 13:46:57.158482 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3feaebfa-27ef-455c-82db-977542f57659-kubelet-dir\") pod \"csi-node-driver-t9h8r\" (UID: \"3feaebfa-27ef-455c-82db-977542f57659\") " pod="calico-system/csi-node-driver-t9h8r" Jan 30 13:46:57.158686 kubelet[2499]: I0130 13:46:57.158555 2499 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3feaebfa-27ef-455c-82db-977542f57659-varrun\") pod \"csi-node-driver-t9h8r\" (UID: \"3feaebfa-27ef-455c-82db-977542f57659\") " pod="calico-system/csi-node-driver-t9h8r" Jan 30 13:46:57.158686 kubelet[2499]: I0130 13:46:57.158582 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3feaebfa-27ef-455c-82db-977542f57659-socket-dir\") pod \"csi-node-driver-t9h8r\" (UID: \"3feaebfa-27ef-455c-82db-977542f57659\") " pod="calico-system/csi-node-driver-t9h8r" Jan 30 13:46:57.158686 kubelet[2499]: I0130 13:46:57.158634 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3feaebfa-27ef-455c-82db-977542f57659-registration-dir\") pod \"csi-node-driver-t9h8r\" (UID: \"3feaebfa-27ef-455c-82db-977542f57659\") " pod="calico-system/csi-node-driver-t9h8r" Jan 30 13:46:57.158769 kubelet[2499]: I0130 13:46:57.158723 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2nv6\" (UniqueName: \"kubernetes.io/projected/3feaebfa-27ef-455c-82db-977542f57659-kube-api-access-h2nv6\") pod \"csi-node-driver-t9h8r\" (UID: \"3feaebfa-27ef-455c-82db-977542f57659\") " pod="calico-system/csi-node-driver-t9h8r" Jan 30 13:46:57.163787 kubelet[2499]: E0130 13:46:57.163751 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.163787 kubelet[2499]: W0130 13:46:57.163776 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.163908 kubelet[2499]: E0130 13:46:57.163797 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:46:57.259743 kubelet[2499]: E0130 13:46:57.259712 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.259743 kubelet[2499]: W0130 13:46:57.259731 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.259877 kubelet[2499]: E0130 13:46:57.259750 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:46:57.260024 kubelet[2499]: E0130 13:46:57.260009 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.260024 kubelet[2499]: W0130 13:46:57.260021 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.260099 kubelet[2499]: E0130 13:46:57.260038 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:46:57.260313 kubelet[2499]: E0130 13:46:57.260297 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.260313 kubelet[2499]: W0130 13:46:57.260307 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.260356 kubelet[2499]: E0130 13:46:57.260320 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:46:57.260745 kubelet[2499]: E0130 13:46:57.260709 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.260745 kubelet[2499]: W0130 13:46:57.260738 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.260851 kubelet[2499]: E0130 13:46:57.260771 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:46:57.261045 kubelet[2499]: E0130 13:46:57.261030 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.261077 kubelet[2499]: W0130 13:46:57.261051 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.261077 kubelet[2499]: E0130 13:46:57.261070 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:46:57.261379 kubelet[2499]: E0130 13:46:57.261361 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.261415 kubelet[2499]: W0130 13:46:57.261378 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.261415 kubelet[2499]: E0130 13:46:57.261397 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:46:57.261671 kubelet[2499]: E0130 13:46:57.261655 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.261671 kubelet[2499]: W0130 13:46:57.261668 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.261722 kubelet[2499]: E0130 13:46:57.261707 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:46:57.261961 kubelet[2499]: E0130 13:46:57.261885 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.261961 kubelet[2499]: W0130 13:46:57.261904 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.261961 kubelet[2499]: E0130 13:46:57.261936 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:46:57.262209 kubelet[2499]: E0130 13:46:57.262193 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.262209 kubelet[2499]: W0130 13:46:57.262207 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.262327 kubelet[2499]: E0130 13:46:57.262276 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:46:57.262510 kubelet[2499]: E0130 13:46:57.262486 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.262556 kubelet[2499]: W0130 13:46:57.262509 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.262582 kubelet[2499]: E0130 13:46:57.262553 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:46:57.262794 kubelet[2499]: E0130 13:46:57.262776 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.262847 kubelet[2499]: W0130 13:46:57.262791 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.262887 kubelet[2499]: E0130 13:46:57.262869 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:46:57.263270 kubelet[2499]: E0130 13:46:57.263122 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.263270 kubelet[2499]: W0130 13:46:57.263254 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.263359 kubelet[2499]: E0130 13:46:57.263273 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:46:57.263535 kubelet[2499]: E0130 13:46:57.263522 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.263535 kubelet[2499]: W0130 13:46:57.263533 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.263591 kubelet[2499]: E0130 13:46:57.263548 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:46:57.263792 kubelet[2499]: E0130 13:46:57.263765 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.263792 kubelet[2499]: W0130 13:46:57.263781 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.263881 kubelet[2499]: E0130 13:46:57.263853 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:46:57.264059 kubelet[2499]: E0130 13:46:57.264047 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.264059 kubelet[2499]: W0130 13:46:57.264056 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.264282 kubelet[2499]: E0130 13:46:57.264142 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:46:57.264321 kubelet[2499]: E0130 13:46:57.264247 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.264321 kubelet[2499]: W0130 13:46:57.264315 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.264375 kubelet[2499]: E0130 13:46:57.264343 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:46:57.264584 kubelet[2499]: E0130 13:46:57.264567 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.264584 kubelet[2499]: W0130 13:46:57.264580 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.264634 kubelet[2499]: E0130 13:46:57.264624 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:46:57.264845 kubelet[2499]: E0130 13:46:57.264812 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.264845 kubelet[2499]: W0130 13:46:57.264827 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.264903 kubelet[2499]: E0130 13:46:57.264846 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:46:57.265205 kubelet[2499]: E0130 13:46:57.265189 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.265205 kubelet[2499]: W0130 13:46:57.265203 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.265251 kubelet[2499]: E0130 13:46:57.265221 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:46:57.265508 kubelet[2499]: E0130 13:46:57.265493 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.265508 kubelet[2499]: W0130 13:46:57.265507 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.265569 kubelet[2499]: E0130 13:46:57.265526 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:46:57.265799 kubelet[2499]: E0130 13:46:57.265783 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.265837 kubelet[2499]: W0130 13:46:57.265797 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.265890 kubelet[2499]: E0130 13:46:57.265873 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:46:57.266220 kubelet[2499]: E0130 13:46:57.266205 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.266220 kubelet[2499]: W0130 13:46:57.266219 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.266302 kubelet[2499]: E0130 13:46:57.266249 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:46:57.266477 kubelet[2499]: E0130 13:46:57.266460 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.266477 kubelet[2499]: W0130 13:46:57.266474 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.266526 kubelet[2499]: E0130 13:46:57.266492 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:46:57.266764 kubelet[2499]: E0130 13:46:57.266752 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.266764 kubelet[2499]: W0130 13:46:57.266762 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.266810 kubelet[2499]: E0130 13:46:57.266771 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:46:57.267080 kubelet[2499]: E0130 13:46:57.267064 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:57.267080 kubelet[2499]: W0130 13:46:57.267079 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:57.267133 kubelet[2499]: E0130 13:46:57.267091 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:46:58.058533 kubelet[2499]: E0130 13:46:58.058493 2499 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:46:58.058904 kubelet[2499]: E0130 13:46:58.058576 2499 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e2f201ee-2d39-4859-bee4-64580b5b48af-tigera-ca-bundle podName:e2f201ee-2d39-4859-bee4-64580b5b48af nodeName:}" failed. No retries permitted until 2025-01-30 13:46:58.558556541 +0000 UTC m=+14.053478122 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/e2f201ee-2d39-4859-bee4-64580b5b48af-tigera-ca-bundle") pod "calico-typha-5bfcf4cb7b-j5hsw" (UID: "e2f201ee-2d39-4859-bee4-64580b5b48af") : failed to sync configmap cache: timed out waiting for the condition Jan 30 13:46:58.058904 kubelet[2499]: E0130 13:46:58.058494 2499 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Jan 30 13:46:58.058904 kubelet[2499]: E0130 13:46:58.058614 2499 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2f201ee-2d39-4859-bee4-64580b5b48af-typha-certs podName:e2f201ee-2d39-4859-bee4-64580b5b48af nodeName:}" failed. No retries permitted until 2025-01-30 13:46:58.558607457 +0000 UTC m=+14.053529039 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/e2f201ee-2d39-4859-bee4-64580b5b48af-typha-certs") pod "calico-typha-5bfcf4cb7b-j5hsw" (UID: "e2f201ee-2d39-4859-bee4-64580b5b48af") : failed to sync secret cache: timed out waiting for the condition Jan 30 13:46:58.061805 kubelet[2499]: E0130 13:46:58.061765 2499 projected.go:288] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:46:58.061805 kubelet[2499]: E0130 13:46:58.061792 2499 projected.go:194] Error preparing data for projected volume kube-api-access-xf266 for pod calico-system/calico-typha-5bfcf4cb7b-j5hsw: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:46:58.062006 kubelet[2499]: E0130 13:46:58.061832 2499 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e2f201ee-2d39-4859-bee4-64580b5b48af-kube-api-access-xf266 podName:e2f201ee-2d39-4859-bee4-64580b5b48af nodeName:}" failed. No retries permitted until 2025-01-30 13:46:58.561821252 +0000 UTC m=+14.056742833 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xf266" (UniqueName: "kubernetes.io/projected/e2f201ee-2d39-4859-bee4-64580b5b48af-kube-api-access-xf266") pod "calico-typha-5bfcf4cb7b-j5hsw" (UID: "e2f201ee-2d39-4859-bee4-64580b5b48af") : failed to sync configmap cache: timed out waiting for the condition Jan 30 13:46:58.068721 kubelet[2499]: E0130 13:46:58.068698 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:58.068721 kubelet[2499]: W0130 13:46:58.068718 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:58.068801 kubelet[2499]: E0130 13:46:58.068737 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:46:58.069049 kubelet[2499]: E0130 13:46:58.069031 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:58.069091 kubelet[2499]: W0130 13:46:58.069049 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:58.069091 kubelet[2499]: E0130 13:46:58.069072 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:46:58.069401 kubelet[2499]: E0130 13:46:58.069376 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:58.069401 kubelet[2499]: W0130 13:46:58.069392 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:58.069443 kubelet[2499]: E0130 13:46:58.069403 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:46:58.159728 kubelet[2499]: E0130 13:46:58.159690 2499 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:46:58.159870 kubelet[2499]: E0130 13:46:58.159753 2499 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/67298bc6-2e04-4196-92d4-fb4cf92e3223-tigera-ca-bundle podName:67298bc6-2e04-4196-92d4-fb4cf92e3223 nodeName:}" failed. No retries permitted until 2025-01-30 13:46:58.659736194 +0000 UTC m=+14.154657776 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/67298bc6-2e04-4196-92d4-fb4cf92e3223-tigera-ca-bundle") pod "calico-node-6rls9" (UID: "67298bc6-2e04-4196-92d4-fb4cf92e3223") : failed to sync configmap cache: timed out waiting for the condition Jan 30 13:46:58.163695 kubelet[2499]: E0130 13:46:58.163670 2499 projected.go:288] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:46:58.163695 kubelet[2499]: E0130 13:46:58.163691 2499 projected.go:194] Error preparing data for projected volume kube-api-access-7wqt5 for pod calico-system/calico-node-6rls9: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:46:58.163763 kubelet[2499]: E0130 13:46:58.163726 2499 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/67298bc6-2e04-4196-92d4-fb4cf92e3223-kube-api-access-7wqt5 podName:67298bc6-2e04-4196-92d4-fb4cf92e3223 nodeName:}" failed. No retries permitted until 2025-01-30 13:46:58.66371577 +0000 UTC m=+14.158637352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7wqt5" (UniqueName: "kubernetes.io/projected/67298bc6-2e04-4196-92d4-fb4cf92e3223-kube-api-access-7wqt5") pod "calico-node-6rls9" (UID: "67298bc6-2e04-4196-92d4-fb4cf92e3223") : failed to sync configmap cache: timed out waiting for the condition Jan 30 13:46:58.170055 kubelet[2499]: E0130 13:46:58.170027 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:58.170055 kubelet[2499]: W0130 13:46:58.170043 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:58.170131 kubelet[2499]: E0130 13:46:58.170058 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:46:58.170273 kubelet[2499]: E0130 13:46:58.170255 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:58.170273 kubelet[2499]: W0130 13:46:58.170266 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:58.170329 kubelet[2499]: E0130 13:46:58.170273 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:46:58.170491 kubelet[2499]: E0130 13:46:58.170474 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:46:58.170491 kubelet[2499]: W0130 13:46:58.170484 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:46:58.170491 kubelet[2499]: E0130 13:46:58.170491 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
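The repeated kubelet entries above all have one cause: the FlexVolume dynamic-plugin prober finds the vendor directory nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, tries to execute the driver binary uds with the argument init, and the binary is missing, so the call returns empty output and the JSON decode fails with "unexpected end of JSON input". A FlexVolume driver is simply an executable that answers each verb with a JSON status object on stdout. The sketch below is a minimal, hypothetical stand-in (not the real nodeagent driver) showing the reply the init probe expects; the capability values are illustrative only.

// Minimal sketch of a FlexVolume driver entry point, only to illustrate the
// contract the kubelet prober expects: the executable under
// .../volume/exec/nodeagent~uds/uds is run with "init" and must print a JSON
// status object. Hypothetical stand-in, not the real nodeagent driver.
package main

import (
    "encoding/json"
    "fmt"
    "os"
)

type driverStatus struct {
    Status       string          `json:"status"`                 // "Success", "Failure" or "Not supported"
    Message      string          `json:"message,omitempty"`
    Capabilities map[string]bool `json:"capabilities,omitempty"` // only meaningful for "init"
}

func main() {
    if len(os.Args) < 2 {
        // Printing nothing is exactly what produces "unexpected end of JSON input".
        os.Exit(1)
    }
    switch os.Args[1] {
    case "init":
        out, _ := json.Marshal(driverStatus{
            Status:       "Success",
            Capabilities: map[string]bool{"attach": false},
        })
        fmt.Println(string(out))
    default:
        out, _ := json.Marshal(driverStatus{Status: "Not supported"})
        fmt.Println(string(out))
    }
}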
Jan 30 13:46:58.591227 kubelet[2499]: E0130 13:46:58.591196 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t9h8r" podUID="3feaebfa-27ef-455c-82db-977542f57659" Jan 30 13:46:58.740797 kubelet[2499]: E0130 13:46:58.740762 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:58.741708 containerd[1460]: time="2025-01-30T13:46:58.741347914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bfcf4cb7b-j5hsw,Uid:e2f201ee-2d39-4859-bee4-64580b5b48af,Namespace:calico-system,Attempt:0,}" Jan 30 13:46:58.795352 kubelet[2499]: E0130 13:46:58.795309 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:46:58.795769 containerd[1460]: time="2025-01-30T13:46:58.795726100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6rls9,Uid:67298bc6-2e04-4196-92d4-fb4cf92e3223,Namespace:calico-system,Attempt:0,}" Jan 30 13:47:00.502688 containerd[1460]: time="2025-01-30T13:47:00.502572173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:47:00.503227 containerd[1460]: time="2025-01-30T13:47:00.503038735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:47:00.503227 containerd[1460]: time="2025-01-30T13:47:00.503063091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:00.503227 containerd[1460]: time="2025-01-30T13:47:00.503190332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:00.529226 systemd[1]: Started cri-containerd-05849f136746bf9e46debf66f64ebae57ade61f637fefbbfbb1cd766bb0865f5.scope - libcontainer container 05849f136746bf9e46debf66f64ebae57ade61f637fefbbfbb1cd766bb0865f5. Jan 30 13:47:00.555916 containerd[1460]: time="2025-01-30T13:47:00.555616746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:47:00.555916 containerd[1460]: time="2025-01-30T13:47:00.555676599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:47:00.555916 containerd[1460]: time="2025-01-30T13:47:00.555689845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:00.556774 containerd[1460]: time="2025-01-30T13:47:00.556278467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:00.572857 containerd[1460]: time="2025-01-30T13:47:00.572811045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bfcf4cb7b-j5hsw,Uid:e2f201ee-2d39-4859-bee4-64580b5b48af,Namespace:calico-system,Attempt:0,} returns sandbox id \"05849f136746bf9e46debf66f64ebae57ade61f637fefbbfbb1cd766bb0865f5\"" Jan 30 13:47:00.573465 kubelet[2499]: E0130 13:47:00.573427 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:00.574767 containerd[1460]: time="2025-01-30T13:47:00.574647590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 13:47:00.582131 systemd[1]: Started cri-containerd-ff06372edf47effad598b27a36c269242615ef2049c8136ffd1286843f12ad38.scope - libcontainer container ff06372edf47effad598b27a36c269242615ef2049c8136ffd1286843f12ad38. 
Jan 30 13:47:00.592004 kubelet[2499]: E0130 13:47:00.591925 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t9h8r" podUID="3feaebfa-27ef-455c-82db-977542f57659" Jan 30 13:47:00.606438 containerd[1460]: time="2025-01-30T13:47:00.606363203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6rls9,Uid:67298bc6-2e04-4196-92d4-fb4cf92e3223,Namespace:calico-system,Attempt:0,} returns sandbox id \"ff06372edf47effad598b27a36c269242615ef2049c8136ffd1286843f12ad38\"" Jan 30 13:47:00.607263 kubelet[2499]: E0130 13:47:00.607222 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:02.591067 kubelet[2499]: E0130 13:47:02.590984 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t9h8r" podUID="3feaebfa-27ef-455c-82db-977542f57659" Jan 30 13:47:02.623693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2373954967.mount: Deactivated successfully. Jan 30 13:47:04.177235 containerd[1460]: time="2025-01-30T13:47:04.177161948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:04.178178 containerd[1460]: time="2025-01-30T13:47:04.178120157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 30 13:47:04.179474 containerd[1460]: time="2025-01-30T13:47:04.179440940Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:04.182046 containerd[1460]: time="2025-01-30T13:47:04.182001733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:04.182832 containerd[1460]: time="2025-01-30T13:47:04.182703138Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.608020432s" Jan 30 13:47:04.182832 containerd[1460]: time="2025-01-30T13:47:04.182752621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 30 13:47:04.183835 containerd[1460]: time="2025-01-30T13:47:04.183798416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 13:47:04.192674 containerd[1460]: time="2025-01-30T13:47:04.191488151Z" level=info msg="CreateContainer within sandbox \"05849f136746bf9e46debf66f64ebae57ade61f637fefbbfbb1cd766bb0865f5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 13:47:04.207590 containerd[1460]: 
time="2025-01-30T13:47:04.207536355Z" level=info msg="CreateContainer within sandbox \"05849f136746bf9e46debf66f64ebae57ade61f637fefbbfbb1cd766bb0865f5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9e9abfc3332ed4f822da6325c7cc642361b8d1183644c61a8de45abc1fa6d285\"" Jan 30 13:47:04.208237 containerd[1460]: time="2025-01-30T13:47:04.208161154Z" level=info msg="StartContainer for \"9e9abfc3332ed4f822da6325c7cc642361b8d1183644c61a8de45abc1fa6d285\"" Jan 30 13:47:04.238185 systemd[1]: Started cri-containerd-9e9abfc3332ed4f822da6325c7cc642361b8d1183644c61a8de45abc1fa6d285.scope - libcontainer container 9e9abfc3332ed4f822da6325c7cc642361b8d1183644c61a8de45abc1fa6d285. Jan 30 13:47:04.284894 containerd[1460]: time="2025-01-30T13:47:04.284847773Z" level=info msg="StartContainer for \"9e9abfc3332ed4f822da6325c7cc642361b8d1183644c61a8de45abc1fa6d285\" returns successfully" Jan 30 13:47:04.591592 kubelet[2499]: E0130 13:47:04.591546 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t9h8r" podUID="3feaebfa-27ef-455c-82db-977542f57659" Jan 30 13:47:04.636540 kubelet[2499]: E0130 13:47:04.636502 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:04.648177 kubelet[2499]: I0130 13:47:04.648101 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5bfcf4cb7b-j5hsw" podStartSLOduration=5.038848466 podStartE2EDuration="8.648085066s" podCreationTimestamp="2025-01-30 13:46:56 +0000 UTC" firstStartedPulling="2025-01-30 13:47:00.57432563 +0000 UTC m=+16.069247211" lastFinishedPulling="2025-01-30 13:47:04.18356223 +0000 UTC m=+19.678483811" observedRunningTime="2025-01-30 13:47:04.64744625 +0000 UTC m=+20.142367831" watchObservedRunningTime="2025-01-30 13:47:04.648085066 +0000 UTC m=+20.143006647" Jan 30 13:47:04.690168 kubelet[2499]: E0130 13:47:04.690116 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.690168 kubelet[2499]: W0130 13:47:04.690149 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.690168 kubelet[2499]: E0130 13:47:04.690174 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:04.690472 kubelet[2499]: E0130 13:47:04.690450 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.690472 kubelet[2499]: W0130 13:47:04.690465 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.690524 kubelet[2499]: E0130 13:47:04.690475 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:47:04.690760 kubelet[2499]: E0130 13:47:04.690738 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.690760 kubelet[2499]: W0130 13:47:04.690754 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.690812 kubelet[2499]: E0130 13:47:04.690766 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:04.691064 kubelet[2499]: E0130 13:47:04.691047 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.691064 kubelet[2499]: W0130 13:47:04.691062 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.691123 kubelet[2499]: E0130 13:47:04.691073 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:04.691357 kubelet[2499]: E0130 13:47:04.691332 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.691357 kubelet[2499]: W0130 13:47:04.691348 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.691409 kubelet[2499]: E0130 13:47:04.691359 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:04.691602 kubelet[2499]: E0130 13:47:04.691586 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.691625 kubelet[2499]: W0130 13:47:04.691600 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.691625 kubelet[2499]: E0130 13:47:04.691612 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:04.691853 kubelet[2499]: E0130 13:47:04.691831 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.691853 kubelet[2499]: W0130 13:47:04.691846 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.691897 kubelet[2499]: E0130 13:47:04.691859 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:47:04.692120 kubelet[2499]: E0130 13:47:04.692104 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.692120 kubelet[2499]: W0130 13:47:04.692118 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.692168 kubelet[2499]: E0130 13:47:04.692130 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:04.692401 kubelet[2499]: E0130 13:47:04.692381 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.692401 kubelet[2499]: W0130 13:47:04.692396 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.692468 kubelet[2499]: E0130 13:47:04.692407 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:04.692650 kubelet[2499]: E0130 13:47:04.692634 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.692650 kubelet[2499]: W0130 13:47:04.692647 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.692701 kubelet[2499]: E0130 13:47:04.692660 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:04.692881 kubelet[2499]: E0130 13:47:04.692865 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.692881 kubelet[2499]: W0130 13:47:04.692878 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.692939 kubelet[2499]: E0130 13:47:04.692889 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:04.693125 kubelet[2499]: E0130 13:47:04.693109 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.693125 kubelet[2499]: W0130 13:47:04.693123 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.693179 kubelet[2499]: E0130 13:47:04.693134 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:47:04.693377 kubelet[2499]: E0130 13:47:04.693358 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.693377 kubelet[2499]: W0130 13:47:04.693372 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.693436 kubelet[2499]: E0130 13:47:04.693382 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:04.693605 kubelet[2499]: E0130 13:47:04.693590 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.693631 kubelet[2499]: W0130 13:47:04.693604 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.693631 kubelet[2499]: E0130 13:47:04.693614 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:04.693829 kubelet[2499]: E0130 13:47:04.693814 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.693851 kubelet[2499]: W0130 13:47:04.693830 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.693851 kubelet[2499]: E0130 13:47:04.693842 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:04.722336 kubelet[2499]: E0130 13:47:04.722304 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.722336 kubelet[2499]: W0130 13:47:04.722324 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.722453 kubelet[2499]: E0130 13:47:04.722348 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:04.722611 kubelet[2499]: E0130 13:47:04.722595 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.722611 kubelet[2499]: W0130 13:47:04.722605 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.722675 kubelet[2499]: E0130 13:47:04.722618 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:47:04.722960 kubelet[2499]: E0130 13:47:04.722930 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.723020 kubelet[2499]: W0130 13:47:04.722957 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.723020 kubelet[2499]: E0130 13:47:04.722984 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:04.723194 kubelet[2499]: E0130 13:47:04.723182 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.723194 kubelet[2499]: W0130 13:47:04.723191 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.723263 kubelet[2499]: E0130 13:47:04.723213 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:04.723441 kubelet[2499]: E0130 13:47:04.723430 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.723441 kubelet[2499]: W0130 13:47:04.723439 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.723491 kubelet[2499]: E0130 13:47:04.723450 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:04.723691 kubelet[2499]: E0130 13:47:04.723679 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.723691 kubelet[2499]: W0130 13:47:04.723689 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.723754 kubelet[2499]: E0130 13:47:04.723703 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:04.724054 kubelet[2499]: E0130 13:47:04.723945 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.724054 kubelet[2499]: W0130 13:47:04.723970 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.724054 kubelet[2499]: E0130 13:47:04.723985 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:47:04.724271 kubelet[2499]: E0130 13:47:04.724255 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.724319 kubelet[2499]: W0130 13:47:04.724271 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.724349 kubelet[2499]: E0130 13:47:04.724322 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:04.724508 kubelet[2499]: E0130 13:47:04.724487 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.724508 kubelet[2499]: W0130 13:47:04.724498 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.724648 kubelet[2499]: E0130 13:47:04.724524 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:04.724719 kubelet[2499]: E0130 13:47:04.724703 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.724719 kubelet[2499]: W0130 13:47:04.724717 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.724768 kubelet[2499]: E0130 13:47:04.724732 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:04.724977 kubelet[2499]: E0130 13:47:04.724962 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.724977 kubelet[2499]: W0130 13:47:04.724974 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.725065 kubelet[2499]: E0130 13:47:04.725007 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:04.725303 kubelet[2499]: E0130 13:47:04.725288 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.725303 kubelet[2499]: W0130 13:47:04.725300 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.725367 kubelet[2499]: E0130 13:47:04.725315 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:47:04.725570 kubelet[2499]: E0130 13:47:04.725554 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.725570 kubelet[2499]: W0130 13:47:04.725569 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.725627 kubelet[2499]: E0130 13:47:04.725586 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:04.725813 kubelet[2499]: E0130 13:47:04.725797 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.725847 kubelet[2499]: W0130 13:47:04.725813 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.725847 kubelet[2499]: E0130 13:47:04.725830 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:04.726104 kubelet[2499]: E0130 13:47:04.726088 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.726104 kubelet[2499]: W0130 13:47:04.726101 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.726170 kubelet[2499]: E0130 13:47:04.726115 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:04.726448 kubelet[2499]: E0130 13:47:04.726432 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.726448 kubelet[2499]: W0130 13:47:04.726444 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.726519 kubelet[2499]: E0130 13:47:04.726467 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:04.726747 kubelet[2499]: E0130 13:47:04.726732 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.726747 kubelet[2499]: W0130 13:47:04.726745 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.726797 kubelet[2499]: E0130 13:47:04.726762 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
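The containerd and systemd entries above record the start of the calico-typha container: the image ghcr.io/flatcar/calico/typha:v3.29.1 is pulled in about 3.6 s, container 9e9abfc333… is created inside sandbox 05849f1367…, and the task runs under a cri-containerd-….scope unit. Roughly the same pull/create/start sequence can be driven by hand against the containerd socket with its Go client, as in the hedged sketch below; this is an illustration only, since the kubelet itself reaches containerd through the CRI gRPC API rather than this client, and the container ID used here is made up.

// Rough sketch of the pull/create/start sequence visible in the log above, done
// directly with the containerd Go client against /run/containerd/containerd.sock.
// Hypothetical illustration only; the kubelet drives containerd through CRI, not
// through this client, and "calico-typha-demo" is an invented ID.
package main

import (
    "context"
    "log"

    containerd "github.com/containerd/containerd"
    "github.com/containerd/containerd/cio"
    "github.com/containerd/containerd/namespaces"
    "github.com/containerd/containerd/oci"
)

func main() {
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // Kubernetes-managed images and containers live in the "k8s.io" namespace.
    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    // Counterpart of the PullImage / "Pulled image ... in 3.608020432s" entries.
    image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.29.1", containerd.WithPullUnpack)
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("pulled %s", image.Name())

    // Counterpart of the CreateContainer / StartContainer entries.
    container, err := client.NewContainer(ctx, "calico-typha-demo",
        containerd.WithNewSnapshot("calico-typha-demo-snap", image),
        containerd.WithNewSpec(oci.WithImageConfig(image)),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer container.Delete(ctx, containerd.WithSnapshotCleanup)

    task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
    if err != nil {
        log.Fatal(err)
    }
    defer task.Delete(ctx)

    if err := task.Start(ctx); err != nil {
        log.Fatal(err)
    }
    log.Printf("started task with pid %d", task.Pid())
}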
Error: unexpected end of JSON input" Jan 30 13:47:04.726967 kubelet[2499]: E0130 13:47:04.726952 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:04.726967 kubelet[2499]: W0130 13:47:04.726964 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:04.727037 kubelet[2499]: E0130 13:47:04.726973 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.637455 kubelet[2499]: I0130 13:47:05.637415 2499 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:47:05.637865 kubelet[2499]: E0130 13:47:05.637768 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:05.698240 kubelet[2499]: E0130 13:47:05.698178 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.698240 kubelet[2499]: W0130 13:47:05.698226 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.698240 kubelet[2499]: E0130 13:47:05.698249 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.698497 kubelet[2499]: E0130 13:47:05.698483 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.698497 kubelet[2499]: W0130 13:47:05.698494 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.698579 kubelet[2499]: E0130 13:47:05.698504 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.698731 kubelet[2499]: E0130 13:47:05.698716 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.698731 kubelet[2499]: W0130 13:47:05.698727 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.698811 kubelet[2499]: E0130 13:47:05.698738 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:47:05.698960 kubelet[2499]: E0130 13:47:05.698946 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.698960 kubelet[2499]: W0130 13:47:05.698957 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.699109 kubelet[2499]: E0130 13:47:05.698967 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.699243 kubelet[2499]: E0130 13:47:05.699224 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.699243 kubelet[2499]: W0130 13:47:05.699236 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.699317 kubelet[2499]: E0130 13:47:05.699246 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.699457 kubelet[2499]: E0130 13:47:05.699440 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.699457 kubelet[2499]: W0130 13:47:05.699451 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.699538 kubelet[2499]: E0130 13:47:05.699460 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.699684 kubelet[2499]: E0130 13:47:05.699666 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.699684 kubelet[2499]: W0130 13:47:05.699677 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.699751 kubelet[2499]: E0130 13:47:05.699688 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.699904 kubelet[2499]: E0130 13:47:05.699887 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.699904 kubelet[2499]: W0130 13:47:05.699898 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.699986 kubelet[2499]: E0130 13:47:05.699907 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:47:05.700151 kubelet[2499]: E0130 13:47:05.700133 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.700151 kubelet[2499]: W0130 13:47:05.700144 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.700240 kubelet[2499]: E0130 13:47:05.700154 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.700388 kubelet[2499]: E0130 13:47:05.700371 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.700388 kubelet[2499]: W0130 13:47:05.700382 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.700464 kubelet[2499]: E0130 13:47:05.700392 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.700618 kubelet[2499]: E0130 13:47:05.700601 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.700618 kubelet[2499]: W0130 13:47:05.700611 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.700684 kubelet[2499]: E0130 13:47:05.700621 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.700840 kubelet[2499]: E0130 13:47:05.700823 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.700840 kubelet[2499]: W0130 13:47:05.700834 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.700921 kubelet[2499]: E0130 13:47:05.700843 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.701092 kubelet[2499]: E0130 13:47:05.701074 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.701092 kubelet[2499]: W0130 13:47:05.701085 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.701159 kubelet[2499]: E0130 13:47:05.701095 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:47:05.701324 kubelet[2499]: E0130 13:47:05.701307 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.701324 kubelet[2499]: W0130 13:47:05.701318 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.701405 kubelet[2499]: E0130 13:47:05.701327 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.701546 kubelet[2499]: E0130 13:47:05.701529 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.701546 kubelet[2499]: W0130 13:47:05.701540 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.701621 kubelet[2499]: E0130 13:47:05.701549 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.733506 kubelet[2499]: E0130 13:47:05.733459 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.733506 kubelet[2499]: W0130 13:47:05.733483 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.733506 kubelet[2499]: E0130 13:47:05.733504 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.734253 kubelet[2499]: E0130 13:47:05.733734 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.734253 kubelet[2499]: W0130 13:47:05.733744 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.734253 kubelet[2499]: E0130 13:47:05.733759 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.734253 kubelet[2499]: E0130 13:47:05.733984 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.734253 kubelet[2499]: W0130 13:47:05.734013 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.734253 kubelet[2499]: E0130 13:47:05.734031 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:47:05.734493 kubelet[2499]: E0130 13:47:05.734306 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.734493 kubelet[2499]: W0130 13:47:05.734316 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.734493 kubelet[2499]: E0130 13:47:05.734332 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.734583 kubelet[2499]: E0130 13:47:05.734544 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.734583 kubelet[2499]: W0130 13:47:05.734553 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.734583 kubelet[2499]: E0130 13:47:05.734568 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.734786 kubelet[2499]: E0130 13:47:05.734760 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.734786 kubelet[2499]: W0130 13:47:05.734772 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.734856 kubelet[2499]: E0130 13:47:05.734800 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.735010 kubelet[2499]: E0130 13:47:05.734964 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.735010 kubelet[2499]: W0130 13:47:05.734973 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.735084 kubelet[2499]: E0130 13:47:05.735008 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.735182 kubelet[2499]: E0130 13:47:05.735168 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.735182 kubelet[2499]: W0130 13:47:05.735176 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.735261 kubelet[2499]: E0130 13:47:05.735209 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:47:05.735389 kubelet[2499]: E0130 13:47:05.735371 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.735389 kubelet[2499]: W0130 13:47:05.735380 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.735457 kubelet[2499]: E0130 13:47:05.735391 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.735647 kubelet[2499]: E0130 13:47:05.735616 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.735647 kubelet[2499]: W0130 13:47:05.735630 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.735647 kubelet[2499]: E0130 13:47:05.735647 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.735870 kubelet[2499]: E0130 13:47:05.735850 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.735870 kubelet[2499]: W0130 13:47:05.735861 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.735939 kubelet[2499]: E0130 13:47:05.735875 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.736125 kubelet[2499]: E0130 13:47:05.736104 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.736125 kubelet[2499]: W0130 13:47:05.736118 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.736208 kubelet[2499]: E0130 13:47:05.736134 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.736348 kubelet[2499]: E0130 13:47:05.736330 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.736348 kubelet[2499]: W0130 13:47:05.736341 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.736407 kubelet[2499]: E0130 13:47:05.736356 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:47:05.736569 kubelet[2499]: E0130 13:47:05.736553 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.736569 kubelet[2499]: W0130 13:47:05.736562 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.736632 kubelet[2499]: E0130 13:47:05.736574 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.736843 kubelet[2499]: E0130 13:47:05.736829 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.736843 kubelet[2499]: W0130 13:47:05.736839 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.736915 kubelet[2499]: E0130 13:47:05.736851 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.737050 kubelet[2499]: E0130 13:47:05.737038 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.737050 kubelet[2499]: W0130 13:47:05.737047 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.737106 kubelet[2499]: E0130 13:47:05.737058 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.737247 kubelet[2499]: E0130 13:47:05.737234 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.737247 kubelet[2499]: W0130 13:47:05.737243 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.737322 kubelet[2499]: E0130 13:47:05.737251 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:47:05.737499 kubelet[2499]: E0130 13:47:05.737486 2499 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:47:05.737528 kubelet[2499]: W0130 13:47:05.737498 2499 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:47:05.737528 kubelet[2499]: E0130 13:47:05.737508 2499 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:47:06.399327 containerd[1460]: time="2025-01-30T13:47:06.399073587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:06.399962 containerd[1460]: time="2025-01-30T13:47:06.399928128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 30 13:47:06.401170 containerd[1460]: time="2025-01-30T13:47:06.401141186Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:06.403433 containerd[1460]: time="2025-01-30T13:47:06.403390348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:06.404060 containerd[1460]: time="2025-01-30T13:47:06.404031217Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.220203296s" Jan 30 13:47:06.404101 containerd[1460]: time="2025-01-30T13:47:06.404063468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 13:47:06.406081 containerd[1460]: time="2025-01-30T13:47:06.406060885Z" level=info msg="CreateContainer within sandbox \"ff06372edf47effad598b27a36c269242615ef2049c8136ffd1286843f12ad38\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:47:06.476450 containerd[1460]: time="2025-01-30T13:47:06.476402392Z" level=info msg="CreateContainer within sandbox \"ff06372edf47effad598b27a36c269242615ef2049c8136ffd1286843f12ad38\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"47af98444e5f79408a5f88b2d04f4dae9e756796fa879c424baf20012a966ae7\"" Jan 30 13:47:06.477163 containerd[1460]: time="2025-01-30T13:47:06.477103274Z" level=info msg="StartContainer for \"47af98444e5f79408a5f88b2d04f4dae9e756796fa879c424baf20012a966ae7\"" Jan 30 13:47:06.512227 systemd[1]: Started cri-containerd-47af98444e5f79408a5f88b2d04f4dae9e756796fa879c424baf20012a966ae7.scope - libcontainer container 47af98444e5f79408a5f88b2d04f4dae9e756796fa879c424baf20012a966ae7. Jan 30 13:47:06.561778 systemd[1]: cri-containerd-47af98444e5f79408a5f88b2d04f4dae9e756796fa879c424baf20012a966ae7.scope: Deactivated successfully. 
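
The flood of "Failed to unmarshal output for command: init" and "executable file not found in $PATH" entries above comes from the kubelet probing the FlexVolume plugin directory /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds before Calico's flexvol-driver container (pulled and started just above) has installed the uds binary there. A FlexVolume driver is expected to answer the init call with a small JSON status object on stdout; with no executable present the output is empty, hence "unexpected end of JSON input". The following is a minimal, hypothetical driver showing only that init handshake, not the actual Calico uds driver; real drivers are usually small shell scripts, Python is used here only to keep the sketch self-contained.

    #!/usr/bin/env python3
    # Minimal, hypothetical FlexVolume driver: only the "init" handshake is shown.
    # The real nodeagent~uds/uds binary installed by Calico's flexvol-driver does more.
    import json
    import sys

    def main():
        cmd = sys.argv[1] if len(sys.argv) > 1 else ""
        if cmd == "init":
            # The kubelet unmarshals this JSON; an empty reply produces the
            # "unexpected end of JSON input" errors seen in the log above.
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
            return 0
        # mount/unmount and the other FlexVolume calls are out of scope here.
        print(json.dumps({"status": "Not supported"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())
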
Jan 30 13:47:06.575235 containerd[1460]: time="2025-01-30T13:47:06.575197708Z" level=info msg="StartContainer for \"47af98444e5f79408a5f88b2d04f4dae9e756796fa879c424baf20012a966ae7\" returns successfully" Jan 30 13:47:06.592954 kubelet[2499]: E0130 13:47:06.592900 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t9h8r" podUID="3feaebfa-27ef-455c-82db-977542f57659" Jan 30 13:47:06.595108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47af98444e5f79408a5f88b2d04f4dae9e756796fa879c424baf20012a966ae7-rootfs.mount: Deactivated successfully. Jan 30 13:47:06.640539 kubelet[2499]: E0130 13:47:06.640487 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:07.161746 containerd[1460]: time="2025-01-30T13:47:07.161672016Z" level=info msg="shim disconnected" id=47af98444e5f79408a5f88b2d04f4dae9e756796fa879c424baf20012a966ae7 namespace=k8s.io Jan 30 13:47:07.161746 containerd[1460]: time="2025-01-30T13:47:07.161733763Z" level=warning msg="cleaning up after shim disconnected" id=47af98444e5f79408a5f88b2d04f4dae9e756796fa879c424baf20012a966ae7 namespace=k8s.io Jan 30 13:47:07.161746 containerd[1460]: time="2025-01-30T13:47:07.161742610Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:47:07.643634 kubelet[2499]: E0130 13:47:07.643597 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:07.644424 containerd[1460]: time="2025-01-30T13:47:07.644394132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 13:47:08.593382 kubelet[2499]: E0130 13:47:08.593337 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t9h8r" podUID="3feaebfa-27ef-455c-82db-977542f57659" Jan 30 13:47:10.591202 kubelet[2499]: E0130 13:47:10.591161 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t9h8r" podUID="3feaebfa-27ef-455c-82db-977542f57659" Jan 30 13:47:12.592592 kubelet[2499]: E0130 13:47:12.592503 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t9h8r" podUID="3feaebfa-27ef-455c-82db-977542f57659" Jan 30 13:47:13.453179 containerd[1460]: time="2025-01-30T13:47:13.453124462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:13.454033 containerd[1460]: time="2025-01-30T13:47:13.454002676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 13:47:13.455190 containerd[1460]: 
time="2025-01-30T13:47:13.455156297Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:13.457512 containerd[1460]: time="2025-01-30T13:47:13.457478308Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:13.458171 containerd[1460]: time="2025-01-30T13:47:13.458141255Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.813710174s" Jan 30 13:47:13.458205 containerd[1460]: time="2025-01-30T13:47:13.458171672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 13:47:13.460029 containerd[1460]: time="2025-01-30T13:47:13.459978674Z" level=info msg="CreateContainer within sandbox \"ff06372edf47effad598b27a36c269242615ef2049c8136ffd1286843f12ad38\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:47:13.476567 containerd[1460]: time="2025-01-30T13:47:13.476495743Z" level=info msg="CreateContainer within sandbox \"ff06372edf47effad598b27a36c269242615ef2049c8136ffd1286843f12ad38\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"431245829b10fcb9a91406af8186098474aa09f30c7e9c1b10b3bf8250382e87\"" Jan 30 13:47:13.477101 containerd[1460]: time="2025-01-30T13:47:13.477062189Z" level=info msg="StartContainer for \"431245829b10fcb9a91406af8186098474aa09f30c7e9c1b10b3bf8250382e87\"" Jan 30 13:47:13.502094 systemd[1]: run-containerd-runc-k8s.io-431245829b10fcb9a91406af8186098474aa09f30c7e9c1b10b3bf8250382e87-runc.8uwbal.mount: Deactivated successfully. Jan 30 13:47:13.508210 systemd[1]: Started cri-containerd-431245829b10fcb9a91406af8186098474aa09f30c7e9c1b10b3bf8250382e87.scope - libcontainer container 431245829b10fcb9a91406af8186098474aa09f30c7e9c1b10b3bf8250382e87. 
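
The recurring "Nameserver limits exceeded" warnings threaded through these entries reflect the cap the kubelet applies when deriving a pod's resolv.conf from the host: at most three nameservers are kept, so only the applied line "1.1.1.1 1.0.0.1 8.8.8.8" survives and any further entries are dropped. A rough illustration of that truncation follows, using a hypothetical host resolv.conf with a fourth nameserver; the log does not say what was actually omitted.

    # Illustration of the kubelet's three-nameserver cap; the sample resolv.conf
    # content below is hypothetical, only the first three entries match the
    # "applied nameserver line" from the log.
    MAX_NAMESERVERS = 3

    def applied_nameservers(resolv_conf_text):
        servers = []
        for line in resolv_conf_text.splitlines():
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
        return servers[:MAX_NAMESERVERS]  # entries past the cap are omitted, hence the warning

    host_conf = """\
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4
    """
    print(applied_nameservers(host_conf))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']
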
Jan 30 13:47:13.537511 containerd[1460]: time="2025-01-30T13:47:13.537465223Z" level=info msg="StartContainer for \"431245829b10fcb9a91406af8186098474aa09f30c7e9c1b10b3bf8250382e87\" returns successfully" Jan 30 13:47:14.050802 kubelet[2499]: E0130 13:47:14.049860 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:15.048502 kubelet[2499]: E0130 13:47:15.047546 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t9h8r" podUID="3feaebfa-27ef-455c-82db-977542f57659" Jan 30 13:47:15.052939 kubelet[2499]: E0130 13:47:15.052901 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:15.686673 containerd[1460]: time="2025-01-30T13:47:15.686633164Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:47:15.689382 systemd[1]: cri-containerd-431245829b10fcb9a91406af8186098474aa09f30c7e9c1b10b3bf8250382e87.scope: Deactivated successfully. Jan 30 13:47:15.709380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-431245829b10fcb9a91406af8186098474aa09f30c7e9c1b10b3bf8250382e87-rootfs.mount: Deactivated successfully. Jan 30 13:47:15.782977 kubelet[2499]: I0130 13:47:15.782698 2499 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 30 13:47:16.055313 systemd[1]: Created slice kubepods-burstable-pod7f34dd70_d328_492f_8c87_2756a28b76b5.slice - libcontainer container kubepods-burstable-pod7f34dd70_d328_492f_8c87_2756a28b76b5.slice. Jan 30 13:47:16.060522 systemd[1]: Created slice kubepods-besteffort-pod77b64aea_0119_4203_b1a4_d349995e60a1.slice - libcontainer container kubepods-besteffort-pod77b64aea_0119_4203_b1a4_d349995e60a1.slice. Jan 30 13:47:16.065448 systemd[1]: Created slice kubepods-burstable-pod84baafcc_c8f4_413e_80cf_ae1a5f5e4140.slice - libcontainer container kubepods-burstable-pod84baafcc_c8f4_413e_80cf_ae1a5f5e4140.slice. Jan 30 13:47:16.072080 systemd[1]: Created slice kubepods-besteffort-podb735763f_2a7c_4c9a_9b44_d3680f2a86f5.slice - libcontainer container kubepods-besteffort-podb735763f_2a7c_4c9a_9b44_d3680f2a86f5.slice. Jan 30 13:47:16.076079 systemd[1]: Created slice kubepods-besteffort-pod313a082c_52c3_48c2_8128_4216401a9378.slice - libcontainer container kubepods-besteffort-pod313a082c_52c3_48c2_8128_4216401a9378.slice. 
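
The "no network config found in /etc/cni/net.d: cni plugin not initialized" error above is containerd reloading CNI configuration after the install-cni container wrote calico-kubeconfig, at a point where no usable *.conflist exists yet under /etc/cni/net.d; pod networking stays unavailable until one appears. As a rough sketch only (the file Calico's install-cni actually renders carries many more fields), a conflist of the general shape containerd scans for could be produced like this:

    # Sketch: emit a minimal CNI network config of the shape containerd looks
    # for under /etc/cni/net.d. Field values are illustrative assumptions, not
    # the config Calico's install-cni really renders; written to /tmp on purpose.
    import json
    from pathlib import Path

    conflist = {
        "name": "k8s-pod-network",
        "cniVersion": "0.3.1",
        "plugins": [
            {
                "type": "calico",                      # plugin binary expected in /opt/cni/bin
                "datastore_type": "kubernetes",
                "ipam": {"type": "calico-ipam"},
                "kubernetes": {"kubeconfig": "/etc/cni/net.d/calico-kubeconfig"},
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    Path("/tmp/10-calico.conflist").write_text(json.dumps(conflist, indent=2))
    print("wrote", len(conflist["plugins"]), "plugin entries")
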
Jan 30 13:47:16.076363 containerd[1460]: time="2025-01-30T13:47:16.076303997Z" level=info msg="shim disconnected" id=431245829b10fcb9a91406af8186098474aa09f30c7e9c1b10b3bf8250382e87 namespace=k8s.io Jan 30 13:47:16.076515 containerd[1460]: time="2025-01-30T13:47:16.076366024Z" level=warning msg="cleaning up after shim disconnected" id=431245829b10fcb9a91406af8186098474aa09f30c7e9c1b10b3bf8250382e87 namespace=k8s.io Jan 30 13:47:16.076515 containerd[1460]: time="2025-01-30T13:47:16.076375732Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:47:16.124727 kubelet[2499]: I0130 13:47:16.124648 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv6ss\" (UniqueName: \"kubernetes.io/projected/84baafcc-c8f4-413e-80cf-ae1a5f5e4140-kube-api-access-rv6ss\") pod \"coredns-668d6bf9bc-wtmtq\" (UID: \"84baafcc-c8f4-413e-80cf-ae1a5f5e4140\") " pod="kube-system/coredns-668d6bf9bc-wtmtq" Jan 30 13:47:16.124727 kubelet[2499]: I0130 13:47:16.124710 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77b64aea-0119-4203-b1a4-d349995e60a1-tigera-ca-bundle\") pod \"calico-kube-controllers-dcb74df-bvbvk\" (UID: \"77b64aea-0119-4203-b1a4-d349995e60a1\") " pod="calico-system/calico-kube-controllers-dcb74df-bvbvk" Jan 30 13:47:16.124727 kubelet[2499]: I0130 13:47:16.124726 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh8d4\" (UniqueName: \"kubernetes.io/projected/77b64aea-0119-4203-b1a4-d349995e60a1-kube-api-access-vh8d4\") pod \"calico-kube-controllers-dcb74df-bvbvk\" (UID: \"77b64aea-0119-4203-b1a4-d349995e60a1\") " pod="calico-system/calico-kube-controllers-dcb74df-bvbvk" Jan 30 13:47:16.125192 kubelet[2499]: I0130 13:47:16.124745 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/313a082c-52c3-48c2-8128-4216401a9378-calico-apiserver-certs\") pod \"calico-apiserver-6589f8c55-lv4xg\" (UID: \"313a082c-52c3-48c2-8128-4216401a9378\") " pod="calico-apiserver/calico-apiserver-6589f8c55-lv4xg" Jan 30 13:47:16.125192 kubelet[2499]: I0130 13:47:16.124765 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6krv\" (UniqueName: \"kubernetes.io/projected/7f34dd70-d328-492f-8c87-2756a28b76b5-kube-api-access-f6krv\") pod \"coredns-668d6bf9bc-x2bcp\" (UID: \"7f34dd70-d328-492f-8c87-2756a28b76b5\") " pod="kube-system/coredns-668d6bf9bc-x2bcp" Jan 30 13:47:16.125192 kubelet[2499]: I0130 13:47:16.124784 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vhm9\" (UniqueName: \"kubernetes.io/projected/313a082c-52c3-48c2-8128-4216401a9378-kube-api-access-6vhm9\") pod \"calico-apiserver-6589f8c55-lv4xg\" (UID: \"313a082c-52c3-48c2-8128-4216401a9378\") " pod="calico-apiserver/calico-apiserver-6589f8c55-lv4xg" Jan 30 13:47:16.125192 kubelet[2499]: I0130 13:47:16.124808 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f34dd70-d328-492f-8c87-2756a28b76b5-config-volume\") pod \"coredns-668d6bf9bc-x2bcp\" (UID: \"7f34dd70-d328-492f-8c87-2756a28b76b5\") " pod="kube-system/coredns-668d6bf9bc-x2bcp" Jan 30 13:47:16.125192 kubelet[2499]: 
I0130 13:47:16.124824 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b735763f-2a7c-4c9a-9b44-d3680f2a86f5-calico-apiserver-certs\") pod \"calico-apiserver-6589f8c55-wzzjd\" (UID: \"b735763f-2a7c-4c9a-9b44-d3680f2a86f5\") " pod="calico-apiserver/calico-apiserver-6589f8c55-wzzjd" Jan 30 13:47:16.125316 kubelet[2499]: I0130 13:47:16.124843 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l2tj\" (UniqueName: \"kubernetes.io/projected/b735763f-2a7c-4c9a-9b44-d3680f2a86f5-kube-api-access-5l2tj\") pod \"calico-apiserver-6589f8c55-wzzjd\" (UID: \"b735763f-2a7c-4c9a-9b44-d3680f2a86f5\") " pod="calico-apiserver/calico-apiserver-6589f8c55-wzzjd" Jan 30 13:47:16.125316 kubelet[2499]: I0130 13:47:16.124856 2499 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84baafcc-c8f4-413e-80cf-ae1a5f5e4140-config-volume\") pod \"coredns-668d6bf9bc-wtmtq\" (UID: \"84baafcc-c8f4-413e-80cf-ae1a5f5e4140\") " pod="kube-system/coredns-668d6bf9bc-wtmtq" Jan 30 13:47:16.597872 systemd[1]: Created slice kubepods-besteffort-pod3feaebfa_27ef_455c_82db_977542f57659.slice - libcontainer container kubepods-besteffort-pod3feaebfa_27ef_455c_82db_977542f57659.slice. Jan 30 13:47:16.602705 containerd[1460]: time="2025-01-30T13:47:16.602660601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t9h8r,Uid:3feaebfa-27ef-455c-82db-977542f57659,Namespace:calico-system,Attempt:0,}" Jan 30 13:47:16.658372 kubelet[2499]: E0130 13:47:16.658338 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:16.659331 containerd[1460]: time="2025-01-30T13:47:16.658851107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x2bcp,Uid:7f34dd70-d328-492f-8c87-2756a28b76b5,Namespace:kube-system,Attempt:0,}" Jan 30 13:47:16.663518 containerd[1460]: time="2025-01-30T13:47:16.663462422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dcb74df-bvbvk,Uid:77b64aea-0119-4203-b1a4-d349995e60a1,Namespace:calico-system,Attempt:0,}" Jan 30 13:47:16.668091 kubelet[2499]: E0130 13:47:16.668004 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:16.668971 containerd[1460]: time="2025-01-30T13:47:16.668945776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wtmtq,Uid:84baafcc-c8f4-413e-80cf-ae1a5f5e4140,Namespace:kube-system,Attempt:0,}" Jan 30 13:47:16.674821 containerd[1460]: time="2025-01-30T13:47:16.674756827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6589f8c55-wzzjd,Uid:b735763f-2a7c-4c9a-9b44-d3680f2a86f5,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:47:16.679404 containerd[1460]: time="2025-01-30T13:47:16.679368000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6589f8c55-lv4xg,Uid:313a082c-52c3-48c2-8128-4216401a9378,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:47:16.694079 containerd[1460]: time="2025-01-30T13:47:16.694023087Z" level=error msg="Failed to destroy network for sandbox 
\"d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:16.694467 containerd[1460]: time="2025-01-30T13:47:16.694417339Z" level=error msg="encountered an error cleaning up failed sandbox \"d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:16.694497 containerd[1460]: time="2025-01-30T13:47:16.694464547Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t9h8r,Uid:3feaebfa-27ef-455c-82db-977542f57659,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:16.694711 kubelet[2499]: E0130 13:47:16.694669 2499 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:16.694768 kubelet[2499]: E0130 13:47:16.694738 2499 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t9h8r" Jan 30 13:47:16.694798 kubelet[2499]: E0130 13:47:16.694763 2499 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t9h8r" Jan 30 13:47:16.695104 kubelet[2499]: E0130 13:47:16.694831 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t9h8r_calico-system(3feaebfa-27ef-455c-82db-977542f57659)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t9h8r_calico-system(3feaebfa-27ef-455c-82db-977542f57659)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t9h8r" podUID="3feaebfa-27ef-455c-82db-977542f57659" Jan 30 13:47:16.770771 containerd[1460]: time="2025-01-30T13:47:16.770712353Z" level=error 
msg="Failed to destroy network for sandbox \"c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:16.771497 containerd[1460]: time="2025-01-30T13:47:16.771461091Z" level=error msg="encountered an error cleaning up failed sandbox \"c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:16.771555 containerd[1460]: time="2025-01-30T13:47:16.771513930Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x2bcp,Uid:7f34dd70-d328-492f-8c87-2756a28b76b5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:16.772353 kubelet[2499]: E0130 13:47:16.772023 2499 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:16.772353 kubelet[2499]: E0130 13:47:16.772080 2499 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-x2bcp" Jan 30 13:47:16.772353 kubelet[2499]: E0130 13:47:16.772100 2499 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-x2bcp" Jan 30 13:47:16.772489 kubelet[2499]: E0130 13:47:16.772140 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-x2bcp_kube-system(7f34dd70-d328-492f-8c87-2756a28b76b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-x2bcp_kube-system(7f34dd70-d328-492f-8c87-2756a28b76b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-x2bcp" podUID="7f34dd70-d328-492f-8c87-2756a28b76b5" Jan 30 13:47:16.784296 containerd[1460]: 
time="2025-01-30T13:47:16.784231543Z" level=error msg="Failed to destroy network for sandbox \"37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:16.785458 containerd[1460]: time="2025-01-30T13:47:16.785410590Z" level=error msg="encountered an error cleaning up failed sandbox \"37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:16.785508 containerd[1460]: time="2025-01-30T13:47:16.785479750Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dcb74df-bvbvk,Uid:77b64aea-0119-4203-b1a4-d349995e60a1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:16.785785 kubelet[2499]: E0130 13:47:16.785739 2499 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:16.785854 kubelet[2499]: E0130 13:47:16.785807 2499 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dcb74df-bvbvk" Jan 30 13:47:16.785854 kubelet[2499]: E0130 13:47:16.785831 2499 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dcb74df-bvbvk" Jan 30 13:47:16.785925 kubelet[2499]: E0130 13:47:16.785874 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-dcb74df-bvbvk_calico-system(77b64aea-0119-4203-b1a4-d349995e60a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-dcb74df-bvbvk_calico-system(77b64aea-0119-4203-b1a4-d349995e60a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-dcb74df-bvbvk" podUID="77b64aea-0119-4203-b1a4-d349995e60a1" Jan 30 13:47:16.811619 containerd[1460]: time="2025-01-30T13:47:16.811532086Z" level=error msg="Failed to destroy network for sandbox \"80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:16.812195 containerd[1460]: time="2025-01-30T13:47:16.812157301Z" level=error msg="encountered an error cleaning up failed sandbox \"80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:16.812267 containerd[1460]: time="2025-01-30T13:47:16.812209029Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6589f8c55-wzzjd,Uid:b735763f-2a7c-4c9a-9b44-d3680f2a86f5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:16.812903 kubelet[2499]: E0130 13:47:16.812506 2499 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:16.812903 kubelet[2499]: E0130 13:47:16.812567 2499 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6589f8c55-wzzjd" Jan 30 13:47:16.812903 kubelet[2499]: E0130 13:47:16.812589 2499 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6589f8c55-wzzjd" Jan 30 13:47:16.813284 kubelet[2499]: E0130 13:47:16.812634 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6589f8c55-wzzjd_calico-apiserver(b735763f-2a7c-4c9a-9b44-d3680f2a86f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6589f8c55-wzzjd_calico-apiserver(b735763f-2a7c-4c9a-9b44-d3680f2a86f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6589f8c55-wzzjd" podUID="b735763f-2a7c-4c9a-9b44-d3680f2a86f5" Jan 30 13:47:16.823618 containerd[1460]: time="2025-01-30T13:47:16.823565861Z" level=error msg="Failed to destroy network for sandbox \"e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:16.823961 containerd[1460]: time="2025-01-30T13:47:16.823933623Z" level=error msg="encountered an error cleaning up failed sandbox \"e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:16.824031 containerd[1460]: time="2025-01-30T13:47:16.824007292Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wtmtq,Uid:84baafcc-c8f4-413e-80cf-ae1a5f5e4140,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:16.824258 kubelet[2499]: E0130 13:47:16.824221 2499 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:16.824310 kubelet[2499]: E0130 13:47:16.824277 2499 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wtmtq" Jan 30 13:47:16.824310 kubelet[2499]: E0130 13:47:16.824299 2499 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wtmtq" Jan 30 13:47:16.824384 kubelet[2499]: E0130 13:47:16.824335 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wtmtq_kube-system(84baafcc-c8f4-413e-80cf-ae1a5f5e4140)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wtmtq_kube-system(84baafcc-c8f4-413e-80cf-ae1a5f5e4140)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wtmtq" podUID="84baafcc-c8f4-413e-80cf-ae1a5f5e4140" Jan 30 13:47:16.832105 containerd[1460]: time="2025-01-30T13:47:16.832047014Z" level=error msg="Failed to destroy network for sandbox \"fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:16.832497 containerd[1460]: time="2025-01-30T13:47:16.832465973Z" level=error msg="encountered an error cleaning up failed sandbox \"fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:16.832566 containerd[1460]: time="2025-01-30T13:47:16.832522940Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6589f8c55-lv4xg,Uid:313a082c-52c3-48c2-8128-4216401a9378,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:16.832727 kubelet[2499]: E0130 13:47:16.832697 2499 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:16.832761 kubelet[2499]: E0130 13:47:16.832740 2499 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6589f8c55-lv4xg" Jan 30 13:47:16.832784 kubelet[2499]: E0130 13:47:16.832758 2499 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6589f8c55-lv4xg" Jan 30 13:47:16.832807 kubelet[2499]: E0130 13:47:16.832790 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6589f8c55-lv4xg_calico-apiserver(313a082c-52c3-48c2-8128-4216401a9378)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-6589f8c55-lv4xg_calico-apiserver(313a082c-52c3-48c2-8128-4216401a9378)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6589f8c55-lv4xg" podUID="313a082c-52c3-48c2-8128-4216401a9378" Jan 30 13:47:16.879552 systemd[1]: Started sshd@7-10.0.0.69:22-10.0.0.1:33936.service - OpenSSH per-connection server daemon (10.0.0.1:33936). Jan 30 13:47:16.919733 sshd[3567]: Accepted publickey for core from 10.0.0.1 port 33936 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:16.921170 sshd[3567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:16.925156 systemd-logind[1449]: New session 8 of user core. Jan 30 13:47:16.935173 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:47:17.054306 sshd[3567]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:17.057060 kubelet[2499]: I0130 13:47:17.056901 2499 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" Jan 30 13:47:17.058222 containerd[1460]: time="2025-01-30T13:47:17.057677323Z" level=info msg="StopPodSandbox for \"fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a\"" Jan 30 13:47:17.058222 containerd[1460]: time="2025-01-30T13:47:17.057879012Z" level=info msg="Ensure that sandbox fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a in task-service has been cleanup successfully" Jan 30 13:47:17.058190 systemd[1]: sshd@7-10.0.0.69:22-10.0.0.1:33936.service: Deactivated successfully. Jan 30 13:47:17.058510 kubelet[2499]: I0130 13:47:17.057760 2499 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" Jan 30 13:47:17.058790 containerd[1460]: time="2025-01-30T13:47:17.058739600Z" level=info msg="StopPodSandbox for \"37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81\"" Jan 30 13:47:17.059177 containerd[1460]: time="2025-01-30T13:47:17.059035718Z" level=info msg="Ensure that sandbox 37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81 in task-service has been cleanup successfully" Jan 30 13:47:17.061424 kubelet[2499]: I0130 13:47:17.061116 2499 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" Jan 30 13:47:17.062588 containerd[1460]: time="2025-01-30T13:47:17.062436764Z" level=info msg="StopPodSandbox for \"c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4\"" Jan 30 13:47:17.063439 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:47:17.064041 containerd[1460]: time="2025-01-30T13:47:17.064008960Z" level=info msg="Ensure that sandbox c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4 in task-service has been cleanup successfully" Jan 30 13:47:17.064935 kubelet[2499]: I0130 13:47:17.064883 2499 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" Jan 30 13:47:17.065776 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. 
Jan 30 13:47:17.066013 containerd[1460]: time="2025-01-30T13:47:17.065954971Z" level=info msg="StopPodSandbox for \"d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456\"" Jan 30 13:47:17.066184 containerd[1460]: time="2025-01-30T13:47:17.066153013Z" level=info msg="Ensure that sandbox d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456 in task-service has been cleanup successfully" Jan 30 13:47:17.068026 systemd-logind[1449]: Removed session 8. Jan 30 13:47:17.070271 kubelet[2499]: E0130 13:47:17.070244 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:17.076515 kubelet[2499]: I0130 13:47:17.076485 2499 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" Jan 30 13:47:17.077409 containerd[1460]: time="2025-01-30T13:47:17.076982490Z" level=info msg="StopPodSandbox for \"80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac\"" Jan 30 13:47:17.077409 containerd[1460]: time="2025-01-30T13:47:17.077183759Z" level=info msg="Ensure that sandbox 80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac in task-service has been cleanup successfully" Jan 30 13:47:17.078938 containerd[1460]: time="2025-01-30T13:47:17.078919112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:47:17.084979 kubelet[2499]: I0130 13:47:17.084922 2499 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" Jan 30 13:47:17.085607 containerd[1460]: time="2025-01-30T13:47:17.085549581Z" level=info msg="StopPodSandbox for \"e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8\"" Jan 30 13:47:17.085784 containerd[1460]: time="2025-01-30T13:47:17.085751311Z" level=info msg="Ensure that sandbox e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8 in task-service has been cleanup successfully" Jan 30 13:47:17.116928 containerd[1460]: time="2025-01-30T13:47:17.116883691Z" level=error msg="StopPodSandbox for \"37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81\" failed" error="failed to destroy network for sandbox \"37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:17.117301 kubelet[2499]: E0130 13:47:17.117264 2499 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" Jan 30 13:47:17.117530 kubelet[2499]: E0130 13:47:17.117487 2499 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81"} Jan 30 13:47:17.118257 kubelet[2499]: E0130 13:47:17.118195 2499 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"77b64aea-0119-4203-b1a4-d349995e60a1\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:47:17.118257 kubelet[2499]: E0130 13:47:17.118225 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"77b64aea-0119-4203-b1a4-d349995e60a1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dcb74df-bvbvk" podUID="77b64aea-0119-4203-b1a4-d349995e60a1" Jan 30 13:47:17.138472 containerd[1460]: time="2025-01-30T13:47:17.137654164Z" level=error msg="StopPodSandbox for \"e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8\" failed" error="failed to destroy network for sandbox \"e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:17.139087 kubelet[2499]: E0130 13:47:17.138846 2499 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" Jan 30 13:47:17.139468 kubelet[2499]: E0130 13:47:17.139443 2499 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8"} Jan 30 13:47:17.139602 kubelet[2499]: E0130 13:47:17.139525 2499 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"84baafcc-c8f4-413e-80cf-ae1a5f5e4140\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:47:17.139602 kubelet[2499]: E0130 13:47:17.139553 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"84baafcc-c8f4-413e-80cf-ae1a5f5e4140\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wtmtq" podUID="84baafcc-c8f4-413e-80cf-ae1a5f5e4140" Jan 30 13:47:17.140679 containerd[1460]: time="2025-01-30T13:47:17.140616024Z" level=error 
msg="StopPodSandbox for \"80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac\" failed" error="failed to destroy network for sandbox \"80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:17.140813 containerd[1460]: time="2025-01-30T13:47:17.140759174Z" level=error msg="StopPodSandbox for \"d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456\" failed" error="failed to destroy network for sandbox \"d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:17.140958 kubelet[2499]: E0130 13:47:17.140895 2499 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" Jan 30 13:47:17.141120 kubelet[2499]: E0130 13:47:17.140937 2499 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" Jan 30 13:47:17.141120 kubelet[2499]: E0130 13:47:17.140982 2499 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac"} Jan 30 13:47:17.141120 kubelet[2499]: E0130 13:47:17.141061 2499 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456"} Jan 30 13:47:17.141120 kubelet[2499]: E0130 13:47:17.141084 2499 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3feaebfa-27ef-455c-82db-977542f57659\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:47:17.141120 kubelet[2499]: E0130 13:47:17.141101 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3feaebfa-27ef-455c-82db-977542f57659\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t9h8r" 
podUID="3feaebfa-27ef-455c-82db-977542f57659" Jan 30 13:47:17.141276 kubelet[2499]: E0130 13:47:17.141108 2499 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b735763f-2a7c-4c9a-9b44-d3680f2a86f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:47:17.141507 kubelet[2499]: E0130 13:47:17.141467 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b735763f-2a7c-4c9a-9b44-d3680f2a86f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6589f8c55-wzzjd" podUID="b735763f-2a7c-4c9a-9b44-d3680f2a86f5" Jan 30 13:47:17.147143 containerd[1460]: time="2025-01-30T13:47:17.147090360Z" level=error msg="StopPodSandbox for \"fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a\" failed" error="failed to destroy network for sandbox \"fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:17.148020 kubelet[2499]: E0130 13:47:17.147896 2499 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" Jan 30 13:47:17.148020 kubelet[2499]: E0130 13:47:17.147928 2499 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a"} Jan 30 13:47:17.148020 kubelet[2499]: E0130 13:47:17.147952 2499 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"313a082c-52c3-48c2-8128-4216401a9378\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:47:17.148020 kubelet[2499]: E0130 13:47:17.147980 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"313a082c-52c3-48c2-8128-4216401a9378\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6589f8c55-lv4xg" podUID="313a082c-52c3-48c2-8128-4216401a9378" Jan 30 13:47:17.150107 containerd[1460]: time="2025-01-30T13:47:17.149944017Z" level=error msg="StopPodSandbox for \"c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4\" failed" error="failed to destroy network for sandbox \"c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:47:17.150294 kubelet[2499]: E0130 13:47:17.150266 2499 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" Jan 30 13:47:17.150357 kubelet[2499]: E0130 13:47:17.150294 2499 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4"} Jan 30 13:47:17.150357 kubelet[2499]: E0130 13:47:17.150315 2499 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7f34dd70-d328-492f-8c87-2756a28b76b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:47:17.150357 kubelet[2499]: E0130 13:47:17.150330 2499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7f34dd70-d328-492f-8c87-2756a28b76b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-x2bcp" podUID="7f34dd70-d328-492f-8c87-2756a28b76b5" Jan 30 13:47:17.709917 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a-shm.mount: Deactivated successfully. Jan 30 13:47:17.710058 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8-shm.mount: Deactivated successfully. Jan 30 13:47:17.710135 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac-shm.mount: Deactivated successfully. Jan 30 13:47:17.710214 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81-shm.mount: Deactivated successfully. 
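Editor's note: every RunPodSandbox and StopPodSandbox failure above shares one root cause reported by the Calico CNI plugin itself: it cannot stat /var/lib/calico/nodename, a file the calico/node container writes into a host-path mount once it is running. Until calico-node starts (its image is still being pulled at this point in the log), every sandbox operation on the node fails with the same error. The following is a minimal sketch of that check, assuming only what the error message states; it is not the actual plugin source.

// nodename_check.go - illustrative sketch of the failing check, not Calico code.
package main

import (
	"fmt"
	"os"
)

// Path taken verbatim from the error messages in the log above.
const nodenameFile = "/var/lib/calico/nodename"

// nodename reads the node name that calico/node writes after it starts and
// mounts /var/lib/calico/ from the host. If the file is missing, it returns
// an error shaped like the one repeated throughout the log.
func nodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return string(data), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}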
Jan 30 13:47:17.710286 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4-shm.mount: Deactivated successfully. Jan 30 13:47:22.064574 systemd[1]: Started sshd@8-10.0.0.69:22-10.0.0.1:41970.service - OpenSSH per-connection server daemon (10.0.0.1:41970). Jan 30 13:47:22.357931 sshd[3724]: Accepted publickey for core from 10.0.0.1 port 41970 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:22.359222 sshd[3724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:22.363278 systemd-logind[1449]: New session 9 of user core. Jan 30 13:47:22.374136 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:47:22.511489 sshd[3724]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:22.515018 systemd[1]: sshd@8-10.0.0.69:22-10.0.0.1:41970.service: Deactivated successfully. Jan 30 13:47:22.517015 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:47:22.517677 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:47:22.518827 systemd-logind[1449]: Removed session 9. Jan 30 13:47:24.338954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount703320935.mount: Deactivated successfully. Jan 30 13:47:24.572361 kubelet[2499]: I0130 13:47:24.572316 2499 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:47:24.572957 kubelet[2499]: E0130 13:47:24.572599 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:25.104951 kubelet[2499]: E0130 13:47:25.104921 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:25.289509 containerd[1460]: time="2025-01-30T13:47:25.289346689Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:25.302142 containerd[1460]: time="2025-01-30T13:47:25.302072666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 13:47:25.304420 containerd[1460]: time="2025-01-30T13:47:25.304378978Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:25.311105 containerd[1460]: time="2025-01-30T13:47:25.311055239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:25.314005 containerd[1460]: time="2025-01-30T13:47:25.312204779Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.23309591s" Jan 30 13:47:25.314005 containerd[1460]: time="2025-01-30T13:47:25.312234855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" 
Jan 30 13:47:25.328575 containerd[1460]: time="2025-01-30T13:47:25.328519417Z" level=info msg="CreateContainer within sandbox \"ff06372edf47effad598b27a36c269242615ef2049c8136ffd1286843f12ad38\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:47:25.382963 containerd[1460]: time="2025-01-30T13:47:25.382844548Z" level=info msg="CreateContainer within sandbox \"ff06372edf47effad598b27a36c269242615ef2049c8136ffd1286843f12ad38\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7568ff59db4177f551d16002b6da0d8f2581fee30585287a971376921baa4a84\"" Jan 30 13:47:25.383454 containerd[1460]: time="2025-01-30T13:47:25.383403317Z" level=info msg="StartContainer for \"7568ff59db4177f551d16002b6da0d8f2581fee30585287a971376921baa4a84\"" Jan 30 13:47:25.458165 systemd[1]: Started cri-containerd-7568ff59db4177f551d16002b6da0d8f2581fee30585287a971376921baa4a84.scope - libcontainer container 7568ff59db4177f551d16002b6da0d8f2581fee30585287a971376921baa4a84. Jan 30 13:47:25.670866 containerd[1460]: time="2025-01-30T13:47:25.670421520Z" level=info msg="StartContainer for \"7568ff59db4177f551d16002b6da0d8f2581fee30585287a971376921baa4a84\" returns successfully" Jan 30 13:47:25.696302 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 13:47:25.696417 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 30 13:47:26.107734 kubelet[2499]: E0130 13:47:26.107657 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:26.119117 kubelet[2499]: I0130 13:47:26.119054 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6rls9" podStartSLOduration=5.411794187 podStartE2EDuration="30.119035164s" podCreationTimestamp="2025-01-30 13:46:56 +0000 UTC" firstStartedPulling="2025-01-30 13:47:00.607825478 +0000 UTC m=+16.102747059" lastFinishedPulling="2025-01-30 13:47:25.315066455 +0000 UTC m=+40.809988036" observedRunningTime="2025-01-30 13:47:26.118868782 +0000 UTC m=+41.613790383" watchObservedRunningTime="2025-01-30 13:47:26.119035164 +0000 UTC m=+41.613956745" Jan 30 13:47:26.379831 systemd[1]: run-containerd-runc-k8s.io-7568ff59db4177f551d16002b6da0d8f2581fee30585287a971376921baa4a84-runc.dxLgLo.mount: Deactivated successfully. Jan 30 13:47:27.082168 kernel: bpftool[3958]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:47:27.110505 kubelet[2499]: E0130 13:47:27.110383 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:27.331142 systemd-networkd[1402]: vxlan.calico: Link UP Jan 30 13:47:27.331155 systemd-networkd[1402]: vxlan.calico: Gained carrier Jan 30 13:47:27.525033 systemd[1]: Started sshd@9-10.0.0.69:22-10.0.0.1:56676.service - OpenSSH per-connection server daemon (10.0.0.1:56676). Jan 30 13:47:27.569790 sshd[4023]: Accepted publickey for core from 10.0.0.1 port 56676 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:27.571746 sshd[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:27.576259 systemd-logind[1449]: New session 10 of user core. Jan 30 13:47:27.581199 systemd[1]: Started session-10.scope - Session 10 of User core. 
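Editor's note: the pod_startup_latency_tracker entry above reports two figures for calico-node-6rls9: podStartE2EDuration (observedRunningTime minus podCreationTimestamp) and podStartSLOduration (the same interval minus the time spent pulling the image, firstStartedPulling to lastFinishedPulling). The small program below is a worked example, not kubelet code; it recomputes both values from the timestamps printed in that log line.

// pod_startup_latency.go - recomputes the durations reported by kubelet above.
package main

import (
	"fmt"
	"time"
)

// mustParse parses timestamps in the "2006-01-02 15:04:05 -0700 MST" form
// used in the log (Go accepts the fractional seconds when parsing).
func mustParse(v string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-01-30 13:46:56 +0000 UTC")             // podCreationTimestamp
	firstPull := mustParse("2025-01-30 13:47:00.607825478 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2025-01-30 13:47:25.315066455 +0000 UTC")  // lastFinishedPulling
	running := mustParse("2025-01-30 13:47:26.118868782 +0000 UTC")   // observedRunningTime

	e2e := running.Sub(created)          // 30.119035164s in the log
	slo := e2e - lastPull.Sub(firstPull) // 5.411794187s in the log
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}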
Jan 30 13:47:27.702705 sshd[4023]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:27.706748 systemd[1]: sshd@9-10.0.0.69:22-10.0.0.1:56676.service: Deactivated successfully. Jan 30 13:47:27.708659 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:47:27.709325 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:47:27.710286 systemd-logind[1449]: Removed session 10. Jan 30 13:47:28.591620 containerd[1460]: time="2025-01-30T13:47:28.591546087Z" level=info msg="StopPodSandbox for \"d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456\"" Jan 30 13:47:28.827075 containerd[1460]: 2025-01-30 13:47:28.673 [INFO][4086] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" Jan 30 13:47:28.827075 containerd[1460]: 2025-01-30 13:47:28.673 [INFO][4086] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" iface="eth0" netns="/var/run/netns/cni-21e43fff-9480-74a4-5afb-b570de6e9f88" Jan 30 13:47:28.827075 containerd[1460]: 2025-01-30 13:47:28.673 [INFO][4086] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" iface="eth0" netns="/var/run/netns/cni-21e43fff-9480-74a4-5afb-b570de6e9f88" Jan 30 13:47:28.827075 containerd[1460]: 2025-01-30 13:47:28.674 [INFO][4086] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" iface="eth0" netns="/var/run/netns/cni-21e43fff-9480-74a4-5afb-b570de6e9f88" Jan 30 13:47:28.827075 containerd[1460]: 2025-01-30 13:47:28.674 [INFO][4086] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" Jan 30 13:47:28.827075 containerd[1460]: 2025-01-30 13:47:28.674 [INFO][4086] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" Jan 30 13:47:28.827075 containerd[1460]: 2025-01-30 13:47:28.750 [INFO][4093] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" HandleID="k8s-pod-network.d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" Workload="localhost-k8s-csi--node--driver--t9h8r-eth0" Jan 30 13:47:28.827075 containerd[1460]: 2025-01-30 13:47:28.751 [INFO][4093] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:47:28.827075 containerd[1460]: 2025-01-30 13:47:28.751 [INFO][4093] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:47:28.827075 containerd[1460]: 2025-01-30 13:47:28.820 [WARNING][4093] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" HandleID="k8s-pod-network.d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" Workload="localhost-k8s-csi--node--driver--t9h8r-eth0" Jan 30 13:47:28.827075 containerd[1460]: 2025-01-30 13:47:28.820 [INFO][4093] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" HandleID="k8s-pod-network.d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" Workload="localhost-k8s-csi--node--driver--t9h8r-eth0" Jan 30 13:47:28.827075 containerd[1460]: 2025-01-30 13:47:28.822 [INFO][4093] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:47:28.827075 containerd[1460]: 2025-01-30 13:47:28.824 [INFO][4086] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" Jan 30 13:47:28.828132 containerd[1460]: time="2025-01-30T13:47:28.828096281Z" level=info msg="TearDown network for sandbox \"d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456\" successfully" Jan 30 13:47:28.828196 containerd[1460]: time="2025-01-30T13:47:28.828132810Z" level=info msg="StopPodSandbox for \"d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456\" returns successfully" Jan 30 13:47:28.829949 containerd[1460]: time="2025-01-30T13:47:28.829909507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t9h8r,Uid:3feaebfa-27ef-455c-82db-977542f57659,Namespace:calico-system,Attempt:1,}" Jan 30 13:47:28.830131 systemd[1]: run-netns-cni\x2d21e43fff\x2d9480\x2d74a4\x2d5afb\x2db570de6e9f88.mount: Deactivated successfully. Jan 30 13:47:29.330189 systemd-networkd[1402]: vxlan.calico: Gained IPv6LL Jan 30 13:47:29.557213 systemd-networkd[1402]: cali937ff0a1d68: Link UP Jan 30 13:47:29.557429 systemd-networkd[1402]: cali937ff0a1d68: Gained carrier Jan 30 13:47:29.591460 containerd[1460]: time="2025-01-30T13:47:29.591329588Z" level=info msg="StopPodSandbox for \"37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81\"" Jan 30 13:47:29.617527 containerd[1460]: 2025-01-30 13:47:29.285 [INFO][4103] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--t9h8r-eth0 csi-node-driver- calico-system 3feaebfa-27ef-455c-82db-977542f57659 855 0 2025-01-30 13:46:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-t9h8r eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali937ff0a1d68 [] []}} ContainerID="d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17" Namespace="calico-system" Pod="csi-node-driver-t9h8r" WorkloadEndpoint="localhost-k8s-csi--node--driver--t9h8r-" Jan 30 13:47:29.617527 containerd[1460]: 2025-01-30 13:47:29.285 [INFO][4103] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17" Namespace="calico-system" Pod="csi-node-driver-t9h8r" WorkloadEndpoint="localhost-k8s-csi--node--driver--t9h8r-eth0" Jan 30 13:47:29.617527 containerd[1460]: 2025-01-30 13:47:29.313 [INFO][4116] ipam/ipam_plugin.go 225: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17" HandleID="k8s-pod-network.d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17" Workload="localhost-k8s-csi--node--driver--t9h8r-eth0" Jan 30 13:47:29.617527 containerd[1460]: 2025-01-30 13:47:29.320 [INFO][4116] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17" HandleID="k8s-pod-network.d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17" Workload="localhost-k8s-csi--node--driver--t9h8r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002961b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-t9h8r", "timestamp":"2025-01-30 13:47:29.313224248 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:47:29.617527 containerd[1460]: 2025-01-30 13:47:29.320 [INFO][4116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:47:29.617527 containerd[1460]: 2025-01-30 13:47:29.320 [INFO][4116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:47:29.617527 containerd[1460]: 2025-01-30 13:47:29.320 [INFO][4116] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:47:29.617527 containerd[1460]: 2025-01-30 13:47:29.322 [INFO][4116] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17" host="localhost" Jan 30 13:47:29.617527 containerd[1460]: 2025-01-30 13:47:29.326 [INFO][4116] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:47:29.617527 containerd[1460]: 2025-01-30 13:47:29.329 [INFO][4116] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:47:29.617527 containerd[1460]: 2025-01-30 13:47:29.331 [INFO][4116] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:47:29.617527 containerd[1460]: 2025-01-30 13:47:29.333 [INFO][4116] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:47:29.617527 containerd[1460]: 2025-01-30 13:47:29.333 [INFO][4116] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17" host="localhost" Jan 30 13:47:29.617527 containerd[1460]: 2025-01-30 13:47:29.334 [INFO][4116] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17 Jan 30 13:47:29.617527 containerd[1460]: 2025-01-30 13:47:29.424 [INFO][4116] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17" host="localhost" Jan 30 13:47:29.617527 containerd[1460]: 2025-01-30 13:47:29.551 [INFO][4116] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17" host="localhost" Jan 30 13:47:29.617527 containerd[1460]: 2025-01-30 13:47:29.551 [INFO][4116] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] 
handle="k8s-pod-network.d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17" host="localhost" Jan 30 13:47:29.617527 containerd[1460]: 2025-01-30 13:47:29.551 [INFO][4116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:47:29.617527 containerd[1460]: 2025-01-30 13:47:29.551 [INFO][4116] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17" HandleID="k8s-pod-network.d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17" Workload="localhost-k8s-csi--node--driver--t9h8r-eth0" Jan 30 13:47:29.618418 containerd[1460]: 2025-01-30 13:47:29.554 [INFO][4103] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17" Namespace="calico-system" Pod="csi-node-driver-t9h8r" WorkloadEndpoint="localhost-k8s-csi--node--driver--t9h8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t9h8r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3feaebfa-27ef-455c-82db-977542f57659", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 46, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-t9h8r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali937ff0a1d68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:47:29.618418 containerd[1460]: 2025-01-30 13:47:29.554 [INFO][4103] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17" Namespace="calico-system" Pod="csi-node-driver-t9h8r" WorkloadEndpoint="localhost-k8s-csi--node--driver--t9h8r-eth0" Jan 30 13:47:29.618418 containerd[1460]: 2025-01-30 13:47:29.554 [INFO][4103] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali937ff0a1d68 ContainerID="d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17" Namespace="calico-system" Pod="csi-node-driver-t9h8r" WorkloadEndpoint="localhost-k8s-csi--node--driver--t9h8r-eth0" Jan 30 13:47:29.618418 containerd[1460]: 2025-01-30 13:47:29.557 [INFO][4103] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17" Namespace="calico-system" Pod="csi-node-driver-t9h8r" WorkloadEndpoint="localhost-k8s-csi--node--driver--t9h8r-eth0" Jan 30 13:47:29.618418 containerd[1460]: 2025-01-30 13:47:29.557 [INFO][4103] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17" Namespace="calico-system" Pod="csi-node-driver-t9h8r" WorkloadEndpoint="localhost-k8s-csi--node--driver--t9h8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t9h8r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3feaebfa-27ef-455c-82db-977542f57659", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 46, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17", Pod:"csi-node-driver-t9h8r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali937ff0a1d68", MAC:"7a:0a:a7:c7:f1:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:47:29.618418 containerd[1460]: 2025-01-30 13:47:29.612 [INFO][4103] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17" Namespace="calico-system" Pod="csi-node-driver-t9h8r" WorkloadEndpoint="localhost-k8s-csi--node--driver--t9h8r-eth0" Jan 30 13:47:29.663840 containerd[1460]: time="2025-01-30T13:47:29.662202103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:47:29.663840 containerd[1460]: time="2025-01-30T13:47:29.662255022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:47:29.663840 containerd[1460]: time="2025-01-30T13:47:29.662305627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:29.663840 containerd[1460]: time="2025-01-30T13:47:29.662425221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:29.686221 systemd[1]: Started cri-containerd-d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17.scope - libcontainer container d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17. Jan 30 13:47:29.696758 containerd[1460]: 2025-01-30 13:47:29.652 [INFO][4141] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" Jan 30 13:47:29.696758 containerd[1460]: 2025-01-30 13:47:29.653 [INFO][4141] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" iface="eth0" netns="/var/run/netns/cni-7c3d535b-dc86-a7ca-ae3e-bc4709f029bc" Jan 30 13:47:29.696758 containerd[1460]: 2025-01-30 13:47:29.653 [INFO][4141] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" iface="eth0" netns="/var/run/netns/cni-7c3d535b-dc86-a7ca-ae3e-bc4709f029bc" Jan 30 13:47:29.696758 containerd[1460]: 2025-01-30 13:47:29.653 [INFO][4141] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" iface="eth0" netns="/var/run/netns/cni-7c3d535b-dc86-a7ca-ae3e-bc4709f029bc" Jan 30 13:47:29.696758 containerd[1460]: 2025-01-30 13:47:29.653 [INFO][4141] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" Jan 30 13:47:29.696758 containerd[1460]: 2025-01-30 13:47:29.653 [INFO][4141] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" Jan 30 13:47:29.696758 containerd[1460]: 2025-01-30 13:47:29.681 [INFO][4169] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" HandleID="k8s-pod-network.37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" Workload="localhost-k8s-calico--kube--controllers--dcb74df--bvbvk-eth0" Jan 30 13:47:29.696758 containerd[1460]: 2025-01-30 13:47:29.681 [INFO][4169] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:47:29.696758 containerd[1460]: 2025-01-30 13:47:29.681 [INFO][4169] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:47:29.696758 containerd[1460]: 2025-01-30 13:47:29.687 [WARNING][4169] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" HandleID="k8s-pod-network.37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" Workload="localhost-k8s-calico--kube--controllers--dcb74df--bvbvk-eth0" Jan 30 13:47:29.696758 containerd[1460]: 2025-01-30 13:47:29.687 [INFO][4169] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" HandleID="k8s-pod-network.37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" Workload="localhost-k8s-calico--kube--controllers--dcb74df--bvbvk-eth0" Jan 30 13:47:29.696758 containerd[1460]: 2025-01-30 13:47:29.688 [INFO][4169] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:47:29.696758 containerd[1460]: 2025-01-30 13:47:29.693 [INFO][4141] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" Jan 30 13:47:29.697567 containerd[1460]: time="2025-01-30T13:47:29.697250326Z" level=info msg="TearDown network for sandbox \"37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81\" successfully" Jan 30 13:47:29.697567 containerd[1460]: time="2025-01-30T13:47:29.697302104Z" level=info msg="StopPodSandbox for \"37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81\" returns successfully" Jan 30 13:47:29.698532 containerd[1460]: time="2025-01-30T13:47:29.698495705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dcb74df-bvbvk,Uid:77b64aea-0119-4203-b1a4-d349995e60a1,Namespace:calico-system,Attempt:1,}" Jan 30 13:47:29.700590 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:47:29.713935 containerd[1460]: time="2025-01-30T13:47:29.713887398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t9h8r,Uid:3feaebfa-27ef-455c-82db-977542f57659,Namespace:calico-system,Attempt:1,} returns sandbox id \"d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17\"" Jan 30 13:47:29.715805 containerd[1460]: time="2025-01-30T13:47:29.715745136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:47:29.831009 systemd[1]: run-netns-cni\x2d7c3d535b\x2ddc86\x2da7ca\x2dae3e\x2dbc4709f029bc.mount: Deactivated successfully. Jan 30 13:47:30.168717 systemd-networkd[1402]: cali8d98e84fc8b: Link UP Jan 30 13:47:30.169273 systemd-networkd[1402]: cali8d98e84fc8b: Gained carrier Jan 30 13:47:30.185513 containerd[1460]: 2025-01-30 13:47:29.792 [INFO][4208] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--dcb74df--bvbvk-eth0 calico-kube-controllers-dcb74df- calico-system 77b64aea-0119-4203-b1a4-d349995e60a1 865 0 2025-01-30 13:46:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:dcb74df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-dcb74df-bvbvk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8d98e84fc8b [] []}} ContainerID="ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e" Namespace="calico-system" Pod="calico-kube-controllers-dcb74df-bvbvk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dcb74df--bvbvk-" Jan 30 13:47:30.185513 containerd[1460]: 2025-01-30 13:47:29.792 [INFO][4208] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e" Namespace="calico-system" Pod="calico-kube-controllers-dcb74df-bvbvk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dcb74df--bvbvk-eth0" Jan 30 13:47:30.185513 containerd[1460]: 2025-01-30 13:47:29.823 [INFO][4222] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e" HandleID="k8s-pod-network.ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e" Workload="localhost-k8s-calico--kube--controllers--dcb74df--bvbvk-eth0" Jan 30 13:47:30.185513 containerd[1460]: 2025-01-30 13:47:29.885 [INFO][4222] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e" HandleID="k8s-pod-network.ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e" Workload="localhost-k8s-calico--kube--controllers--dcb74df--bvbvk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051310), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-dcb74df-bvbvk", "timestamp":"2025-01-30 13:47:29.823799051 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:47:30.185513 containerd[1460]: 2025-01-30 13:47:29.885 [INFO][4222] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:47:30.185513 containerd[1460]: 2025-01-30 13:47:29.885 [INFO][4222] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:47:30.185513 containerd[1460]: 2025-01-30 13:47:29.885 [INFO][4222] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:47:30.185513 containerd[1460]: 2025-01-30 13:47:29.888 [INFO][4222] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e" host="localhost" Jan 30 13:47:30.185513 containerd[1460]: 2025-01-30 13:47:30.127 [INFO][4222] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:47:30.185513 containerd[1460]: 2025-01-30 13:47:30.132 [INFO][4222] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:47:30.185513 containerd[1460]: 2025-01-30 13:47:30.133 [INFO][4222] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:47:30.185513 containerd[1460]: 2025-01-30 13:47:30.151 [INFO][4222] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:47:30.185513 containerd[1460]: 2025-01-30 13:47:30.151 [INFO][4222] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e" host="localhost" Jan 30 13:47:30.185513 containerd[1460]: 2025-01-30 13:47:30.152 [INFO][4222] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e Jan 30 13:47:30.185513 containerd[1460]: 2025-01-30 13:47:30.157 [INFO][4222] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e" host="localhost" Jan 30 13:47:30.185513 containerd[1460]: 2025-01-30 13:47:30.162 [INFO][4222] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e" host="localhost" Jan 30 13:47:30.185513 containerd[1460]: 2025-01-30 13:47:30.162 [INFO][4222] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e" host="localhost" Jan 30 13:47:30.185513 containerd[1460]: 2025-01-30 13:47:30.162 [INFO][4222] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:47:30.185513 containerd[1460]: 2025-01-30 13:47:30.162 [INFO][4222] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e" HandleID="k8s-pod-network.ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e" Workload="localhost-k8s-calico--kube--controllers--dcb74df--bvbvk-eth0" Jan 30 13:47:30.186164 containerd[1460]: 2025-01-30 13:47:30.166 [INFO][4208] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e" Namespace="calico-system" Pod="calico-kube-controllers-dcb74df-bvbvk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dcb74df--bvbvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dcb74df--bvbvk-eth0", GenerateName:"calico-kube-controllers-dcb74df-", Namespace:"calico-system", SelfLink:"", UID:"77b64aea-0119-4203-b1a4-d349995e60a1", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 46, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dcb74df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-dcb74df-bvbvk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8d98e84fc8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:47:30.186164 containerd[1460]: 2025-01-30 13:47:30.166 [INFO][4208] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e" Namespace="calico-system" Pod="calico-kube-controllers-dcb74df-bvbvk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dcb74df--bvbvk-eth0" Jan 30 13:47:30.186164 containerd[1460]: 2025-01-30 13:47:30.166 [INFO][4208] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d98e84fc8b ContainerID="ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e" Namespace="calico-system" Pod="calico-kube-controllers-dcb74df-bvbvk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dcb74df--bvbvk-eth0" Jan 30 13:47:30.186164 containerd[1460]: 2025-01-30 13:47:30.169 [INFO][4208] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e" Namespace="calico-system" Pod="calico-kube-controllers-dcb74df-bvbvk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dcb74df--bvbvk-eth0" Jan 30 13:47:30.186164 containerd[1460]: 2025-01-30 13:47:30.170 [INFO][4208] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e" Namespace="calico-system" Pod="calico-kube-controllers-dcb74df-bvbvk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dcb74df--bvbvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dcb74df--bvbvk-eth0", GenerateName:"calico-kube-controllers-dcb74df-", Namespace:"calico-system", SelfLink:"", UID:"77b64aea-0119-4203-b1a4-d349995e60a1", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 46, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dcb74df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e", Pod:"calico-kube-controllers-dcb74df-bvbvk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8d98e84fc8b", MAC:"4a:94:ae:25:c8:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:47:30.186164 containerd[1460]: 2025-01-30 13:47:30.181 [INFO][4208] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e" Namespace="calico-system" Pod="calico-kube-controllers-dcb74df-bvbvk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dcb74df--bvbvk-eth0" Jan 30 13:47:30.210699 containerd[1460]: time="2025-01-30T13:47:30.210591641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:47:30.210699 containerd[1460]: time="2025-01-30T13:47:30.210655632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:47:30.210699 containerd[1460]: time="2025-01-30T13:47:30.210674929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:30.210983 containerd[1460]: time="2025-01-30T13:47:30.210770337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:30.227116 systemd[1]: run-containerd-runc-k8s.io-ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e-runc.BhGYDt.mount: Deactivated successfully. Jan 30 13:47:30.237156 systemd[1]: Started cri-containerd-ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e.scope - libcontainer container ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e. 
Jan 30 13:47:30.250446 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:47:30.274969 containerd[1460]: time="2025-01-30T13:47:30.274882759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dcb74df-bvbvk,Uid:77b64aea-0119-4203-b1a4-d349995e60a1,Namespace:calico-system,Attempt:1,} returns sandbox id \"ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e\"" Jan 30 13:47:31.092464 containerd[1460]: time="2025-01-30T13:47:31.092400002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:31.093307 containerd[1460]: time="2025-01-30T13:47:31.093268433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 13:47:31.094788 containerd[1460]: time="2025-01-30T13:47:31.094723023Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:31.096868 containerd[1460]: time="2025-01-30T13:47:31.096837033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:31.097490 containerd[1460]: time="2025-01-30T13:47:31.097444894Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.38166379s" Jan 30 13:47:31.097524 containerd[1460]: time="2025-01-30T13:47:31.097489317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:47:31.099090 containerd[1460]: time="2025-01-30T13:47:31.099062001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 13:47:31.099841 containerd[1460]: time="2025-01-30T13:47:31.099784066Z" level=info msg="CreateContainer within sandbox \"d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:47:31.133005 containerd[1460]: time="2025-01-30T13:47:31.132946098Z" level=info msg="CreateContainer within sandbox \"d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e40d89538972e4609b5f2553f1dfc8ffd56959706dbddb5dbb51252baeaca137\"" Jan 30 13:47:31.133634 containerd[1460]: time="2025-01-30T13:47:31.133419657Z" level=info msg="StartContainer for \"e40d89538972e4609b5f2553f1dfc8ffd56959706dbddb5dbb51252baeaca137\"" Jan 30 13:47:31.165121 systemd[1]: Started cri-containerd-e40d89538972e4609b5f2553f1dfc8ffd56959706dbddb5dbb51252baeaca137.scope - libcontainer container e40d89538972e4609b5f2553f1dfc8ffd56959706dbddb5dbb51252baeaca137. 
Jan 30 13:47:31.193024 containerd[1460]: time="2025-01-30T13:47:31.192958713Z" level=info msg="StartContainer for \"e40d89538972e4609b5f2553f1dfc8ffd56959706dbddb5dbb51252baeaca137\" returns successfully" Jan 30 13:47:31.313237 systemd-networkd[1402]: cali8d98e84fc8b: Gained IPv6LL Jan 30 13:47:31.569191 systemd-networkd[1402]: cali937ff0a1d68: Gained IPv6LL Jan 30 13:47:31.591192 containerd[1460]: time="2025-01-30T13:47:31.591149742Z" level=info msg="StopPodSandbox for \"c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4\"" Jan 30 13:47:31.592619 containerd[1460]: time="2025-01-30T13:47:31.592585367Z" level=info msg="StopPodSandbox for \"80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac\"" Jan 30 13:47:31.593488 containerd[1460]: time="2025-01-30T13:47:31.593371894Z" level=info msg="StopPodSandbox for \"e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8\"" Jan 30 13:47:31.680639 containerd[1460]: 2025-01-30 13:47:31.638 [INFO][4359] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" Jan 30 13:47:31.680639 containerd[1460]: 2025-01-30 13:47:31.639 [INFO][4359] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" iface="eth0" netns="/var/run/netns/cni-db8b8272-0149-0f2e-f24c-c73a5931295d" Jan 30 13:47:31.680639 containerd[1460]: 2025-01-30 13:47:31.639 [INFO][4359] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" iface="eth0" netns="/var/run/netns/cni-db8b8272-0149-0f2e-f24c-c73a5931295d" Jan 30 13:47:31.680639 containerd[1460]: 2025-01-30 13:47:31.639 [INFO][4359] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" iface="eth0" netns="/var/run/netns/cni-db8b8272-0149-0f2e-f24c-c73a5931295d" Jan 30 13:47:31.680639 containerd[1460]: 2025-01-30 13:47:31.639 [INFO][4359] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" Jan 30 13:47:31.680639 containerd[1460]: 2025-01-30 13:47:31.639 [INFO][4359] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" Jan 30 13:47:31.680639 containerd[1460]: 2025-01-30 13:47:31.668 [INFO][4390] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" HandleID="k8s-pod-network.c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" Workload="localhost-k8s-coredns--668d6bf9bc--x2bcp-eth0" Jan 30 13:47:31.680639 containerd[1460]: 2025-01-30 13:47:31.668 [INFO][4390] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:47:31.680639 containerd[1460]: 2025-01-30 13:47:31.668 [INFO][4390] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:47:31.680639 containerd[1460]: 2025-01-30 13:47:31.673 [WARNING][4390] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" HandleID="k8s-pod-network.c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" Workload="localhost-k8s-coredns--668d6bf9bc--x2bcp-eth0" Jan 30 13:47:31.680639 containerd[1460]: 2025-01-30 13:47:31.673 [INFO][4390] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" HandleID="k8s-pod-network.c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" Workload="localhost-k8s-coredns--668d6bf9bc--x2bcp-eth0" Jan 30 13:47:31.680639 containerd[1460]: 2025-01-30 13:47:31.676 [INFO][4390] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:47:31.680639 containerd[1460]: 2025-01-30 13:47:31.678 [INFO][4359] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" Jan 30 13:47:31.681319 containerd[1460]: time="2025-01-30T13:47:31.681290643Z" level=info msg="TearDown network for sandbox \"c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4\" successfully" Jan 30 13:47:31.681377 containerd[1460]: time="2025-01-30T13:47:31.681365142Z" level=info msg="StopPodSandbox for \"c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4\" returns successfully" Jan 30 13:47:31.682669 kubelet[2499]: E0130 13:47:31.682601 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:31.683564 containerd[1460]: time="2025-01-30T13:47:31.683403750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x2bcp,Uid:7f34dd70-d328-492f-8c87-2756a28b76b5,Namespace:kube-system,Attempt:1,}" Jan 30 13:47:31.684979 systemd[1]: run-netns-cni\x2ddb8b8272\x2d0149\x2d0f2e\x2df24c\x2dc73a5931295d.mount: Deactivated successfully. Jan 30 13:47:31.693275 containerd[1460]: 2025-01-30 13:47:31.644 [INFO][4373] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" Jan 30 13:47:31.693275 containerd[1460]: 2025-01-30 13:47:31.644 [INFO][4373] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" iface="eth0" netns="/var/run/netns/cni-a276c0bf-371d-a6e7-f8ba-14d7585a304a" Jan 30 13:47:31.693275 containerd[1460]: 2025-01-30 13:47:31.645 [INFO][4373] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" iface="eth0" netns="/var/run/netns/cni-a276c0bf-371d-a6e7-f8ba-14d7585a304a" Jan 30 13:47:31.693275 containerd[1460]: 2025-01-30 13:47:31.645 [INFO][4373] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" iface="eth0" netns="/var/run/netns/cni-a276c0bf-371d-a6e7-f8ba-14d7585a304a" Jan 30 13:47:31.693275 containerd[1460]: 2025-01-30 13:47:31.646 [INFO][4373] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" Jan 30 13:47:31.693275 containerd[1460]: 2025-01-30 13:47:31.646 [INFO][4373] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" Jan 30 13:47:31.693275 containerd[1460]: 2025-01-30 13:47:31.678 [INFO][4398] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" HandleID="k8s-pod-network.e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" Workload="localhost-k8s-coredns--668d6bf9bc--wtmtq-eth0" Jan 30 13:47:31.693275 containerd[1460]: 2025-01-30 13:47:31.678 [INFO][4398] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:47:31.693275 containerd[1460]: 2025-01-30 13:47:31.678 [INFO][4398] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:47:31.693275 containerd[1460]: 2025-01-30 13:47:31.685 [WARNING][4398] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" HandleID="k8s-pod-network.e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" Workload="localhost-k8s-coredns--668d6bf9bc--wtmtq-eth0" Jan 30 13:47:31.693275 containerd[1460]: 2025-01-30 13:47:31.685 [INFO][4398] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" HandleID="k8s-pod-network.e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" Workload="localhost-k8s-coredns--668d6bf9bc--wtmtq-eth0" Jan 30 13:47:31.693275 containerd[1460]: 2025-01-30 13:47:31.688 [INFO][4398] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:47:31.693275 containerd[1460]: 2025-01-30 13:47:31.691 [INFO][4373] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" Jan 30 13:47:31.693662 containerd[1460]: time="2025-01-30T13:47:31.693580527Z" level=info msg="TearDown network for sandbox \"e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8\" successfully" Jan 30 13:47:31.693662 containerd[1460]: time="2025-01-30T13:47:31.693607477Z" level=info msg="StopPodSandbox for \"e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8\" returns successfully" Jan 30 13:47:31.694051 kubelet[2499]: E0130 13:47:31.694016 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:31.694902 containerd[1460]: time="2025-01-30T13:47:31.694874707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wtmtq,Uid:84baafcc-c8f4-413e-80cf-ae1a5f5e4140,Namespace:kube-system,Attempt:1,}" Jan 30 13:47:31.700894 containerd[1460]: 2025-01-30 13:47:31.651 [INFO][4374] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" Jan 30 13:47:31.700894 containerd[1460]: 2025-01-30 13:47:31.651 [INFO][4374] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" iface="eth0" netns="/var/run/netns/cni-700c4274-6e63-e2b2-57fb-52ab6944e987" Jan 30 13:47:31.700894 containerd[1460]: 2025-01-30 13:47:31.651 [INFO][4374] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" iface="eth0" netns="/var/run/netns/cni-700c4274-6e63-e2b2-57fb-52ab6944e987" Jan 30 13:47:31.700894 containerd[1460]: 2025-01-30 13:47:31.652 [INFO][4374] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" iface="eth0" netns="/var/run/netns/cni-700c4274-6e63-e2b2-57fb-52ab6944e987" Jan 30 13:47:31.700894 containerd[1460]: 2025-01-30 13:47:31.652 [INFO][4374] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" Jan 30 13:47:31.700894 containerd[1460]: 2025-01-30 13:47:31.652 [INFO][4374] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" Jan 30 13:47:31.700894 containerd[1460]: 2025-01-30 13:47:31.687 [INFO][4403] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" HandleID="k8s-pod-network.80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" Workload="localhost-k8s-calico--apiserver--6589f8c55--wzzjd-eth0" Jan 30 13:47:31.700894 containerd[1460]: 2025-01-30 13:47:31.687 [INFO][4403] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:47:31.700894 containerd[1460]: 2025-01-30 13:47:31.688 [INFO][4403] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:47:31.700894 containerd[1460]: 2025-01-30 13:47:31.693 [WARNING][4403] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" HandleID="k8s-pod-network.80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" Workload="localhost-k8s-calico--apiserver--6589f8c55--wzzjd-eth0" Jan 30 13:47:31.700894 containerd[1460]: 2025-01-30 13:47:31.693 [INFO][4403] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" HandleID="k8s-pod-network.80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" Workload="localhost-k8s-calico--apiserver--6589f8c55--wzzjd-eth0" Jan 30 13:47:31.700894 containerd[1460]: 2025-01-30 13:47:31.695 [INFO][4403] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:47:31.700894 containerd[1460]: 2025-01-30 13:47:31.698 [INFO][4374] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" Jan 30 13:47:31.701246 containerd[1460]: time="2025-01-30T13:47:31.701134679Z" level=info msg="TearDown network for sandbox \"80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac\" successfully" Jan 30 13:47:31.701246 containerd[1460]: time="2025-01-30T13:47:31.701161680Z" level=info msg="StopPodSandbox for \"80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac\" returns successfully" Jan 30 13:47:31.701860 containerd[1460]: time="2025-01-30T13:47:31.701679703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6589f8c55-wzzjd,Uid:b735763f-2a7c-4c9a-9b44-d3680f2a86f5,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:47:31.833663 systemd-networkd[1402]: cali0cd220bb0ff: Link UP Jan 30 13:47:31.835734 systemd-networkd[1402]: cali0cd220bb0ff: Gained carrier Jan 30 13:47:31.852468 containerd[1460]: 2025-01-30 13:47:31.732 [INFO][4415] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--x2bcp-eth0 coredns-668d6bf9bc- kube-system 7f34dd70-d328-492f-8c87-2756a28b76b5 889 0 2025-01-30 13:46:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-x2bcp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0cd220bb0ff [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321" Namespace="kube-system" Pod="coredns-668d6bf9bc-x2bcp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x2bcp-" Jan 30 13:47:31.852468 containerd[1460]: 2025-01-30 13:47:31.732 [INFO][4415] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321" Namespace="kube-system" Pod="coredns-668d6bf9bc-x2bcp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x2bcp-eth0" Jan 30 13:47:31.852468 containerd[1460]: 2025-01-30 13:47:31.775 [INFO][4440] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321" HandleID="k8s-pod-network.839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321" Workload="localhost-k8s-coredns--668d6bf9bc--x2bcp-eth0" Jan 30 13:47:31.852468 containerd[1460]: 2025-01-30 13:47:31.784 [INFO][4440] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321" HandleID="k8s-pod-network.839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321" Workload="localhost-k8s-coredns--668d6bf9bc--x2bcp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019ceb0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-x2bcp", "timestamp":"2025-01-30 13:47:31.775205899 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:47:31.852468 containerd[1460]: 2025-01-30 13:47:31.784 [INFO][4440] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 13:47:31.852468 containerd[1460]: 2025-01-30 13:47:31.784 [INFO][4440] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:47:31.852468 containerd[1460]: 2025-01-30 13:47:31.784 [INFO][4440] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:47:31.852468 containerd[1460]: 2025-01-30 13:47:31.787 [INFO][4440] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321" host="localhost" Jan 30 13:47:31.852468 containerd[1460]: 2025-01-30 13:47:31.794 [INFO][4440] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:47:31.852468 containerd[1460]: 2025-01-30 13:47:31.799 [INFO][4440] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:47:31.852468 containerd[1460]: 2025-01-30 13:47:31.801 [INFO][4440] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:47:31.852468 containerd[1460]: 2025-01-30 13:47:31.804 [INFO][4440] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:47:31.852468 containerd[1460]: 2025-01-30 13:47:31.805 [INFO][4440] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321" host="localhost" Jan 30 13:47:31.852468 containerd[1460]: 2025-01-30 13:47:31.807 [INFO][4440] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321 Jan 30 13:47:31.852468 containerd[1460]: 2025-01-30 13:47:31.812 [INFO][4440] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321" host="localhost" Jan 30 13:47:31.852468 containerd[1460]: 2025-01-30 13:47:31.820 [INFO][4440] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321" host="localhost" Jan 30 13:47:31.852468 containerd[1460]: 2025-01-30 13:47:31.820 [INFO][4440] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321" host="localhost" Jan 30 13:47:31.852468 containerd[1460]: 2025-01-30 13:47:31.820 [INFO][4440] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:47:31.852468 containerd[1460]: 2025-01-30 13:47:31.820 [INFO][4440] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321" HandleID="k8s-pod-network.839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321" Workload="localhost-k8s-coredns--668d6bf9bc--x2bcp-eth0" Jan 30 13:47:31.853946 containerd[1460]: 2025-01-30 13:47:31.829 [INFO][4415] cni-plugin/k8s.go 386: Populated endpoint ContainerID="839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321" Namespace="kube-system" Pod="coredns-668d6bf9bc-x2bcp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x2bcp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--x2bcp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7f34dd70-d328-492f-8c87-2756a28b76b5", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 46, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-x2bcp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0cd220bb0ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:47:31.853946 containerd[1460]: 2025-01-30 13:47:31.830 [INFO][4415] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321" Namespace="kube-system" Pod="coredns-668d6bf9bc-x2bcp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x2bcp-eth0" Jan 30 13:47:31.853946 containerd[1460]: 2025-01-30 13:47:31.830 [INFO][4415] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0cd220bb0ff ContainerID="839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321" Namespace="kube-system" Pod="coredns-668d6bf9bc-x2bcp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x2bcp-eth0" Jan 30 13:47:31.853946 containerd[1460]: 2025-01-30 13:47:31.837 [INFO][4415] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321" Namespace="kube-system" Pod="coredns-668d6bf9bc-x2bcp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x2bcp-eth0" Jan 30 13:47:31.853946 containerd[1460]: 2025-01-30 13:47:31.838 
[INFO][4415] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321" Namespace="kube-system" Pod="coredns-668d6bf9bc-x2bcp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x2bcp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--x2bcp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7f34dd70-d328-492f-8c87-2756a28b76b5", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 46, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321", Pod:"coredns-668d6bf9bc-x2bcp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0cd220bb0ff", MAC:"be:b3:75:a0:37:55", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:47:31.853946 containerd[1460]: 2025-01-30 13:47:31.849 [INFO][4415] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321" Namespace="kube-system" Pod="coredns-668d6bf9bc-x2bcp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x2bcp-eth0" Jan 30 13:47:31.877745 containerd[1460]: time="2025-01-30T13:47:31.877592372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:47:31.877745 containerd[1460]: time="2025-01-30T13:47:31.877698361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:47:31.877957 containerd[1460]: time="2025-01-30T13:47:31.877735580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:31.877957 containerd[1460]: time="2025-01-30T13:47:31.877846629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:31.903578 systemd[1]: Started cri-containerd-839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321.scope - libcontainer container 839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321. 
Jan 30 13:47:31.917250 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:47:31.927151 systemd-networkd[1402]: calia88ea086f72: Link UP Jan 30 13:47:31.928838 systemd-networkd[1402]: calia88ea086f72: Gained carrier Jan 30 13:47:31.944359 containerd[1460]: 2025-01-30 13:47:31.800 [INFO][4448] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6589f8c55--wzzjd-eth0 calico-apiserver-6589f8c55- calico-apiserver b735763f-2a7c-4c9a-9b44-d3680f2a86f5 891 0 2025-01-30 13:46:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6589f8c55 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6589f8c55-wzzjd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia88ea086f72 [] []}} ContainerID="d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176" Namespace="calico-apiserver" Pod="calico-apiserver-6589f8c55-wzzjd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6589f8c55--wzzjd-" Jan 30 13:47:31.944359 containerd[1460]: 2025-01-30 13:47:31.800 [INFO][4448] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176" Namespace="calico-apiserver" Pod="calico-apiserver-6589f8c55-wzzjd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6589f8c55--wzzjd-eth0" Jan 30 13:47:31.944359 containerd[1460]: 2025-01-30 13:47:31.844 [INFO][4472] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176" HandleID="k8s-pod-network.d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176" Workload="localhost-k8s-calico--apiserver--6589f8c55--wzzjd-eth0" Jan 30 13:47:31.944359 containerd[1460]: 2025-01-30 13:47:31.886 [INFO][4472] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176" HandleID="k8s-pod-network.d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176" Workload="localhost-k8s-calico--apiserver--6589f8c55--wzzjd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00070a7b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6589f8c55-wzzjd", "timestamp":"2025-01-30 13:47:31.844162807 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:47:31.944359 containerd[1460]: 2025-01-30 13:47:31.886 [INFO][4472] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:47:31.944359 containerd[1460]: 2025-01-30 13:47:31.886 [INFO][4472] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:47:31.944359 containerd[1460]: 2025-01-30 13:47:31.886 [INFO][4472] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:47:31.944359 containerd[1460]: 2025-01-30 13:47:31.888 [INFO][4472] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176" host="localhost" Jan 30 13:47:31.944359 containerd[1460]: 2025-01-30 13:47:31.892 [INFO][4472] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:47:31.944359 containerd[1460]: 2025-01-30 13:47:31.900 [INFO][4472] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:47:31.944359 containerd[1460]: 2025-01-30 13:47:31.901 [INFO][4472] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:47:31.944359 containerd[1460]: 2025-01-30 13:47:31.903 [INFO][4472] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:47:31.944359 containerd[1460]: 2025-01-30 13:47:31.903 [INFO][4472] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176" host="localhost" Jan 30 13:47:31.944359 containerd[1460]: 2025-01-30 13:47:31.905 [INFO][4472] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176 Jan 30 13:47:31.944359 containerd[1460]: 2025-01-30 13:47:31.910 [INFO][4472] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176" host="localhost" Jan 30 13:47:31.944359 containerd[1460]: 2025-01-30 13:47:31.918 [INFO][4472] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176" host="localhost" Jan 30 13:47:31.944359 containerd[1460]: 2025-01-30 13:47:31.918 [INFO][4472] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176" host="localhost" Jan 30 13:47:31.944359 containerd[1460]: 2025-01-30 13:47:31.918 [INFO][4472] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:47:31.944359 containerd[1460]: 2025-01-30 13:47:31.918 [INFO][4472] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176" HandleID="k8s-pod-network.d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176" Workload="localhost-k8s-calico--apiserver--6589f8c55--wzzjd-eth0" Jan 30 13:47:31.945174 containerd[1460]: 2025-01-30 13:47:31.923 [INFO][4448] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176" Namespace="calico-apiserver" Pod="calico-apiserver-6589f8c55-wzzjd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6589f8c55--wzzjd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6589f8c55--wzzjd-eth0", GenerateName:"calico-apiserver-6589f8c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"b735763f-2a7c-4c9a-9b44-d3680f2a86f5", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 46, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6589f8c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6589f8c55-wzzjd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia88ea086f72", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:47:31.945174 containerd[1460]: 2025-01-30 13:47:31.923 [INFO][4448] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176" Namespace="calico-apiserver" Pod="calico-apiserver-6589f8c55-wzzjd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6589f8c55--wzzjd-eth0" Jan 30 13:47:31.945174 containerd[1460]: 2025-01-30 13:47:31.923 [INFO][4448] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia88ea086f72 ContainerID="d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176" Namespace="calico-apiserver" Pod="calico-apiserver-6589f8c55-wzzjd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6589f8c55--wzzjd-eth0" Jan 30 13:47:31.945174 containerd[1460]: 2025-01-30 13:47:31.928 [INFO][4448] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176" Namespace="calico-apiserver" Pod="calico-apiserver-6589f8c55-wzzjd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6589f8c55--wzzjd-eth0" Jan 30 13:47:31.945174 containerd[1460]: 2025-01-30 13:47:31.929 [INFO][4448] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176" 
Namespace="calico-apiserver" Pod="calico-apiserver-6589f8c55-wzzjd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6589f8c55--wzzjd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6589f8c55--wzzjd-eth0", GenerateName:"calico-apiserver-6589f8c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"b735763f-2a7c-4c9a-9b44-d3680f2a86f5", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 46, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6589f8c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176", Pod:"calico-apiserver-6589f8c55-wzzjd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia88ea086f72", MAC:"4e:cd:cf:1f:2b:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:47:31.945174 containerd[1460]: 2025-01-30 13:47:31.939 [INFO][4448] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176" Namespace="calico-apiserver" Pod="calico-apiserver-6589f8c55-wzzjd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6589f8c55--wzzjd-eth0" Jan 30 13:47:31.960343 containerd[1460]: time="2025-01-30T13:47:31.960162117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x2bcp,Uid:7f34dd70-d328-492f-8c87-2756a28b76b5,Namespace:kube-system,Attempt:1,} returns sandbox id \"839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321\"" Jan 30 13:47:31.963182 kubelet[2499]: E0130 13:47:31.963152 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:31.970886 containerd[1460]: time="2025-01-30T13:47:31.970835567Z" level=info msg="CreateContainer within sandbox \"839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:47:31.981889 containerd[1460]: time="2025-01-30T13:47:31.981577434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:47:31.981889 containerd[1460]: time="2025-01-30T13:47:31.981669647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:47:31.981889 containerd[1460]: time="2025-01-30T13:47:31.981684255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:31.981889 containerd[1460]: time="2025-01-30T13:47:31.981776148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:31.999557 containerd[1460]: time="2025-01-30T13:47:31.999464628Z" level=info msg="CreateContainer within sandbox \"839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3c3053828972046e85814a9e89ee2241b064f1f6c5554eb0521fdca53872e1e1\"" Jan 30 13:47:32.001677 containerd[1460]: time="2025-01-30T13:47:32.000947382Z" level=info msg="StartContainer for \"3c3053828972046e85814a9e89ee2241b064f1f6c5554eb0521fdca53872e1e1\"" Jan 30 13:47:32.004508 systemd[1]: Started cri-containerd-d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176.scope - libcontainer container d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176. Jan 30 13:47:32.021338 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:47:32.039149 systemd[1]: Started cri-containerd-3c3053828972046e85814a9e89ee2241b064f1f6c5554eb0521fdca53872e1e1.scope - libcontainer container 3c3053828972046e85814a9e89ee2241b064f1f6c5554eb0521fdca53872e1e1. Jan 30 13:47:32.041142 systemd-networkd[1402]: calie196eb693ea: Link UP Jan 30 13:47:32.041413 systemd-networkd[1402]: calie196eb693ea: Gained carrier Jan 30 13:47:32.064138 containerd[1460]: 2025-01-30 13:47:31.760 [INFO][4429] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--wtmtq-eth0 coredns-668d6bf9bc- kube-system 84baafcc-c8f4-413e-80cf-ae1a5f5e4140 890 0 2025-01-30 13:46:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-wtmtq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie196eb693ea [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f" Namespace="kube-system" Pod="coredns-668d6bf9bc-wtmtq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wtmtq-" Jan 30 13:47:32.064138 containerd[1460]: 2025-01-30 13:47:31.760 [INFO][4429] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f" Namespace="kube-system" Pod="coredns-668d6bf9bc-wtmtq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wtmtq-eth0" Jan 30 13:47:32.064138 containerd[1460]: 2025-01-30 13:47:31.810 [INFO][4463] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f" HandleID="k8s-pod-network.bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f" Workload="localhost-k8s-coredns--668d6bf9bc--wtmtq-eth0" Jan 30 13:47:32.064138 containerd[1460]: 2025-01-30 13:47:31.886 [INFO][4463] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f" HandleID="k8s-pod-network.bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f" Workload="localhost-k8s-coredns--668d6bf9bc--wtmtq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e05c0), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-wtmtq", "timestamp":"2025-01-30 13:47:31.810414443 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:47:32.064138 containerd[1460]: 2025-01-30 13:47:31.886 [INFO][4463] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:47:32.064138 containerd[1460]: 2025-01-30 13:47:31.918 [INFO][4463] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:47:32.064138 containerd[1460]: 2025-01-30 13:47:31.919 [INFO][4463] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:47:32.064138 containerd[1460]: 2025-01-30 13:47:31.989 [INFO][4463] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f" host="localhost" Jan 30 13:47:32.064138 containerd[1460]: 2025-01-30 13:47:31.997 [INFO][4463] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:47:32.064138 containerd[1460]: 2025-01-30 13:47:32.008 [INFO][4463] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:47:32.064138 containerd[1460]: 2025-01-30 13:47:32.013 [INFO][4463] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:47:32.064138 containerd[1460]: 2025-01-30 13:47:32.017 [INFO][4463] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:47:32.064138 containerd[1460]: 2025-01-30 13:47:32.017 [INFO][4463] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f" host="localhost" Jan 30 13:47:32.064138 containerd[1460]: 2025-01-30 13:47:32.019 [INFO][4463] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f Jan 30 13:47:32.064138 containerd[1460]: 2025-01-30 13:47:32.024 [INFO][4463] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f" host="localhost" Jan 30 13:47:32.064138 containerd[1460]: 2025-01-30 13:47:32.033 [INFO][4463] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f" host="localhost" Jan 30 13:47:32.064138 containerd[1460]: 2025-01-30 13:47:32.033 [INFO][4463] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f" host="localhost" Jan 30 13:47:32.064138 containerd[1460]: 2025-01-30 13:47:32.033 [INFO][4463] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:47:32.064138 containerd[1460]: 2025-01-30 13:47:32.033 [INFO][4463] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f" HandleID="k8s-pod-network.bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f" Workload="localhost-k8s-coredns--668d6bf9bc--wtmtq-eth0" Jan 30 13:47:32.064771 containerd[1460]: 2025-01-30 13:47:32.037 [INFO][4429] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f" Namespace="kube-system" Pod="coredns-668d6bf9bc-wtmtq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wtmtq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wtmtq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"84baafcc-c8f4-413e-80cf-ae1a5f5e4140", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 46, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-wtmtq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie196eb693ea", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:47:32.064771 containerd[1460]: 2025-01-30 13:47:32.037 [INFO][4429] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f" Namespace="kube-system" Pod="coredns-668d6bf9bc-wtmtq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wtmtq-eth0" Jan 30 13:47:32.064771 containerd[1460]: 2025-01-30 13:47:32.037 [INFO][4429] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie196eb693ea ContainerID="bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f" Namespace="kube-system" Pod="coredns-668d6bf9bc-wtmtq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wtmtq-eth0" Jan 30 13:47:32.064771 containerd[1460]: 2025-01-30 13:47:32.041 [INFO][4429] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f" Namespace="kube-system" Pod="coredns-668d6bf9bc-wtmtq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wtmtq-eth0" Jan 30 13:47:32.064771 containerd[1460]: 2025-01-30 13:47:32.042 
[INFO][4429] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f" Namespace="kube-system" Pod="coredns-668d6bf9bc-wtmtq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wtmtq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wtmtq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"84baafcc-c8f4-413e-80cf-ae1a5f5e4140", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 46, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f", Pod:"coredns-668d6bf9bc-wtmtq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie196eb693ea", MAC:"ee:d8:06:d1:0f:29", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:47:32.064771 containerd[1460]: 2025-01-30 13:47:32.059 [INFO][4429] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f" Namespace="kube-system" Pod="coredns-668d6bf9bc-wtmtq" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wtmtq-eth0" Jan 30 13:47:32.066229 containerd[1460]: time="2025-01-30T13:47:32.066109857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6589f8c55-wzzjd,Uid:b735763f-2a7c-4c9a-9b44-d3680f2a86f5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176\"" Jan 30 13:47:32.085545 containerd[1460]: time="2025-01-30T13:47:32.085387879Z" level=info msg="StartContainer for \"3c3053828972046e85814a9e89ee2241b064f1f6c5554eb0521fdca53872e1e1\" returns successfully" Jan 30 13:47:32.094704 containerd[1460]: time="2025-01-30T13:47:32.094594423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:47:32.094704 containerd[1460]: time="2025-01-30T13:47:32.094663473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:47:32.095119 containerd[1460]: time="2025-01-30T13:47:32.094692397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:32.095119 containerd[1460]: time="2025-01-30T13:47:32.094792194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:32.113143 systemd[1]: Started cri-containerd-bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f.scope - libcontainer container bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f. Jan 30 13:47:32.128190 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:47:32.131178 kubelet[2499]: E0130 13:47:32.130698 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:32.144323 systemd[1]: run-netns-cni\x2da276c0bf\x2d371d\x2da6e7\x2df8ba\x2d14d7585a304a.mount: Deactivated successfully. Jan 30 13:47:32.144432 systemd[1]: run-netns-cni\x2d700c4274\x2d6e63\x2de2b2\x2d57fb\x2d52ab6944e987.mount: Deactivated successfully. Jan 30 13:47:32.175448 containerd[1460]: time="2025-01-30T13:47:32.175401508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wtmtq,Uid:84baafcc-c8f4-413e-80cf-ae1a5f5e4140,Namespace:kube-system,Attempt:1,} returns sandbox id \"bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f\"" Jan 30 13:47:32.176311 kubelet[2499]: E0130 13:47:32.176253 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:32.178537 containerd[1460]: time="2025-01-30T13:47:32.178478625Z" level=info msg="CreateContainer within sandbox \"bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:47:32.592207 containerd[1460]: time="2025-01-30T13:47:32.591333400Z" level=info msg="StopPodSandbox for \"fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a\"" Jan 30 13:47:32.673792 kubelet[2499]: I0130 13:47:32.673736 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-x2bcp" podStartSLOduration=42.673717956 podStartE2EDuration="42.673717956s" podCreationTimestamp="2025-01-30 13:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:47:32.152138284 +0000 UTC m=+47.647059865" watchObservedRunningTime="2025-01-30 13:47:32.673717956 +0000 UTC m=+48.168639537" Jan 30 13:47:32.725053 systemd[1]: Started sshd@10-10.0.0.69:22-10.0.0.1:56682.service - OpenSSH per-connection server daemon (10.0.0.1:56682). Jan 30 13:47:32.776020 sshd[4726]: Accepted publickey for core from 10.0.0.1 port 56682 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:32.777787 sshd[4726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:32.781681 systemd-logind[1449]: New session 11 of user core. Jan 30 13:47:32.790178 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 30 13:47:32.795513 containerd[1460]: 2025-01-30 13:47:32.673 [INFO][4703] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" Jan 30 13:47:32.795513 containerd[1460]: 2025-01-30 13:47:32.673 [INFO][4703] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" iface="eth0" netns="/var/run/netns/cni-1b8499a3-2970-ed88-7f58-2be7e035c548" Jan 30 13:47:32.795513 containerd[1460]: 2025-01-30 13:47:32.674 [INFO][4703] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" iface="eth0" netns="/var/run/netns/cni-1b8499a3-2970-ed88-7f58-2be7e035c548" Jan 30 13:47:32.795513 containerd[1460]: 2025-01-30 13:47:32.674 [INFO][4703] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" iface="eth0" netns="/var/run/netns/cni-1b8499a3-2970-ed88-7f58-2be7e035c548" Jan 30 13:47:32.795513 containerd[1460]: 2025-01-30 13:47:32.674 [INFO][4703] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" Jan 30 13:47:32.795513 containerd[1460]: 2025-01-30 13:47:32.674 [INFO][4703] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" Jan 30 13:47:32.795513 containerd[1460]: 2025-01-30 13:47:32.710 [INFO][4716] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" HandleID="k8s-pod-network.fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" Workload="localhost-k8s-calico--apiserver--6589f8c55--lv4xg-eth0" Jan 30 13:47:32.795513 containerd[1460]: 2025-01-30 13:47:32.710 [INFO][4716] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:47:32.795513 containerd[1460]: 2025-01-30 13:47:32.710 [INFO][4716] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:47:32.795513 containerd[1460]: 2025-01-30 13:47:32.788 [WARNING][4716] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" HandleID="k8s-pod-network.fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" Workload="localhost-k8s-calico--apiserver--6589f8c55--lv4xg-eth0" Jan 30 13:47:32.795513 containerd[1460]: 2025-01-30 13:47:32.788 [INFO][4716] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" HandleID="k8s-pod-network.fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" Workload="localhost-k8s-calico--apiserver--6589f8c55--lv4xg-eth0" Jan 30 13:47:32.795513 containerd[1460]: 2025-01-30 13:47:32.790 [INFO][4716] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:47:32.795513 containerd[1460]: 2025-01-30 13:47:32.793 [INFO][4703] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" Jan 30 13:47:32.795981 containerd[1460]: time="2025-01-30T13:47:32.795775205Z" level=info msg="TearDown network for sandbox \"fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a\" successfully" Jan 30 13:47:32.795981 containerd[1460]: time="2025-01-30T13:47:32.795805181Z" level=info msg="StopPodSandbox for \"fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a\" returns successfully" Jan 30 13:47:32.796586 containerd[1460]: time="2025-01-30T13:47:32.796400008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6589f8c55-lv4xg,Uid:313a082c-52c3-48c2-8128-4216401a9378,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:47:32.798445 systemd[1]: run-netns-cni\x2d1b8499a3\x2d2970\x2ded88\x2d7f58\x2d2be7e035c548.mount: Deactivated successfully. Jan 30 13:47:33.033022 sshd[4726]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:33.044566 systemd[1]: sshd@10-10.0.0.69:22-10.0.0.1:56682.service: Deactivated successfully. Jan 30 13:47:33.046884 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:47:33.048458 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:47:33.050457 containerd[1460]: time="2025-01-30T13:47:33.049363820Z" level=info msg="CreateContainer within sandbox \"bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b1b0a71c43f2807f2e074b252ee160debcf2b665d39b7c72b6074d6569281aee\"" Jan 30 13:47:33.051058 containerd[1460]: time="2025-01-30T13:47:33.050878153Z" level=info msg="StartContainer for \"b1b0a71c43f2807f2e074b252ee160debcf2b665d39b7c72b6074d6569281aee\"" Jan 30 13:47:33.057728 systemd[1]: Started sshd@11-10.0.0.69:22-10.0.0.1:56690.service - OpenSSH per-connection server daemon (10.0.0.1:56690). Jan 30 13:47:33.073761 systemd-logind[1449]: Removed session 11. Jan 30 13:47:33.088191 systemd[1]: Started cri-containerd-b1b0a71c43f2807f2e074b252ee160debcf2b665d39b7c72b6074d6569281aee.scope - libcontainer container b1b0a71c43f2807f2e074b252ee160debcf2b665d39b7c72b6074d6569281aee. Jan 30 13:47:33.098017 sshd[4745]: Accepted publickey for core from 10.0.0.1 port 56690 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:33.098797 sshd[4745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:33.109278 systemd-logind[1449]: New session 12 of user core. Jan 30 13:47:33.114642 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 30 13:47:33.175022 containerd[1460]: time="2025-01-30T13:47:33.174924929Z" level=info msg="StartContainer for \"b1b0a71c43f2807f2e074b252ee160debcf2b665d39b7c72b6074d6569281aee\" returns successfully" Jan 30 13:47:33.183723 kubelet[2499]: E0130 13:47:33.182978 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:33.208440 systemd-networkd[1402]: cali7a7a84138b6: Link UP Jan 30 13:47:33.214442 systemd-networkd[1402]: cali7a7a84138b6: Gained carrier Jan 30 13:47:33.257077 containerd[1460]: 2025-01-30 13:47:33.109 [INFO][4755] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6589f8c55--lv4xg-eth0 calico-apiserver-6589f8c55- calico-apiserver 313a082c-52c3-48c2-8128-4216401a9378 918 0 2025-01-30 13:46:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6589f8c55 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6589f8c55-lv4xg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7a7a84138b6 [] []}} ContainerID="fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc" Namespace="calico-apiserver" Pod="calico-apiserver-6589f8c55-lv4xg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6589f8c55--lv4xg-" Jan 30 13:47:33.257077 containerd[1460]: 2025-01-30 13:47:33.109 [INFO][4755] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc" Namespace="calico-apiserver" Pod="calico-apiserver-6589f8c55-lv4xg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6589f8c55--lv4xg-eth0" Jan 30 13:47:33.257077 containerd[1460]: 2025-01-30 13:47:33.140 [INFO][4787] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc" HandleID="k8s-pod-network.fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc" Workload="localhost-k8s-calico--apiserver--6589f8c55--lv4xg-eth0" Jan 30 13:47:33.257077 containerd[1460]: 2025-01-30 13:47:33.149 [INFO][4787] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc" HandleID="k8s-pod-network.fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc" Workload="localhost-k8s-calico--apiserver--6589f8c55--lv4xg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dde20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6589f8c55-lv4xg", "timestamp":"2025-01-30 13:47:33.140908382 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:47:33.257077 containerd[1460]: 2025-01-30 13:47:33.149 [INFO][4787] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:47:33.257077 containerd[1460]: 2025-01-30 13:47:33.149 [INFO][4787] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:47:33.257077 containerd[1460]: 2025-01-30 13:47:33.149 [INFO][4787] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:47:33.257077 containerd[1460]: 2025-01-30 13:47:33.151 [INFO][4787] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc" host="localhost" Jan 30 13:47:33.257077 containerd[1460]: 2025-01-30 13:47:33.154 [INFO][4787] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:47:33.257077 containerd[1460]: 2025-01-30 13:47:33.157 [INFO][4787] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:47:33.257077 containerd[1460]: 2025-01-30 13:47:33.159 [INFO][4787] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:47:33.257077 containerd[1460]: 2025-01-30 13:47:33.161 [INFO][4787] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:47:33.257077 containerd[1460]: 2025-01-30 13:47:33.161 [INFO][4787] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc" host="localhost" Jan 30 13:47:33.257077 containerd[1460]: 2025-01-30 13:47:33.162 [INFO][4787] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc Jan 30 13:47:33.257077 containerd[1460]: 2025-01-30 13:47:33.175 [INFO][4787] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc" host="localhost" Jan 30 13:47:33.257077 containerd[1460]: 2025-01-30 13:47:33.193 [INFO][4787] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc" host="localhost" Jan 30 13:47:33.257077 containerd[1460]: 2025-01-30 13:47:33.193 [INFO][4787] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc" host="localhost" Jan 30 13:47:33.257077 containerd[1460]: 2025-01-30 13:47:33.193 [INFO][4787] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:47:33.257077 containerd[1460]: 2025-01-30 13:47:33.193 [INFO][4787] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc" HandleID="k8s-pod-network.fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc" Workload="localhost-k8s-calico--apiserver--6589f8c55--lv4xg-eth0" Jan 30 13:47:33.257843 containerd[1460]: 2025-01-30 13:47:33.199 [INFO][4755] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc" Namespace="calico-apiserver" Pod="calico-apiserver-6589f8c55-lv4xg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6589f8c55--lv4xg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6589f8c55--lv4xg-eth0", GenerateName:"calico-apiserver-6589f8c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"313a082c-52c3-48c2-8128-4216401a9378", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 46, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6589f8c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6589f8c55-lv4xg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a7a84138b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:47:33.257843 containerd[1460]: 2025-01-30 13:47:33.199 [INFO][4755] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc" Namespace="calico-apiserver" Pod="calico-apiserver-6589f8c55-lv4xg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6589f8c55--lv4xg-eth0" Jan 30 13:47:33.257843 containerd[1460]: 2025-01-30 13:47:33.199 [INFO][4755] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a7a84138b6 ContainerID="fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc" Namespace="calico-apiserver" Pod="calico-apiserver-6589f8c55-lv4xg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6589f8c55--lv4xg-eth0" Jan 30 13:47:33.257843 containerd[1460]: 2025-01-30 13:47:33.214 [INFO][4755] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc" Namespace="calico-apiserver" Pod="calico-apiserver-6589f8c55-lv4xg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6589f8c55--lv4xg-eth0" Jan 30 13:47:33.257843 containerd[1460]: 2025-01-30 13:47:33.215 [INFO][4755] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc" 
Namespace="calico-apiserver" Pod="calico-apiserver-6589f8c55-lv4xg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6589f8c55--lv4xg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6589f8c55--lv4xg-eth0", GenerateName:"calico-apiserver-6589f8c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"313a082c-52c3-48c2-8128-4216401a9378", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 46, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6589f8c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc", Pod:"calico-apiserver-6589f8c55-lv4xg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a7a84138b6", MAC:"6e:3b:67:9b:5d:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:47:33.257843 containerd[1460]: 2025-01-30 13:47:33.241 [INFO][4755] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc" Namespace="calico-apiserver" Pod="calico-apiserver-6589f8c55-lv4xg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6589f8c55--lv4xg-eth0" Jan 30 13:47:33.297239 systemd-networkd[1402]: cali0cd220bb0ff: Gained IPv6LL Jan 30 13:47:33.309441 containerd[1460]: time="2025-01-30T13:47:33.309295687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:47:33.309441 containerd[1460]: time="2025-01-30T13:47:33.309402979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:47:33.309441 containerd[1460]: time="2025-01-30T13:47:33.309421775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:33.311130 containerd[1460]: time="2025-01-30T13:47:33.310623781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:33.346408 systemd[1]: Started cri-containerd-fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc.scope - libcontainer container fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc. Jan 30 13:47:33.361969 sshd[4745]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:33.373359 systemd[1]: sshd@11-10.0.0.69:22-10.0.0.1:56690.service: Deactivated successfully. Jan 30 13:47:33.376650 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:47:33.380694 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. 
Jan 30 13:47:33.388376 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:47:33.390504 systemd[1]: Started sshd@12-10.0.0.69:22-10.0.0.1:56698.service - OpenSSH per-connection server daemon (10.0.0.1:56698). Jan 30 13:47:33.391552 systemd-logind[1449]: Removed session 12. Jan 30 13:47:33.419856 containerd[1460]: time="2025-01-30T13:47:33.419807446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6589f8c55-lv4xg,Uid:313a082c-52c3-48c2-8128-4216401a9378,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc\"" Jan 30 13:47:33.432803 sshd[4875]: Accepted publickey for core from 10.0.0.1 port 56698 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:33.434845 sshd[4875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:33.440125 systemd-logind[1449]: New session 13 of user core. Jan 30 13:47:33.453141 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 13:47:33.587011 sshd[4875]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:33.590497 systemd[1]: sshd@12-10.0.0.69:22-10.0.0.1:56698.service: Deactivated successfully. Jan 30 13:47:33.592592 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:47:33.594742 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:47:33.596331 systemd-logind[1449]: Removed session 13. Jan 30 13:47:33.809242 systemd-networkd[1402]: calia88ea086f72: Gained IPv6LL Jan 30 13:47:33.969190 containerd[1460]: time="2025-01-30T13:47:33.969053700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:33.970208 containerd[1460]: time="2025-01-30T13:47:33.970170807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 13:47:33.971754 containerd[1460]: time="2025-01-30T13:47:33.971719554Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:33.973706 containerd[1460]: time="2025-01-30T13:47:33.973666740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:33.974290 containerd[1460]: time="2025-01-30T13:47:33.974257769Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.875165602s" Jan 30 13:47:33.974290 containerd[1460]: time="2025-01-30T13:47:33.974284970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 13:47:33.975131 containerd[1460]: time="2025-01-30T13:47:33.975105160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:47:33.982832 containerd[1460]: 
time="2025-01-30T13:47:33.982795246Z" level=info msg="CreateContainer within sandbox \"ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 13:47:34.065140 systemd-networkd[1402]: calie196eb693ea: Gained IPv6LL Jan 30 13:47:34.185360 kubelet[2499]: E0130 13:47:34.185331 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:34.185886 kubelet[2499]: E0130 13:47:34.185331 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:34.312082 kubelet[2499]: I0130 13:47:34.309813 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wtmtq" podStartSLOduration=44.309796062 podStartE2EDuration="44.309796062s" podCreationTimestamp="2025-01-30 13:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:47:34.309277789 +0000 UTC m=+49.804199390" watchObservedRunningTime="2025-01-30 13:47:34.309796062 +0000 UTC m=+49.804717643" Jan 30 13:47:34.447531 containerd[1460]: time="2025-01-30T13:47:34.447456599Z" level=info msg="CreateContainer within sandbox \"ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d725ce9c66f3edbf8768b874def64d39eb0011adc1f5a09c75711bee8a87154b\"" Jan 30 13:47:34.448283 containerd[1460]: time="2025-01-30T13:47:34.448251802Z" level=info msg="StartContainer for \"d725ce9c66f3edbf8768b874def64d39eb0011adc1f5a09c75711bee8a87154b\"" Jan 30 13:47:34.475745 systemd[1]: run-containerd-runc-k8s.io-d725ce9c66f3edbf8768b874def64d39eb0011adc1f5a09c75711bee8a87154b-runc.MYbsHX.mount: Deactivated successfully. Jan 30 13:47:34.488157 systemd[1]: Started cri-containerd-d725ce9c66f3edbf8768b874def64d39eb0011adc1f5a09c75711bee8a87154b.scope - libcontainer container d725ce9c66f3edbf8768b874def64d39eb0011adc1f5a09c75711bee8a87154b. 
Jan 30 13:47:34.609469 containerd[1460]: time="2025-01-30T13:47:34.609308176Z" level=info msg="StartContainer for \"d725ce9c66f3edbf8768b874def64d39eb0011adc1f5a09c75711bee8a87154b\" returns successfully" Jan 30 13:47:35.188924 kubelet[2499]: E0130 13:47:35.188895 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:35.189760 kubelet[2499]: E0130 13:47:35.189746 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:35.202401 kubelet[2499]: I0130 13:47:35.202353 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-dcb74df-bvbvk" podStartSLOduration=34.50353236 podStartE2EDuration="38.202337709s" podCreationTimestamp="2025-01-30 13:46:57 +0000 UTC" firstStartedPulling="2025-01-30 13:47:30.276201275 +0000 UTC m=+45.771122856" lastFinishedPulling="2025-01-30 13:47:33.975006614 +0000 UTC m=+49.469928205" observedRunningTime="2025-01-30 13:47:35.201797937 +0000 UTC m=+50.696719518" watchObservedRunningTime="2025-01-30 13:47:35.202337709 +0000 UTC m=+50.697259290" Jan 30 13:47:35.218652 systemd-networkd[1402]: cali7a7a84138b6: Gained IPv6LL Jan 30 13:47:35.654388 containerd[1460]: time="2025-01-30T13:47:35.654329168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:35.655686 containerd[1460]: time="2025-01-30T13:47:35.655544880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 13:47:35.657300 containerd[1460]: time="2025-01-30T13:47:35.657266452Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:35.659580 containerd[1460]: time="2025-01-30T13:47:35.659538165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:35.660220 containerd[1460]: time="2025-01-30T13:47:35.660182895Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.685051666s" Jan 30 13:47:35.660220 containerd[1460]: time="2025-01-30T13:47:35.660212981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:47:35.661197 containerd[1460]: time="2025-01-30T13:47:35.661167303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:47:35.661948 containerd[1460]: time="2025-01-30T13:47:35.661925797Z" level=info msg="CreateContainer within sandbox \"d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:47:35.679631 containerd[1460]: time="2025-01-30T13:47:35.679574133Z" level=info msg="CreateContainer within sandbox \"d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bbb7604a7b0ca7afa2ad96d877b5582b58a5891fb5bd075d37ca7c04c1b6e72b\"" Jan 30 13:47:35.680262 containerd[1460]: time="2025-01-30T13:47:35.680210538Z" level=info msg="StartContainer for \"bbb7604a7b0ca7afa2ad96d877b5582b58a5891fb5bd075d37ca7c04c1b6e72b\"" Jan 30 13:47:35.718568 systemd[1]: Started cri-containerd-bbb7604a7b0ca7afa2ad96d877b5582b58a5891fb5bd075d37ca7c04c1b6e72b.scope - libcontainer container bbb7604a7b0ca7afa2ad96d877b5582b58a5891fb5bd075d37ca7c04c1b6e72b. Jan 30 13:47:35.753041 containerd[1460]: time="2025-01-30T13:47:35.752967434Z" level=info msg="StartContainer for \"bbb7604a7b0ca7afa2ad96d877b5582b58a5891fb5bd075d37ca7c04c1b6e72b\" returns successfully" Jan 30 13:47:36.027845 kubelet[2499]: I0130 13:47:36.027712 2499 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:47:36.027845 kubelet[2499]: I0130 13:47:36.027743 2499 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:47:36.192293 kubelet[2499]: E0130 13:47:36.192264 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:36.378527 kubelet[2499]: I0130 13:47:36.378182 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-t9h8r" podStartSLOduration=33.432335011 podStartE2EDuration="39.378163712s" podCreationTimestamp="2025-01-30 13:46:57 +0000 UTC" firstStartedPulling="2025-01-30 13:47:29.7151081 +0000 UTC m=+45.210029681" lastFinishedPulling="2025-01-30 13:47:35.660936801 +0000 UTC m=+51.155858382" observedRunningTime="2025-01-30 13:47:36.377783147 +0000 UTC m=+51.872704728" watchObservedRunningTime="2025-01-30 13:47:36.378163712 +0000 UTC m=+51.873085293" Jan 30 13:47:37.199436 kubelet[2499]: E0130 13:47:37.199400 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:38.598252 systemd[1]: Started sshd@13-10.0.0.69:22-10.0.0.1:57286.service - OpenSSH per-connection server daemon (10.0.0.1:57286). Jan 30 13:47:38.920297 sshd[5017]: Accepted publickey for core from 10.0.0.1 port 57286 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:38.922020 sshd[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:38.927330 systemd-logind[1449]: New session 14 of user core. Jan 30 13:47:38.935184 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 30 13:47:38.943096 containerd[1460]: time="2025-01-30T13:47:38.943041173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:38.943924 containerd[1460]: time="2025-01-30T13:47:38.943878274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 13:47:38.945051 containerd[1460]: time="2025-01-30T13:47:38.944982527Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:38.947375 containerd[1460]: time="2025-01-30T13:47:38.947334571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:38.947986 containerd[1460]: time="2025-01-30T13:47:38.947956528Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.286757666s" Jan 30 13:47:38.947986 containerd[1460]: time="2025-01-30T13:47:38.947982036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:47:38.950127 containerd[1460]: time="2025-01-30T13:47:38.950097025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:47:38.951159 containerd[1460]: time="2025-01-30T13:47:38.951129542Z" level=info msg="CreateContainer within sandbox \"d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:47:39.045055 containerd[1460]: time="2025-01-30T13:47:39.044977246Z" level=info msg="CreateContainer within sandbox \"d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d33235c5196a6727e04f104ec9db5e7028a8ca87dbc0ce3e6596f4b7890d37a2\"" Jan 30 13:47:39.045801 containerd[1460]: time="2025-01-30T13:47:39.045763752Z" level=info msg="StartContainer for \"d33235c5196a6727e04f104ec9db5e7028a8ca87dbc0ce3e6596f4b7890d37a2\"" Jan 30 13:47:39.061267 sshd[5017]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:39.075095 systemd[1]: sshd@13-10.0.0.69:22-10.0.0.1:57286.service: Deactivated successfully. Jan 30 13:47:39.076974 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:47:39.079911 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:47:39.086224 systemd[1]: Started cri-containerd-d33235c5196a6727e04f104ec9db5e7028a8ca87dbc0ce3e6596f4b7890d37a2.scope - libcontainer container d33235c5196a6727e04f104ec9db5e7028a8ca87dbc0ce3e6596f4b7890d37a2. Jan 30 13:47:39.087202 systemd-logind[1449]: Removed session 14. 
Jan 30 13:47:39.250889 containerd[1460]: time="2025-01-30T13:47:39.250833164Z" level=info msg="StartContainer for \"d33235c5196a6727e04f104ec9db5e7028a8ca87dbc0ce3e6596f4b7890d37a2\" returns successfully" Jan 30 13:47:39.322056 containerd[1460]: time="2025-01-30T13:47:39.321960926Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:39.322461 containerd[1460]: time="2025-01-30T13:47:39.322368861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 13:47:39.325099 containerd[1460]: time="2025-01-30T13:47:39.325042198Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 374.912511ms" Jan 30 13:47:39.325099 containerd[1460]: time="2025-01-30T13:47:39.325084037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:47:39.327323 containerd[1460]: time="2025-01-30T13:47:39.327289265Z" level=info msg="CreateContainer within sandbox \"fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:47:39.341841 containerd[1460]: time="2025-01-30T13:47:39.341745434Z" level=info msg="CreateContainer within sandbox \"fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f16790bfcd87d721fdcf170455b419f1ede6674085468de396b02bdfe05cd6e8\"" Jan 30 13:47:39.342518 containerd[1460]: time="2025-01-30T13:47:39.342493948Z" level=info msg="StartContainer for \"f16790bfcd87d721fdcf170455b419f1ede6674085468de396b02bdfe05cd6e8\"" Jan 30 13:47:39.375285 systemd[1]: Started cri-containerd-f16790bfcd87d721fdcf170455b419f1ede6674085468de396b02bdfe05cd6e8.scope - libcontainer container f16790bfcd87d721fdcf170455b419f1ede6674085468de396b02bdfe05cd6e8. 
Jan 30 13:47:39.417591 containerd[1460]: time="2025-01-30T13:47:39.417552467Z" level=info msg="StartContainer for \"f16790bfcd87d721fdcf170455b419f1ede6674085468de396b02bdfe05cd6e8\" returns successfully" Jan 30 13:47:40.467205 kubelet[2499]: I0130 13:47:40.467079 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6589f8c55-wzzjd" podStartSLOduration=36.585757702 podStartE2EDuration="43.467057727s" podCreationTimestamp="2025-01-30 13:46:57 +0000 UTC" firstStartedPulling="2025-01-30 13:47:32.068638293 +0000 UTC m=+47.563559874" lastFinishedPulling="2025-01-30 13:47:38.949938287 +0000 UTC m=+54.444859899" observedRunningTime="2025-01-30 13:47:40.352781982 +0000 UTC m=+55.847703563" watchObservedRunningTime="2025-01-30 13:47:40.467057727 +0000 UTC m=+55.961979308" Jan 30 13:47:40.756380 kubelet[2499]: I0130 13:47:40.756287 2499 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6589f8c55-lv4xg" podStartSLOduration=37.851411947 podStartE2EDuration="43.756271425s" podCreationTimestamp="2025-01-30 13:46:57 +0000 UTC" firstStartedPulling="2025-01-30 13:47:33.420974406 +0000 UTC m=+48.915895987" lastFinishedPulling="2025-01-30 13:47:39.325833884 +0000 UTC m=+54.820755465" observedRunningTime="2025-01-30 13:47:40.467385442 +0000 UTC m=+55.962307023" watchObservedRunningTime="2025-01-30 13:47:40.756271425 +0000 UTC m=+56.251193006" Jan 30 13:47:41.258500 kubelet[2499]: I0130 13:47:41.258455 2499 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:47:44.071969 systemd[1]: Started sshd@14-10.0.0.69:22-10.0.0.1:57302.service - OpenSSH per-connection server daemon (10.0.0.1:57302). Jan 30 13:47:44.114893 sshd[5122]: Accepted publickey for core from 10.0.0.1 port 57302 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:44.116438 sshd[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:44.120413 systemd-logind[1449]: New session 15 of user core. Jan 30 13:47:44.130181 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:47:44.253646 sshd[5122]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:44.266236 systemd[1]: sshd@14-10.0.0.69:22-10.0.0.1:57302.service: Deactivated successfully. Jan 30 13:47:44.268147 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:47:44.269847 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:47:44.277261 systemd[1]: Started sshd@15-10.0.0.69:22-10.0.0.1:57308.service - OpenSSH per-connection server daemon (10.0.0.1:57308). Jan 30 13:47:44.278413 systemd-logind[1449]: Removed session 15. Jan 30 13:47:44.312961 sshd[5137]: Accepted publickey for core from 10.0.0.1 port 57308 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:44.314940 sshd[5137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:44.319652 systemd-logind[1449]: New session 16 of user core. Jan 30 13:47:44.330116 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 30 13:47:44.593920 containerd[1460]: time="2025-01-30T13:47:44.593186581Z" level=info msg="StopPodSandbox for \"37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81\"" Jan 30 13:47:44.661677 sshd[5137]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:44.664233 containerd[1460]: 2025-01-30 13:47:44.631 [WARNING][5165] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dcb74df--bvbvk-eth0", GenerateName:"calico-kube-controllers-dcb74df-", Namespace:"calico-system", SelfLink:"", UID:"77b64aea-0119-4203-b1a4-d349995e60a1", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 46, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dcb74df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e", Pod:"calico-kube-controllers-dcb74df-bvbvk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8d98e84fc8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:47:44.664233 containerd[1460]: 2025-01-30 13:47:44.631 [INFO][5165] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" Jan 30 13:47:44.664233 containerd[1460]: 2025-01-30 13:47:44.631 [INFO][5165] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" iface="eth0" netns="" Jan 30 13:47:44.664233 containerd[1460]: 2025-01-30 13:47:44.631 [INFO][5165] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" Jan 30 13:47:44.664233 containerd[1460]: 2025-01-30 13:47:44.631 [INFO][5165] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" Jan 30 13:47:44.664233 containerd[1460]: 2025-01-30 13:47:44.650 [INFO][5172] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" HandleID="k8s-pod-network.37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" Workload="localhost-k8s-calico--kube--controllers--dcb74df--bvbvk-eth0" Jan 30 13:47:44.664233 containerd[1460]: 2025-01-30 13:47:44.650 [INFO][5172] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 13:47:44.664233 containerd[1460]: 2025-01-30 13:47:44.650 [INFO][5172] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:47:44.664233 containerd[1460]: 2025-01-30 13:47:44.656 [WARNING][5172] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" HandleID="k8s-pod-network.37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" Workload="localhost-k8s-calico--kube--controllers--dcb74df--bvbvk-eth0" Jan 30 13:47:44.664233 containerd[1460]: 2025-01-30 13:47:44.656 [INFO][5172] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" HandleID="k8s-pod-network.37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" Workload="localhost-k8s-calico--kube--controllers--dcb74df--bvbvk-eth0" Jan 30 13:47:44.664233 containerd[1460]: 2025-01-30 13:47:44.657 [INFO][5172] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:47:44.664233 containerd[1460]: 2025-01-30 13:47:44.660 [INFO][5165] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" Jan 30 13:47:44.664604 containerd[1460]: time="2025-01-30T13:47:44.664268732Z" level=info msg="TearDown network for sandbox \"37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81\" successfully" Jan 30 13:47:44.664604 containerd[1460]: time="2025-01-30T13:47:44.664304910Z" level=info msg="StopPodSandbox for \"37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81\" returns successfully" Jan 30 13:47:44.669497 systemd[1]: sshd@15-10.0.0.69:22-10.0.0.1:57308.service: Deactivated successfully. Jan 30 13:47:44.671313 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:47:44.672062 containerd[1460]: time="2025-01-30T13:47:44.672025646Z" level=info msg="RemovePodSandbox for \"37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81\"" Jan 30 13:47:44.672143 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:47:44.675250 containerd[1460]: time="2025-01-30T13:47:44.675225600Z" level=info msg="Forcibly stopping sandbox \"37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81\"" Jan 30 13:47:44.680377 systemd[1]: Started sshd@16-10.0.0.69:22-10.0.0.1:57320.service - OpenSSH per-connection server daemon (10.0.0.1:57320). Jan 30 13:47:44.681727 systemd-logind[1449]: Removed session 16. Jan 30 13:47:44.722766 sshd[5184]: Accepted publickey for core from 10.0.0.1 port 57320 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:44.724508 sshd[5184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:44.729332 systemd-logind[1449]: New session 17 of user core. Jan 30 13:47:44.734186 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 13:47:44.745980 containerd[1460]: 2025-01-30 13:47:44.713 [WARNING][5200] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dcb74df--bvbvk-eth0", GenerateName:"calico-kube-controllers-dcb74df-", Namespace:"calico-system", SelfLink:"", UID:"77b64aea-0119-4203-b1a4-d349995e60a1", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 46, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dcb74df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed976c13525a4d6f6cc4bacc198c6520fca0ba6d9df85d9ab55169967a342a5e", Pod:"calico-kube-controllers-dcb74df-bvbvk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8d98e84fc8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:47:44.745980 containerd[1460]: 2025-01-30 13:47:44.713 [INFO][5200] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" Jan 30 13:47:44.745980 containerd[1460]: 2025-01-30 13:47:44.713 [INFO][5200] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" iface="eth0" netns="" Jan 30 13:47:44.745980 containerd[1460]: 2025-01-30 13:47:44.713 [INFO][5200] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" Jan 30 13:47:44.745980 containerd[1460]: 2025-01-30 13:47:44.713 [INFO][5200] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" Jan 30 13:47:44.745980 containerd[1460]: 2025-01-30 13:47:44.734 [INFO][5208] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" HandleID="k8s-pod-network.37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" Workload="localhost-k8s-calico--kube--controllers--dcb74df--bvbvk-eth0" Jan 30 13:47:44.745980 containerd[1460]: 2025-01-30 13:47:44.734 [INFO][5208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:47:44.745980 containerd[1460]: 2025-01-30 13:47:44.734 [INFO][5208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:47:44.745980 containerd[1460]: 2025-01-30 13:47:44.739 [WARNING][5208] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" HandleID="k8s-pod-network.37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" Workload="localhost-k8s-calico--kube--controllers--dcb74df--bvbvk-eth0" Jan 30 13:47:44.745980 containerd[1460]: 2025-01-30 13:47:44.739 [INFO][5208] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" HandleID="k8s-pod-network.37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" Workload="localhost-k8s-calico--kube--controllers--dcb74df--bvbvk-eth0" Jan 30 13:47:44.745980 containerd[1460]: 2025-01-30 13:47:44.740 [INFO][5208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:47:44.745980 containerd[1460]: 2025-01-30 13:47:44.743 [INFO][5200] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81" Jan 30 13:47:44.746503 containerd[1460]: time="2025-01-30T13:47:44.746029799Z" level=info msg="TearDown network for sandbox \"37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81\" successfully" Jan 30 13:47:44.878355 containerd[1460]: time="2025-01-30T13:47:44.878231418Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:47:44.878355 containerd[1460]: time="2025-01-30T13:47:44.878315876Z" level=info msg="RemovePodSandbox \"37615a0699899f7eac7639d6da261bd2bd37b5331363db0e0880335909e67b81\" returns successfully" Jan 30 13:47:44.878953 containerd[1460]: time="2025-01-30T13:47:44.878906695Z" level=info msg="StopPodSandbox for \"c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4\"" Jan 30 13:47:44.948460 containerd[1460]: 2025-01-30 13:47:44.917 [WARNING][5238] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--x2bcp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7f34dd70-d328-492f-8c87-2756a28b76b5", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 46, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321", Pod:"coredns-668d6bf9bc-x2bcp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0cd220bb0ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:47:44.948460 containerd[1460]: 2025-01-30 13:47:44.917 [INFO][5238] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" Jan 30 13:47:44.948460 containerd[1460]: 2025-01-30 13:47:44.917 [INFO][5238] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" iface="eth0" netns="" Jan 30 13:47:44.948460 containerd[1460]: 2025-01-30 13:47:44.917 [INFO][5238] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" Jan 30 13:47:44.948460 containerd[1460]: 2025-01-30 13:47:44.917 [INFO][5238] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" Jan 30 13:47:44.948460 containerd[1460]: 2025-01-30 13:47:44.937 [INFO][5245] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" HandleID="k8s-pod-network.c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" Workload="localhost-k8s-coredns--668d6bf9bc--x2bcp-eth0" Jan 30 13:47:44.948460 containerd[1460]: 2025-01-30 13:47:44.937 [INFO][5245] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:47:44.948460 containerd[1460]: 2025-01-30 13:47:44.937 [INFO][5245] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:47:44.948460 containerd[1460]: 2025-01-30 13:47:44.942 [WARNING][5245] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" HandleID="k8s-pod-network.c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" Workload="localhost-k8s-coredns--668d6bf9bc--x2bcp-eth0" Jan 30 13:47:44.948460 containerd[1460]: 2025-01-30 13:47:44.942 [INFO][5245] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" HandleID="k8s-pod-network.c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" Workload="localhost-k8s-coredns--668d6bf9bc--x2bcp-eth0" Jan 30 13:47:44.948460 containerd[1460]: 2025-01-30 13:47:44.943 [INFO][5245] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:47:44.948460 containerd[1460]: 2025-01-30 13:47:44.945 [INFO][5238] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" Jan 30 13:47:44.948460 containerd[1460]: time="2025-01-30T13:47:44.948306438Z" level=info msg="TearDown network for sandbox \"c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4\" successfully" Jan 30 13:47:44.948460 containerd[1460]: time="2025-01-30T13:47:44.948334631Z" level=info msg="StopPodSandbox for \"c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4\" returns successfully" Jan 30 13:47:44.949397 containerd[1460]: time="2025-01-30T13:47:44.949370815Z" level=info msg="RemovePodSandbox for \"c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4\"" Jan 30 13:47:44.949435 containerd[1460]: time="2025-01-30T13:47:44.949405200Z" level=info msg="Forcibly stopping sandbox \"c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4\"" Jan 30 13:47:45.013804 containerd[1460]: 2025-01-30 13:47:44.983 [WARNING][5268] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--x2bcp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7f34dd70-d328-492f-8c87-2756a28b76b5", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 46, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"839338719556303f9da63c8a1d50e2a6da0a98710ea3345ba09db0804cf66321", Pod:"coredns-668d6bf9bc-x2bcp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0cd220bb0ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:47:45.013804 containerd[1460]: 2025-01-30 13:47:44.983 [INFO][5268] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" Jan 30 13:47:45.013804 containerd[1460]: 2025-01-30 13:47:44.983 [INFO][5268] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" iface="eth0" netns="" Jan 30 13:47:45.013804 containerd[1460]: 2025-01-30 13:47:44.983 [INFO][5268] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" Jan 30 13:47:45.013804 containerd[1460]: 2025-01-30 13:47:44.983 [INFO][5268] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" Jan 30 13:47:45.013804 containerd[1460]: 2025-01-30 13:47:45.003 [INFO][5275] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" HandleID="k8s-pod-network.c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" Workload="localhost-k8s-coredns--668d6bf9bc--x2bcp-eth0" Jan 30 13:47:45.013804 containerd[1460]: 2025-01-30 13:47:45.003 [INFO][5275] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:47:45.013804 containerd[1460]: 2025-01-30 13:47:45.003 [INFO][5275] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:47:45.013804 containerd[1460]: 2025-01-30 13:47:45.007 [WARNING][5275] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" HandleID="k8s-pod-network.c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" Workload="localhost-k8s-coredns--668d6bf9bc--x2bcp-eth0" Jan 30 13:47:45.013804 containerd[1460]: 2025-01-30 13:47:45.007 [INFO][5275] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" HandleID="k8s-pod-network.c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" Workload="localhost-k8s-coredns--668d6bf9bc--x2bcp-eth0" Jan 30 13:47:45.013804 containerd[1460]: 2025-01-30 13:47:45.008 [INFO][5275] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:47:45.013804 containerd[1460]: 2025-01-30 13:47:45.011 [INFO][5268] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4" Jan 30 13:47:45.014426 containerd[1460]: time="2025-01-30T13:47:45.014392604Z" level=info msg="TearDown network for sandbox \"c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4\" successfully" Jan 30 13:47:45.019082 containerd[1460]: time="2025-01-30T13:47:45.019050293Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:47:45.019140 containerd[1460]: time="2025-01-30T13:47:45.019098674Z" level=info msg="RemovePodSandbox \"c19560c4a1bc4b6b2b02fdff7c476b00850322d2987c333733629a4c01f8e1d4\" returns successfully" Jan 30 13:47:45.019528 containerd[1460]: time="2025-01-30T13:47:45.019508473Z" level=info msg="StopPodSandbox for \"e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8\"" Jan 30 13:47:45.085984 containerd[1460]: 2025-01-30 13:47:45.053 [WARNING][5297] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wtmtq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"84baafcc-c8f4-413e-80cf-ae1a5f5e4140", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 46, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f", Pod:"coredns-668d6bf9bc-wtmtq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie196eb693ea", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:47:45.085984 containerd[1460]: 2025-01-30 13:47:45.053 [INFO][5297] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" Jan 30 13:47:45.085984 containerd[1460]: 2025-01-30 13:47:45.053 [INFO][5297] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" iface="eth0" netns="" Jan 30 13:47:45.085984 containerd[1460]: 2025-01-30 13:47:45.053 [INFO][5297] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" Jan 30 13:47:45.085984 containerd[1460]: 2025-01-30 13:47:45.053 [INFO][5297] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" Jan 30 13:47:45.085984 containerd[1460]: 2025-01-30 13:47:45.074 [INFO][5304] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" HandleID="k8s-pod-network.e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" Workload="localhost-k8s-coredns--668d6bf9bc--wtmtq-eth0" Jan 30 13:47:45.085984 containerd[1460]: 2025-01-30 13:47:45.074 [INFO][5304] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:47:45.085984 containerd[1460]: 2025-01-30 13:47:45.074 [INFO][5304] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:47:45.085984 containerd[1460]: 2025-01-30 13:47:45.079 [WARNING][5304] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" HandleID="k8s-pod-network.e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" Workload="localhost-k8s-coredns--668d6bf9bc--wtmtq-eth0" Jan 30 13:47:45.085984 containerd[1460]: 2025-01-30 13:47:45.079 [INFO][5304] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" HandleID="k8s-pod-network.e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" Workload="localhost-k8s-coredns--668d6bf9bc--wtmtq-eth0" Jan 30 13:47:45.085984 containerd[1460]: 2025-01-30 13:47:45.080 [INFO][5304] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:47:45.085984 containerd[1460]: 2025-01-30 13:47:45.083 [INFO][5297] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" Jan 30 13:47:45.086467 containerd[1460]: time="2025-01-30T13:47:45.086042937Z" level=info msg="TearDown network for sandbox \"e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8\" successfully" Jan 30 13:47:45.086467 containerd[1460]: time="2025-01-30T13:47:45.086069838Z" level=info msg="StopPodSandbox for \"e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8\" returns successfully" Jan 30 13:47:45.086521 containerd[1460]: time="2025-01-30T13:47:45.086506276Z" level=info msg="RemovePodSandbox for \"e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8\"" Jan 30 13:47:45.086562 containerd[1460]: time="2025-01-30T13:47:45.086536312Z" level=info msg="Forcibly stopping sandbox \"e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8\"" Jan 30 13:47:45.181441 containerd[1460]: 2025-01-30 13:47:45.122 [WARNING][5327] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wtmtq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"84baafcc-c8f4-413e-80cf-ae1a5f5e4140", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 46, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd018668b47892f6601be738d8783f21eb86e07041ade95a33b7aa7da6c6cd1f", Pod:"coredns-668d6bf9bc-wtmtq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie196eb693ea", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:47:45.181441 containerd[1460]: 2025-01-30 13:47:45.122 [INFO][5327] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" Jan 30 13:47:45.181441 containerd[1460]: 2025-01-30 13:47:45.122 [INFO][5327] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" iface="eth0" netns="" Jan 30 13:47:45.181441 containerd[1460]: 2025-01-30 13:47:45.122 [INFO][5327] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" Jan 30 13:47:45.181441 containerd[1460]: 2025-01-30 13:47:45.122 [INFO][5327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" Jan 30 13:47:45.181441 containerd[1460]: 2025-01-30 13:47:45.142 [INFO][5334] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" HandleID="k8s-pod-network.e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" Workload="localhost-k8s-coredns--668d6bf9bc--wtmtq-eth0" Jan 30 13:47:45.181441 containerd[1460]: 2025-01-30 13:47:45.142 [INFO][5334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:47:45.181441 containerd[1460]: 2025-01-30 13:47:45.142 [INFO][5334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:47:45.181441 containerd[1460]: 2025-01-30 13:47:45.174 [WARNING][5334] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" HandleID="k8s-pod-network.e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" Workload="localhost-k8s-coredns--668d6bf9bc--wtmtq-eth0" Jan 30 13:47:45.181441 containerd[1460]: 2025-01-30 13:47:45.174 [INFO][5334] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" HandleID="k8s-pod-network.e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" Workload="localhost-k8s-coredns--668d6bf9bc--wtmtq-eth0" Jan 30 13:47:45.181441 containerd[1460]: 2025-01-30 13:47:45.176 [INFO][5334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:47:45.181441 containerd[1460]: 2025-01-30 13:47:45.178 [INFO][5327] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8" Jan 30 13:47:45.181441 containerd[1460]: time="2025-01-30T13:47:45.181419166Z" level=info msg="TearDown network for sandbox \"e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8\" successfully" Jan 30 13:47:45.223547 containerd[1460]: time="2025-01-30T13:47:45.223480822Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:47:45.223719 containerd[1460]: time="2025-01-30T13:47:45.223558859Z" level=info msg="RemovePodSandbox \"e0e29e67bbf5f9bf83a7fcae1bb3b2170e638ad0d394f9d759c0ee243195a2a8\" returns successfully" Jan 30 13:47:45.224078 containerd[1460]: time="2025-01-30T13:47:45.224059578Z" level=info msg="StopPodSandbox for \"d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456\"" Jan 30 13:47:45.289470 containerd[1460]: 2025-01-30 13:47:45.257 [WARNING][5356] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t9h8r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3feaebfa-27ef-455c-82db-977542f57659", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 46, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17", Pod:"csi-node-driver-t9h8r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali937ff0a1d68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:47:45.289470 containerd[1460]: 2025-01-30 13:47:45.257 [INFO][5356] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" Jan 30 13:47:45.289470 containerd[1460]: 2025-01-30 13:47:45.257 [INFO][5356] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" iface="eth0" netns="" Jan 30 13:47:45.289470 containerd[1460]: 2025-01-30 13:47:45.257 [INFO][5356] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" Jan 30 13:47:45.289470 containerd[1460]: 2025-01-30 13:47:45.257 [INFO][5356] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" Jan 30 13:47:45.289470 containerd[1460]: 2025-01-30 13:47:45.278 [INFO][5363] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" HandleID="k8s-pod-network.d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" Workload="localhost-k8s-csi--node--driver--t9h8r-eth0" Jan 30 13:47:45.289470 containerd[1460]: 2025-01-30 13:47:45.279 [INFO][5363] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:47:45.289470 containerd[1460]: 2025-01-30 13:47:45.279 [INFO][5363] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:47:45.289470 containerd[1460]: 2025-01-30 13:47:45.283 [WARNING][5363] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" HandleID="k8s-pod-network.d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" Workload="localhost-k8s-csi--node--driver--t9h8r-eth0" Jan 30 13:47:45.289470 containerd[1460]: 2025-01-30 13:47:45.283 [INFO][5363] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" HandleID="k8s-pod-network.d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" Workload="localhost-k8s-csi--node--driver--t9h8r-eth0" Jan 30 13:47:45.289470 containerd[1460]: 2025-01-30 13:47:45.285 [INFO][5363] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:47:45.289470 containerd[1460]: 2025-01-30 13:47:45.287 [INFO][5356] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" Jan 30 13:47:45.289869 containerd[1460]: time="2025-01-30T13:47:45.289507784Z" level=info msg="TearDown network for sandbox \"d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456\" successfully" Jan 30 13:47:45.289869 containerd[1460]: time="2025-01-30T13:47:45.289532691Z" level=info msg="StopPodSandbox for \"d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456\" returns successfully" Jan 30 13:47:45.290016 containerd[1460]: time="2025-01-30T13:47:45.289976324Z" level=info msg="RemovePodSandbox for \"d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456\"" Jan 30 13:47:45.290091 containerd[1460]: time="2025-01-30T13:47:45.290018864Z" level=info msg="Forcibly stopping sandbox \"d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456\"" Jan 30 13:47:45.431628 containerd[1460]: 2025-01-30 13:47:45.322 [WARNING][5385] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t9h8r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3feaebfa-27ef-455c-82db-977542f57659", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 46, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d964fd9cdf34d051e8408018d4c6c31e61c37a65d209e38eea8271e9af7b9d17", Pod:"csi-node-driver-t9h8r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali937ff0a1d68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:47:45.431628 containerd[1460]: 2025-01-30 13:47:45.322 [INFO][5385] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" Jan 30 13:47:45.431628 containerd[1460]: 2025-01-30 13:47:45.322 [INFO][5385] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" iface="eth0" netns="" Jan 30 13:47:45.431628 containerd[1460]: 2025-01-30 13:47:45.322 [INFO][5385] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" Jan 30 13:47:45.431628 containerd[1460]: 2025-01-30 13:47:45.322 [INFO][5385] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" Jan 30 13:47:45.431628 containerd[1460]: 2025-01-30 13:47:45.346 [INFO][5392] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" HandleID="k8s-pod-network.d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" Workload="localhost-k8s-csi--node--driver--t9h8r-eth0" Jan 30 13:47:45.431628 containerd[1460]: 2025-01-30 13:47:45.346 [INFO][5392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:47:45.431628 containerd[1460]: 2025-01-30 13:47:45.346 [INFO][5392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:47:45.431628 containerd[1460]: 2025-01-30 13:47:45.425 [WARNING][5392] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" HandleID="k8s-pod-network.d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" Workload="localhost-k8s-csi--node--driver--t9h8r-eth0" Jan 30 13:47:45.431628 containerd[1460]: 2025-01-30 13:47:45.425 [INFO][5392] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" HandleID="k8s-pod-network.d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" Workload="localhost-k8s-csi--node--driver--t9h8r-eth0" Jan 30 13:47:45.431628 containerd[1460]: 2025-01-30 13:47:45.426 [INFO][5392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:47:45.431628 containerd[1460]: 2025-01-30 13:47:45.429 [INFO][5385] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456" Jan 30 13:47:45.431628 containerd[1460]: time="2025-01-30T13:47:45.431585275Z" level=info msg="TearDown network for sandbox \"d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456\" successfully" Jan 30 13:47:45.593985 containerd[1460]: time="2025-01-30T13:47:45.593926344Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:47:45.593985 containerd[1460]: time="2025-01-30T13:47:45.594013087Z" level=info msg="RemovePodSandbox \"d9f45a21ab96acf3922df87a04b37a21c733c0d64a8fa4672945ad8224241456\" returns successfully" Jan 30 13:47:45.594509 containerd[1460]: time="2025-01-30T13:47:45.594482317Z" level=info msg="StopPodSandbox for \"fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a\"" Jan 30 13:47:45.670212 containerd[1460]: 2025-01-30 13:47:45.628 [WARNING][5423] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6589f8c55--lv4xg-eth0", GenerateName:"calico-apiserver-6589f8c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"313a082c-52c3-48c2-8128-4216401a9378", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 46, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6589f8c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc", Pod:"calico-apiserver-6589f8c55-lv4xg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a7a84138b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:47:45.670212 containerd[1460]: 2025-01-30 13:47:45.628 [INFO][5423] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" Jan 30 13:47:45.670212 containerd[1460]: 2025-01-30 13:47:45.628 [INFO][5423] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" iface="eth0" netns="" Jan 30 13:47:45.670212 containerd[1460]: 2025-01-30 13:47:45.628 [INFO][5423] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" Jan 30 13:47:45.670212 containerd[1460]: 2025-01-30 13:47:45.628 [INFO][5423] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" Jan 30 13:47:45.670212 containerd[1460]: 2025-01-30 13:47:45.654 [INFO][5431] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" HandleID="k8s-pod-network.fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" Workload="localhost-k8s-calico--apiserver--6589f8c55--lv4xg-eth0" Jan 30 13:47:45.670212 containerd[1460]: 2025-01-30 13:47:45.654 [INFO][5431] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:47:45.670212 containerd[1460]: 2025-01-30 13:47:45.655 [INFO][5431] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:47:45.670212 containerd[1460]: 2025-01-30 13:47:45.662 [WARNING][5431] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" HandleID="k8s-pod-network.fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" Workload="localhost-k8s-calico--apiserver--6589f8c55--lv4xg-eth0" Jan 30 13:47:45.670212 containerd[1460]: 2025-01-30 13:47:45.663 [INFO][5431] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" HandleID="k8s-pod-network.fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" Workload="localhost-k8s-calico--apiserver--6589f8c55--lv4xg-eth0" Jan 30 13:47:45.670212 containerd[1460]: 2025-01-30 13:47:45.665 [INFO][5431] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:47:45.670212 containerd[1460]: 2025-01-30 13:47:45.667 [INFO][5423] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" Jan 30 13:47:45.671136 containerd[1460]: time="2025-01-30T13:47:45.670249223Z" level=info msg="TearDown network for sandbox \"fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a\" successfully" Jan 30 13:47:45.671136 containerd[1460]: time="2025-01-30T13:47:45.670284679Z" level=info msg="StopPodSandbox for \"fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a\" returns successfully" Jan 30 13:47:45.671136 containerd[1460]: time="2025-01-30T13:47:45.670811177Z" level=info msg="RemovePodSandbox for \"fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a\"" Jan 30 13:47:45.671136 containerd[1460]: time="2025-01-30T13:47:45.670841053Z" level=info msg="Forcibly stopping sandbox \"fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a\"" Jan 30 13:47:45.810838 containerd[1460]: 2025-01-30 13:47:45.772 [WARNING][5454] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6589f8c55--lv4xg-eth0", GenerateName:"calico-apiserver-6589f8c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"313a082c-52c3-48c2-8128-4216401a9378", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 46, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6589f8c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fe637e8ec278f8389382c3e2b172a7ae8c35d307652dda95fa3d97dea9e841dc", Pod:"calico-apiserver-6589f8c55-lv4xg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a7a84138b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:47:45.810838 containerd[1460]: 2025-01-30 13:47:45.772 [INFO][5454] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" Jan 30 13:47:45.810838 containerd[1460]: 2025-01-30 13:47:45.772 [INFO][5454] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" iface="eth0" netns="" Jan 30 13:47:45.810838 containerd[1460]: 2025-01-30 13:47:45.772 [INFO][5454] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" Jan 30 13:47:45.810838 containerd[1460]: 2025-01-30 13:47:45.772 [INFO][5454] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" Jan 30 13:47:45.810838 containerd[1460]: 2025-01-30 13:47:45.795 [INFO][5462] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" HandleID="k8s-pod-network.fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" Workload="localhost-k8s-calico--apiserver--6589f8c55--lv4xg-eth0" Jan 30 13:47:45.810838 containerd[1460]: 2025-01-30 13:47:45.795 [INFO][5462] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:47:45.810838 containerd[1460]: 2025-01-30 13:47:45.795 [INFO][5462] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:47:45.810838 containerd[1460]: 2025-01-30 13:47:45.801 [WARNING][5462] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" HandleID="k8s-pod-network.fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" Workload="localhost-k8s-calico--apiserver--6589f8c55--lv4xg-eth0" Jan 30 13:47:45.810838 containerd[1460]: 2025-01-30 13:47:45.801 [INFO][5462] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" HandleID="k8s-pod-network.fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" Workload="localhost-k8s-calico--apiserver--6589f8c55--lv4xg-eth0" Jan 30 13:47:45.810838 containerd[1460]: 2025-01-30 13:47:45.803 [INFO][5462] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:47:45.810838 containerd[1460]: 2025-01-30 13:47:45.807 [INFO][5454] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a" Jan 30 13:47:45.812311 containerd[1460]: time="2025-01-30T13:47:45.811533906Z" level=info msg="TearDown network for sandbox \"fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a\" successfully" Jan 30 13:47:45.825947 containerd[1460]: time="2025-01-30T13:47:45.825887487Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:47:45.826115 containerd[1460]: time="2025-01-30T13:47:45.825969340Z" level=info msg="RemovePodSandbox \"fd2dc5baa492da0c72afeb37fc0c100d5005bf04d72e72a317f9f4aab991d24a\" returns successfully" Jan 30 13:47:45.826963 containerd[1460]: time="2025-01-30T13:47:45.826534059Z" level=info msg="StopPodSandbox for \"80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac\"" Jan 30 13:47:45.837232 sshd[5184]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:45.853387 systemd[1]: sshd@16-10.0.0.69:22-10.0.0.1:57320.service: Deactivated successfully. Jan 30 13:47:45.861403 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 13:47:45.867154 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit. Jan 30 13:47:45.884131 systemd[1]: Started sshd@17-10.0.0.69:22-10.0.0.1:57326.service - OpenSSH per-connection server daemon (10.0.0.1:57326). Jan 30 13:47:45.888160 systemd-logind[1449]: Removed session 17. Jan 30 13:47:45.923767 sshd[5494]: Accepted publickey for core from 10.0.0.1 port 57326 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:45.925490 sshd[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:45.930743 systemd-logind[1449]: New session 18 of user core. Jan 30 13:47:45.938266 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 13:47:45.943936 containerd[1460]: 2025-01-30 13:47:45.914 [WARNING][5487] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6589f8c55--wzzjd-eth0", GenerateName:"calico-apiserver-6589f8c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"b735763f-2a7c-4c9a-9b44-d3680f2a86f5", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 46, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6589f8c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176", Pod:"calico-apiserver-6589f8c55-wzzjd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia88ea086f72", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:47:45.943936 containerd[1460]: 2025-01-30 13:47:45.914 [INFO][5487] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" Jan 30 13:47:45.943936 containerd[1460]: 2025-01-30 13:47:45.914 [INFO][5487] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" iface="eth0" netns="" Jan 30 13:47:45.943936 containerd[1460]: 2025-01-30 13:47:45.914 [INFO][5487] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" Jan 30 13:47:45.943936 containerd[1460]: 2025-01-30 13:47:45.914 [INFO][5487] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" Jan 30 13:47:45.943936 containerd[1460]: 2025-01-30 13:47:45.932 [INFO][5499] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" HandleID="k8s-pod-network.80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" Workload="localhost-k8s-calico--apiserver--6589f8c55--wzzjd-eth0" Jan 30 13:47:45.943936 containerd[1460]: 2025-01-30 13:47:45.933 [INFO][5499] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:47:45.943936 containerd[1460]: 2025-01-30 13:47:45.933 [INFO][5499] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:47:45.943936 containerd[1460]: 2025-01-30 13:47:45.938 [WARNING][5499] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" HandleID="k8s-pod-network.80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" Workload="localhost-k8s-calico--apiserver--6589f8c55--wzzjd-eth0" Jan 30 13:47:45.943936 containerd[1460]: 2025-01-30 13:47:45.938 [INFO][5499] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" HandleID="k8s-pod-network.80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" Workload="localhost-k8s-calico--apiserver--6589f8c55--wzzjd-eth0" Jan 30 13:47:45.943936 containerd[1460]: 2025-01-30 13:47:45.939 [INFO][5499] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:47:45.943936 containerd[1460]: 2025-01-30 13:47:45.941 [INFO][5487] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" Jan 30 13:47:45.944445 containerd[1460]: time="2025-01-30T13:47:45.943965518Z" level=info msg="TearDown network for sandbox \"80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac\" successfully" Jan 30 13:47:45.944445 containerd[1460]: time="2025-01-30T13:47:45.944003860Z" level=info msg="StopPodSandbox for \"80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac\" returns successfully" Jan 30 13:47:45.944500 containerd[1460]: time="2025-01-30T13:47:45.944439488Z" level=info msg="RemovePodSandbox for \"80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac\"" Jan 30 13:47:45.944500 containerd[1460]: time="2025-01-30T13:47:45.944463463Z" level=info msg="Forcibly stopping sandbox \"80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac\"" Jan 30 13:47:46.010475 containerd[1460]: 2025-01-30 13:47:45.977 [WARNING][5525] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6589f8c55--wzzjd-eth0", GenerateName:"calico-apiserver-6589f8c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"b735763f-2a7c-4c9a-9b44-d3680f2a86f5", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 46, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6589f8c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d49b51f7cf134769cef526d6b4fbd63a68328ff3b1489a43f338ce35ac21f176", Pod:"calico-apiserver-6589f8c55-wzzjd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia88ea086f72", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:47:46.010475 containerd[1460]: 2025-01-30 13:47:45.977 [INFO][5525] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" Jan 30 13:47:46.010475 containerd[1460]: 2025-01-30 13:47:45.977 [INFO][5525] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" iface="eth0" netns="" Jan 30 13:47:46.010475 containerd[1460]: 2025-01-30 13:47:45.977 [INFO][5525] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" Jan 30 13:47:46.010475 containerd[1460]: 2025-01-30 13:47:45.977 [INFO][5525] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" Jan 30 13:47:46.010475 containerd[1460]: 2025-01-30 13:47:45.998 [INFO][5532] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" HandleID="k8s-pod-network.80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" Workload="localhost-k8s-calico--apiserver--6589f8c55--wzzjd-eth0" Jan 30 13:47:46.010475 containerd[1460]: 2025-01-30 13:47:45.999 [INFO][5532] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:47:46.010475 containerd[1460]: 2025-01-30 13:47:45.999 [INFO][5532] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:47:46.010475 containerd[1460]: 2025-01-30 13:47:46.004 [WARNING][5532] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" HandleID="k8s-pod-network.80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" Workload="localhost-k8s-calico--apiserver--6589f8c55--wzzjd-eth0" Jan 30 13:47:46.010475 containerd[1460]: 2025-01-30 13:47:46.004 [INFO][5532] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" HandleID="k8s-pod-network.80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" Workload="localhost-k8s-calico--apiserver--6589f8c55--wzzjd-eth0" Jan 30 13:47:46.010475 containerd[1460]: 2025-01-30 13:47:46.005 [INFO][5532] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:47:46.010475 containerd[1460]: 2025-01-30 13:47:46.008 [INFO][5525] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac" Jan 30 13:47:46.010861 containerd[1460]: time="2025-01-30T13:47:46.010502617Z" level=info msg="TearDown network for sandbox \"80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac\" successfully" Jan 30 13:47:46.014802 containerd[1460]: time="2025-01-30T13:47:46.014760865Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:47:46.014876 containerd[1460]: time="2025-01-30T13:47:46.014810929Z" level=info msg="RemovePodSandbox \"80069364465dda1d4a75c8caeafe6a7c1d096e755e7255c56400dd8a6735e4ac\" returns successfully" Jan 30 13:47:46.318025 sshd[5494]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:46.328135 systemd[1]: sshd@17-10.0.0.69:22-10.0.0.1:57326.service: Deactivated successfully. Jan 30 13:47:46.330080 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 13:47:46.331475 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:47:46.341310 systemd[1]: Started sshd@18-10.0.0.69:22-10.0.0.1:57332.service - OpenSSH per-connection server daemon (10.0.0.1:57332). Jan 30 13:47:46.342352 systemd-logind[1449]: Removed session 18. Jan 30 13:47:46.376457 sshd[5550]: Accepted publickey for core from 10.0.0.1 port 57332 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:46.378338 sshd[5550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:46.383124 systemd-logind[1449]: New session 19 of user core. Jan 30 13:47:46.390231 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 13:47:46.497531 sshd[5550]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:46.501502 systemd[1]: sshd@18-10.0.0.69:22-10.0.0.1:57332.service: Deactivated successfully. Jan 30 13:47:46.504337 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 13:47:46.505052 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:47:46.505930 systemd-logind[1449]: Removed session 19. Jan 30 13:47:51.514196 systemd[1]: Started sshd@19-10.0.0.69:22-10.0.0.1:41112.service - OpenSSH per-connection server daemon (10.0.0.1:41112). 
Jan 30 13:47:51.552370 sshd[5589]: Accepted publickey for core from 10.0.0.1 port 41112 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:51.554410 sshd[5589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:51.558537 systemd-logind[1449]: New session 20 of user core. Jan 30 13:47:51.566135 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 13:47:51.674616 sshd[5589]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:51.678077 systemd[1]: sshd@19-10.0.0.69:22-10.0.0.1:41112.service: Deactivated successfully. Jan 30 13:47:51.680009 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 13:47:51.680617 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit. Jan 30 13:47:51.681357 systemd-logind[1449]: Removed session 20. Jan 30 13:47:52.594351 kubelet[2499]: E0130 13:47:52.594317 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:56.689630 systemd[1]: Started sshd@20-10.0.0.69:22-10.0.0.1:41122.service - OpenSSH per-connection server daemon (10.0.0.1:41122). Jan 30 13:47:56.726406 sshd[5609]: Accepted publickey for core from 10.0.0.1 port 41122 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:56.727974 sshd[5609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:56.731676 systemd-logind[1449]: New session 21 of user core. Jan 30 13:47:56.738114 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 13:47:56.843405 sshd[5609]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:56.846818 systemd[1]: sshd@20-10.0.0.69:22-10.0.0.1:41122.service: Deactivated successfully. Jan 30 13:47:56.848793 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 13:47:56.849342 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit. Jan 30 13:47:56.850264 systemd-logind[1449]: Removed session 21. Jan 30 13:47:57.172979 kubelet[2499]: E0130 13:47:57.172952 2499 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:01.855808 systemd[1]: Started sshd@21-10.0.0.69:22-10.0.0.1:48414.service - OpenSSH per-connection server daemon (10.0.0.1:48414). Jan 30 13:48:01.898307 sshd[5647]: Accepted publickey for core from 10.0.0.1 port 48414 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:48:01.900087 sshd[5647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:01.904154 systemd-logind[1449]: New session 22 of user core. Jan 30 13:48:01.911172 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 13:48:02.037357 sshd[5647]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:02.041578 systemd[1]: sshd@21-10.0.0.69:22-10.0.0.1:48414.service: Deactivated successfully. Jan 30 13:48:02.043567 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 13:48:02.044268 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit. Jan 30 13:48:02.045184 systemd-logind[1449]: Removed session 22. Jan 30 13:48:07.048698 systemd[1]: Started sshd@22-10.0.0.69:22-10.0.0.1:48430.service - OpenSSH per-connection server daemon (10.0.0.1:48430). 
Jan 30 13:48:07.086128 sshd[5682]: Accepted publickey for core from 10.0.0.1 port 48430 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:48:07.087612 sshd[5682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:07.091300 systemd-logind[1449]: New session 23 of user core. Jan 30 13:48:07.104119 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 13:48:07.211823 sshd[5682]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:07.215631 systemd[1]: sshd@22-10.0.0.69:22-10.0.0.1:48430.service: Deactivated successfully. Jan 30 13:48:07.217556 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 13:48:07.218238 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit. Jan 30 13:48:07.219155 systemd-logind[1449]: Removed session 23.