Jan 29 11:51:45.927551 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 29 11:51:45.927594 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 11:51:45.927610 kernel: BIOS-provided physical RAM map: Jan 29 11:51:45.927619 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 29 11:51:45.927627 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 29 11:51:45.927636 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 29 11:51:45.927645 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 29 11:51:45.927654 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 29 11:51:45.927662 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 29 11:51:45.927674 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 29 11:51:45.927682 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 29 11:51:45.927690 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 29 11:51:45.927703 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 29 11:51:45.927712 kernel: NX (Execute Disable) protection: active Jan 29 11:51:45.927723 kernel: APIC: Static calls initialized Jan 29 11:51:45.927738 kernel: SMBIOS 2.8 present. 
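
The BIOS-e820 map above is the firmware's view of guest RAM. As a cross-check, here is a minimal Python sketch (the regex and the two sample lines are taken from the log; nothing below is kernel code) that totals the "usable" ranges:

```python
import re

# The two "usable" ranges copied verbatim from the e820 map above.
E820_LINES = [
    "BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable",
    "BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable",
]

RANGE_RE = re.compile(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

usable = 0
for line in E820_LINES:
    start, end, kind = RANGE_RE.search(line).groups()
    if kind == "usable":
        usable += int(end, 16) - int(start, 16) + 1  # ranges are inclusive

print(f"usable RAM: {usable / 2**20:.1f} MiB")  # 2511.5 MiB
```

That agrees with the "Memory: 2434592K/2571752K available" line later in the log: 2571752K is the same ~2511.5 MiB, minus a few KiB the kernel reserves (e.g. the first page, per the "e820: update" line above).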
Jan 29 11:51:45.927748 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 29 11:51:45.927757 kernel: Hypervisor detected: KVM Jan 29 11:51:45.927766 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 11:51:45.927775 kernel: kvm-clock: using sched offset of 2497838087 cycles Jan 29 11:51:45.927785 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 11:51:45.927794 kernel: tsc: Detected 2794.750 MHz processor Jan 29 11:51:45.927804 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 11:51:45.927814 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 11:51:45.927823 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 29 11:51:45.927836 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 29 11:51:45.927845 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 11:51:45.927854 kernel: Using GB pages for direct mapping Jan 29 11:51:45.927863 kernel: ACPI: Early table checksum verification disabled Jan 29 11:51:45.927872 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 29 11:51:45.927882 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:51:45.927891 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:51:45.927900 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:51:45.927927 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 29 11:51:45.927937 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:51:45.927946 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:51:45.927955 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:51:45.927965 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:51:45.927975 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Jan 29 11:51:45.927984 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Jan 29 11:51:45.927999 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 29 11:51:45.928011 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Jan 29 11:51:45.928020 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Jan 29 11:51:45.928030 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Jan 29 11:51:45.928039 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Jan 29 11:51:45.928053 kernel: No NUMA configuration found Jan 29 11:51:45.928063 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 29 11:51:45.928073 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 29 11:51:45.928086 kernel: Zone ranges: Jan 29 11:51:45.928096 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 11:51:45.928105 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 29 11:51:45.928114 kernel: Normal empty Jan 29 11:51:45.928124 kernel: Movable zone start for each node Jan 29 11:51:45.928133 kernel: Early memory node ranges Jan 29 11:51:45.928142 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 29 11:51:45.928151 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 29 11:51:45.928160 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Jan 29 11:51:45.928172 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 11:51:45.928184 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 29 11:51:45.928194 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 29 11:51:45.928203 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 29 11:51:45.928213 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 11:51:45.928223 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 29 11:51:45.928244 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 29 11:51:45.928263 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 11:51:45.928282 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 11:51:45.928297 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 11:51:45.928306 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 11:51:45.928315 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 11:51:45.928325 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 29 11:51:45.928334 kernel: TSC deadline timer available Jan 29 11:51:45.928348 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 29 11:51:45.928359 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 29 11:51:45.928368 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 29 11:51:45.928381 kernel: kvm-guest: setup PV sched yield Jan 29 11:51:45.928395 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 29 11:51:45.928404 kernel: Booting paravirtualized kernel on KVM Jan 29 11:51:45.928414 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 11:51:45.928424 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 29 11:51:45.928433 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 29 11:51:45.928442 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 29 11:51:45.928452 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 29 11:51:45.928461 kernel: kvm-guest: PV spinlocks enabled Jan 29 11:51:45.928470 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 29 11:51:45.928486 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 11:51:45.928496 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 11:51:45.928505 kernel: random: crng init done Jan 29 11:51:45.928514 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 11:51:45.928524 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 11:51:45.928534 kernel: Fallback order for Node 0: 0 Jan 29 11:51:45.928543 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jan 29 11:51:45.928553 kernel: Policy zone: DMA32 Jan 29 11:51:45.928581 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 11:51:45.928592 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved) Jan 29 11:51:45.928602 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 29 11:51:45.928612 kernel: ftrace: allocating 37921 entries in 149 pages Jan 29 11:51:45.928621 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 11:51:45.928631 kernel: Dynamic Preempt: voluntary Jan 29 11:51:45.928640 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 11:51:45.928651 kernel: rcu: RCU event tracing is enabled. Jan 29 11:51:45.928661 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 29 11:51:45.928675 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 11:51:45.928685 kernel: Rude variant of Tasks RCU enabled. Jan 29 11:51:45.928695 kernel: Tracing variant of Tasks RCU enabled. Jan 29 11:51:45.928705 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 11:51:45.928717 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 29 11:51:45.928727 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 29 11:51:45.928737 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 11:51:45.928746 kernel: Console: colour VGA+ 80x25 Jan 29 11:51:45.928756 kernel: printk: console [ttyS0] enabled Jan 29 11:51:45.928765 kernel: ACPI: Core revision 20230628 Jan 29 11:51:45.928778 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 29 11:51:45.928788 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 11:51:45.928797 kernel: x2apic enabled Jan 29 11:51:45.928807 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 11:51:45.928816 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 29 11:51:45.928826 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 29 11:51:45.928836 kernel: kvm-guest: setup PV IPIs Jan 29 11:51:45.928858 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 29 11:51:45.928868 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 29 11:51:45.928878 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Jan 29 11:51:45.928888 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 29 11:51:45.928901 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 29 11:51:45.928911 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 29 11:51:45.928929 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 11:51:45.928939 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 11:51:45.928950 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 11:51:45.928963 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 11:51:45.928973 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 29 11:51:45.928986 kernel: RETBleed: Mitigation: untrained return thunk Jan 29 11:51:45.928996 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 29 11:51:45.929007 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 29 11:51:45.929018 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 29 11:51:45.929031 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 29 11:51:45.929041 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 29 11:51:45.929053 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 11:51:45.929063 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 11:51:45.929073 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 11:51:45.929083 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 11:51:45.929093 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 29 11:51:45.929104 kernel: Freeing SMP alternatives memory: 32K Jan 29 11:51:45.929113 kernel: pid_max: default: 32768 minimum: 301 Jan 29 11:51:45.929124 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 11:51:45.929134 kernel: landlock: Up and running. Jan 29 11:51:45.929147 kernel: SELinux: Initializing. Jan 29 11:51:45.929157 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 11:51:45.929166 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 11:51:45.929176 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 29 11:51:45.929186 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 11:51:45.929196 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 11:51:45.929205 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 11:51:45.929216 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 29 11:51:45.929229 kernel: ... version: 0 Jan 29 11:51:45.929242 kernel: ... bit width: 48 Jan 29 11:51:45.929252 kernel: ... generic registers: 6 Jan 29 11:51:45.929261 kernel: ... value mask: 0000ffffffffffff Jan 29 11:51:45.929271 kernel: ... max period: 00007fffffffffff Jan 29 11:51:45.929282 kernel: ... fixed-purpose events: 0 Jan 29 11:51:45.929292 kernel: ... 
event mask: 000000000000003f Jan 29 11:51:45.929302 kernel: signal: max sigframe size: 1776 Jan 29 11:51:45.929312 kernel: rcu: Hierarchical SRCU implementation. Jan 29 11:51:45.929323 kernel: rcu: Max phase no-delay instances is 400. Jan 29 11:51:45.929336 kernel: smp: Bringing up secondary CPUs ... Jan 29 11:51:45.929347 kernel: smpboot: x86: Booting SMP configuration: Jan 29 11:51:45.929357 kernel: .... node #0, CPUs: #1 #2 #3 Jan 29 11:51:45.929367 kernel: smp: Brought up 1 node, 4 CPUs Jan 29 11:51:45.929377 kernel: smpboot: Max logical packages: 1 Jan 29 11:51:45.929388 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jan 29 11:51:45.929398 kernel: devtmpfs: initialized Jan 29 11:51:45.929408 kernel: x86/mm: Memory block size: 128MB Jan 29 11:51:45.929419 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 11:51:45.929429 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 29 11:51:45.929442 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 11:51:45.929452 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 11:51:45.929463 kernel: audit: initializing netlink subsys (disabled) Jan 29 11:51:45.929473 kernel: audit: type=2000 audit(1738151505.432:1): state=initialized audit_enabled=0 res=1 Jan 29 11:51:45.929484 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 11:51:45.929494 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 11:51:45.929504 kernel: cpuidle: using governor menu Jan 29 11:51:45.929515 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 11:51:45.929527 kernel: dca service started, version 1.12.1 Jan 29 11:51:45.929538 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 29 11:51:45.929548 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 29 11:51:45.929558 kernel: PCI: Using configuration type 1 for base access Jan 29 11:51:45.929588 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
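
The BogoMIPS figures are internally consistent: the kernel derives them from lpj ("loops per jiffy") as lpj / (500000 / HZ). A small arithmetic check using the logged values (HZ=1000 is an assumption, but it is the value that reproduces the numbers):

```python
lpj = 2_794_750  # from "Calibrating delay loop (skipped) preset value.. (lpj=2794750)"
HZ = 1000        # assumed kernel timer frequency

per_cpu = lpj / (500000 / HZ)
print(per_cpu)      # 5589.5   -> "5589.50 BogoMIPS"
print(4 * per_cpu)  # 22358.0  -> "Total of 4 processors activated (22358.00 BogoMIPS)"
```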
Jan 29 11:51:45.929598 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 11:51:45.929608 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 11:51:45.929618 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 11:51:45.929627 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 11:51:45.929642 kernel: ACPI: Added _OSI(Module Device) Jan 29 11:51:45.929652 kernel: ACPI: Added _OSI(Processor Device) Jan 29 11:51:45.929662 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 11:51:45.929672 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 11:51:45.929682 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 11:51:45.929692 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 11:51:45.929702 kernel: ACPI: Interpreter enabled Jan 29 11:51:45.929712 kernel: ACPI: PM: (supports S0 S3 S5) Jan 29 11:51:45.929722 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 11:51:45.929732 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 11:51:45.929746 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 11:51:45.929756 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 29 11:51:45.929766 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 11:51:45.930017 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 11:51:45.930153 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 29 11:51:45.930291 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 29 11:51:45.930304 kernel: PCI host bridge to bus 0000:00 Jan 29 11:51:45.930494 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 11:51:45.930677 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 11:51:45.930824 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 11:51:45.930986 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 29 11:51:45.931139 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 29 11:51:45.931272 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 29 11:51:45.931429 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 11:51:45.931645 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 29 11:51:45.931830 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 29 11:51:45.931989 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 29 11:51:45.932143 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 29 11:51:45.932295 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 29 11:51:45.932464 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 11:51:45.932687 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 29 11:51:45.932864 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 29 11:51:45.933059 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 29 11:51:45.933238 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 29 11:51:45.933422 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 29 11:51:45.933631 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 29 11:51:45.933780 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 29 
11:51:45.933965 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 29 11:51:45.934155 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 29 11:51:45.934326 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 29 11:51:45.934496 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 29 11:51:45.934680 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 29 11:51:45.934844 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 29 11:51:45.935056 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 29 11:51:45.935241 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 29 11:51:45.935431 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 29 11:51:45.935677 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 29 11:51:45.935855 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 29 11:51:45.936056 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 29 11:51:45.936217 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 29 11:51:45.936240 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 11:51:45.936252 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 11:51:45.936262 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 11:51:45.936273 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 11:51:45.936284 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 29 11:51:45.936295 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 29 11:51:45.936305 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 29 11:51:45.936316 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 29 11:51:45.936327 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 29 11:51:45.936342 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 29 11:51:45.936353 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 29 11:51:45.936364 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 29 11:51:45.936374 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 29 11:51:45.936386 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 29 11:51:45.936396 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 29 11:51:45.936407 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 29 11:51:45.936418 kernel: iommu: Default domain type: Translated Jan 29 11:51:45.936429 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 11:51:45.936444 kernel: PCI: Using ACPI for IRQ routing Jan 29 11:51:45.936455 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 11:51:45.936467 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 29 11:51:45.936478 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 29 11:51:45.936674 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 29 11:51:45.936846 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 29 11:51:45.937026 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 11:51:45.937043 kernel: vgaarb: loaded Jan 29 11:51:45.937060 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 29 11:51:45.937071 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 29 11:51:45.937081 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 11:51:45.937092 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 
11:51:45.937103 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 11:51:45.937114 kernel: pnp: PnP ACPI init Jan 29 11:51:45.937307 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 29 11:51:45.937325 kernel: pnp: PnP ACPI: found 6 devices Jan 29 11:51:45.937336 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 11:51:45.937351 kernel: NET: Registered PF_INET protocol family Jan 29 11:51:45.937362 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 11:51:45.937372 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 11:51:45.937383 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 11:51:45.937393 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 11:51:45.937404 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 29 11:51:45.937415 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 11:51:45.937426 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 11:51:45.937441 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 11:51:45.937451 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 11:51:45.937461 kernel: NET: Registered PF_XDP protocol family Jan 29 11:51:45.937688 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 11:51:45.937838 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 11:51:45.937996 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 11:51:45.938148 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 29 11:51:45.938298 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 29 11:51:45.938443 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 29 11:51:45.938464 kernel: PCI: CLS 0 bytes, default 64 Jan 29 11:51:45.938475 kernel: Initialise system trusted keyrings Jan 29 11:51:45.938486 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 11:51:45.938496 kernel: Key type asymmetric registered Jan 29 11:51:45.938507 kernel: Asymmetric key parser 'x509' registered Jan 29 11:51:45.938518 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 11:51:45.938528 kernel: io scheduler mq-deadline registered Jan 29 11:51:45.938539 kernel: io scheduler kyber registered Jan 29 11:51:45.938550 kernel: io scheduler bfq registered Jan 29 11:51:45.938581 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 11:51:45.938592 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 29 11:51:45.938604 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 29 11:51:45.938614 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 29 11:51:45.938627 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 11:51:45.938638 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 11:51:45.938650 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 11:51:45.938662 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 11:51:45.938674 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 11:51:45.938855 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 29 11:51:45.938872 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 11:51:45.939035 kernel: 
rtc_cmos 00:04: registered as rtc0 Jan 29 11:51:45.939189 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T11:51:45 UTC (1738151505) Jan 29 11:51:45.939340 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 29 11:51:45.939356 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 29 11:51:45.939367 kernel: NET: Registered PF_INET6 protocol family Jan 29 11:51:45.939377 kernel: Segment Routing with IPv6 Jan 29 11:51:45.939393 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 11:51:45.939403 kernel: NET: Registered PF_PACKET protocol family Jan 29 11:51:45.939414 kernel: Key type dns_resolver registered Jan 29 11:51:45.939424 kernel: IPI shorthand broadcast: enabled Jan 29 11:51:45.939436 kernel: sched_clock: Marking stable (674002945, 112844985)->(851201072, -64353142) Jan 29 11:51:45.939447 kernel: registered taskstats version 1 Jan 29 11:51:45.939457 kernel: Loading compiled-in X.509 certificates Jan 29 11:51:45.939468 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 29 11:51:45.939479 kernel: Key type .fscrypt registered Jan 29 11:51:45.939494 kernel: Key type fscrypt-provisioning registered Jan 29 11:51:45.939505 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 11:51:45.939514 kernel: ima: Allocated hash algorithm: sha1 Jan 29 11:51:45.939525 kernel: ima: No architecture policies found Jan 29 11:51:45.939536 kernel: clk: Disabling unused clocks Jan 29 11:51:45.939547 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 29 11:51:45.939557 kernel: Write protecting the kernel read-only data: 36864k Jan 29 11:51:45.939611 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 29 11:51:45.939627 kernel: Run /init as init process Jan 29 11:51:45.939637 kernel: with arguments: Jan 29 11:51:45.939648 kernel: /init Jan 29 11:51:45.939658 kernel: with environment: Jan 29 11:51:45.939669 kernel: HOME=/ Jan 29 11:51:45.939679 kernel: TERM=linux Jan 29 11:51:45.939690 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:51:45.939703 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:51:45.939717 systemd[1]: Detected virtualization kvm. Jan 29 11:51:45.939734 systemd[1]: Detected architecture x86-64. Jan 29 11:51:45.939798 systemd[1]: Running in initrd. Jan 29 11:51:45.939865 systemd[1]: No hostname configured, using default hostname. Jan 29 11:51:45.939899 systemd[1]: Hostname set to . Jan 29 11:51:45.939908 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:51:45.939936 systemd[1]: Queued start job for default target initrd.target. Jan 29 11:51:45.939948 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:51:45.939972 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:51:45.940029 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 11:51:45.940091 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 29 11:51:45.940108 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 11:51:45.940120 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 11:51:45.940139 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 11:51:45.940152 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 11:51:45.940164 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:51:45.940176 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:51:45.940189 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:51:45.940201 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:51:45.940213 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:51:45.940225 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:51:45.940236 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:51:45.940252 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:51:45.940264 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 11:51:45.940276 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:51:45.940287 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:51:45.940298 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:51:45.940308 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:51:45.940320 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:51:45.940332 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:51:45.940348 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:51:45.940360 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:51:45.940372 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:51:45.940383 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:51:45.940396 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:51:45.940408 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:51:45.940420 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:51:45.940432 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:51:45.940444 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:51:45.940461 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:51:45.940505 systemd-journald[192]: Collecting audit messages is disabled. Jan 29 11:51:45.940537 systemd-journald[192]: Journal started Jan 29 11:51:45.940584 systemd-journald[192]: Runtime Journal (/run/log/journal/6259f1cd712645c8bf09cbcf0934bed2) is 6.0M, max 48.4M, 42.3M free. Jan 29 11:51:45.926051 systemd-modules-load[194]: Inserted module 'overlay' Jan 29 11:51:45.960887 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:51:45.960934 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
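
The odd-looking unit names such as dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device come from systemd's path escaping: "/" becomes "-", and characters that would otherwise be ambiguous (like a literal "-") become \xNN byte escapes. A simplified sketch of that rule (the real systemd-escape handles more edge cases, such as leading dots and empty paths):

```python
def systemd_path_escape(path: str) -> str:
    """Roughly what `systemd-escape --path` does."""
    trimmed = path.strip("/")
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch == "_" or (ch == "." and i > 0):
            out.append(ch)
        else:
            out.append("".join(f"\\x{b:02x}" for b in ch.encode()))
    return "".join(out)

print(systemd_path_escape("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
```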
Jan 29 11:51:45.960952 kernel: Bridge firewalling registered Jan 29 11:51:45.953353 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 29 11:51:45.961236 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:51:45.965648 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:51:45.975748 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:51:45.976586 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:51:45.981798 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:51:45.991622 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:51:45.994795 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:51:45.997437 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:51:46.000706 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:51:46.005191 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:51:46.008702 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 11:51:46.010370 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:51:46.019734 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:51:46.027882 dracut-cmdline[225]: dracut-dracut-053 Jan 29 11:51:46.032532 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 11:51:46.066380 systemd-resolved[226]: Positive Trust Anchors: Jan 29 11:51:46.066401 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:51:46.066442 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:51:46.079415 systemd-resolved[226]: Defaulting to hostname 'linux'. Jan 29 11:51:46.081895 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:51:46.084585 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:51:46.145605 kernel: SCSI subsystem initialized Jan 29 11:51:46.157594 kernel: Loading iSCSI transport class v2.0-870. Jan 29 11:51:46.171599 kernel: iscsi: registered transport (tcp) Jan 29 11:51:46.199597 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:51:46.199635 kernel: QLogic iSCSI HBA Driver Jan 29 11:51:46.254109 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
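
The dracut-cmdline hook above echoes the full kernel command line. Splitting it into key=value pairs is enough to recover the dm-verity settings that verity-setup.service consumes later in the boot; a sketch, with the command line abridged from the log:

```python
cmdline = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
           "mount.usr=/dev/mapper/usr "
           "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
           "root=LABEL=ROOT console=ttyS0,115200 "
           "verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681")

params = {}
for tok in cmdline.split():
    key, _, value = tok.partition("=")  # split at the first "=" only,
    params[key] = value                 # so PARTUUID=... survives intact

print(params["verity.usr"])      # the partition carrying the /usr hash tree
print(params["verity.usrhash"])  # the root hash the kernel must reproduce
```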
Jan 29 11:51:46.266765 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 11:51:46.297681 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 11:51:46.297757 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:51:46.298780 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:51:46.344623 kernel: raid6: avx2x4 gen() 28800 MB/s Jan 29 11:51:46.361617 kernel: raid6: avx2x2 gen() 29945 MB/s Jan 29 11:51:46.378755 kernel: raid6: avx2x1 gen() 25935 MB/s Jan 29 11:51:46.378840 kernel: raid6: using algorithm avx2x2 gen() 29945 MB/s Jan 29 11:51:46.396699 kernel: raid6: .... xor() 19921 MB/s, rmw enabled Jan 29 11:51:46.396773 kernel: raid6: using avx2x2 recovery algorithm Jan 29 11:51:46.416609 kernel: xor: automatically using best checksumming function avx Jan 29 11:51:46.567621 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:51:46.584106 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:51:46.593831 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:51:46.613138 systemd-udevd[411]: Using default interface naming scheme 'v255'. Jan 29 11:51:46.619689 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:51:46.634808 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:51:46.655408 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Jan 29 11:51:46.697491 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:51:46.709791 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:51:46.778390 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:51:46.784749 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 11:51:46.812149 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:51:46.813692 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:51:46.818052 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:51:46.825011 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:51:46.833582 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 11:51:46.833740 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 29 11:51:46.863879 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 29 11:51:46.864097 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 11:51:46.864113 kernel: GPT:9289727 != 19775487 Jan 29 11:51:46.864137 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 11:51:46.864151 kernel: GPT:9289727 != 19775487 Jan 29 11:51:46.864165 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 11:51:46.864179 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:51:46.838829 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:51:46.864419 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:51:46.874513 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 11:51:46.874559 kernel: AES CTR mode by8 optimization enabled Jan 29 11:51:46.867846 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
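
The raid6 lines above are a boot-time benchmark, and the kernel simply keeps the fastest gen() implementation. The same selection, replayed from the logged throughputs:

```python
gen_results = {"avx2x4": 28800, "avx2x2": 29945, "avx2x1": 25935}  # MB/s, from the log

best = max(gen_results, key=gen_results.get)
print(f"raid6: using algorithm {best} gen() {gen_results[best]} MB/s")
# raid6: using algorithm avx2x2 gen() 29945 MB/s
```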
Jan 29 11:51:46.867962 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:51:46.872795 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:51:46.874547 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:51:46.878533 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:51:46.882255 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:51:46.891586 kernel: libata version 3.00 loaded. Jan 29 11:51:46.894938 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:51:46.903327 kernel: ahci 0000:00:1f.2: version 3.0 Jan 29 11:51:46.933109 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 29 11:51:46.933130 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 29 11:51:46.933327 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 29 11:51:46.933509 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (463) Jan 29 11:51:46.933524 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (473) Jan 29 11:51:46.933538 kernel: scsi host0: ahci Jan 29 11:51:46.933769 kernel: scsi host1: ahci Jan 29 11:51:46.934397 kernel: scsi host2: ahci Jan 29 11:51:46.934639 kernel: scsi host3: ahci Jan 29 11:51:46.936460 kernel: scsi host4: ahci Jan 29 11:51:46.937865 kernel: scsi host5: ahci Jan 29 11:51:46.938037 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 29 11:51:46.938050 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 29 11:51:46.938060 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 29 11:51:46.938082 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 29 11:51:46.938096 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 29 11:51:46.938107 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 29 11:51:46.934586 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 11:51:46.967740 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 11:51:46.969431 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:51:46.979033 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 11:51:46.981686 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 11:51:46.989681 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:51:47.003986 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:51:47.007473 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:51:47.018848 disk-uuid[567]: Primary Header is updated. Jan 29 11:51:47.018848 disk-uuid[567]: Secondary Entries is updated. Jan 29 11:51:47.018848 disk-uuid[567]: Secondary Header is updated. 
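
The disk-uuid updates above resolve the earlier GPT complaints ("GPT:9289727 != 19775487"): the backup GPT header must live in the disk's last LBA, and, presumably because the image was grown, it no longer did. The check is plain arithmetic:

```python
total_sectors = 19_775_488     # "virtio1: [vda] 19775488 512-byte logical blocks"
expected_backup_lba = total_sectors - 1
found_backup_lba = 9_289_727   # where this image's GPT claimed the backup was

print(expected_backup_lba)                      # 19775487
print(found_backup_lba == expected_backup_lba)  # False -> header gets rewritten
```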
Jan 29 11:51:47.023732 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:51:47.029593 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:51:47.031145 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:51:47.245610 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 29 11:51:47.245714 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 29 11:51:47.246592 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 29 11:51:47.247598 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 29 11:51:47.248589 kernel: ata3.00: applying bridge limits Jan 29 11:51:47.248603 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 29 11:51:47.249603 kernel: ata3.00: configured for UDMA/100 Jan 29 11:51:47.250594 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 29 11:51:47.251918 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 29 11:51:47.252597 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 29 11:51:47.295598 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 29 11:51:47.310455 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 11:51:47.310477 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 29 11:51:48.030249 disk-uuid[569]: The operation has completed successfully. Jan 29 11:51:48.031674 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:51:48.060484 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:51:48.060636 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:51:48.088791 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:51:48.092497 sh[591]: Success Jan 29 11:51:48.105602 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 29 11:51:48.144594 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:51:48.158602 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 11:51:48.162619 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 11:51:48.183170 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 29 11:51:48.183233 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:51:48.183245 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:51:48.185000 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:51:48.185025 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:51:48.190346 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:51:48.191188 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 11:51:48.198758 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:51:48.201262 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 11:51:48.210602 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:51:48.210653 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:51:48.212120 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:51:48.214598 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:51:48.225794 systemd[1]: mnt-oem.mount: Deactivated successfully. 
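
verity-setup.service has just assembled /dev/mapper/usr, the device whose contents the kernel verifies against verity.usrhash as blocks are read. The sketch below shows only the underlying hash-tree idea; the real dm-verity on-disk format adds a superblock, a salt, and a fixed fan-out, and the input data here is made up:

```python
import hashlib

BLOCK = 4096  # dm-verity hashes fixed-size data blocks

def root_hash(data: bytes) -> str:
    level = [hashlib.sha256(data[i:i + BLOCK]).digest()
             for i in range(0, len(data), BLOCK)]
    while len(level) > 1:  # hash concatenated digests until one root remains
        level = [hashlib.sha256(b"".join(level[i:i + 128])).digest()
                 for i in range(0, len(level), 128)]
    return level[0].hex()

print(root_hash(b"\0" * (8 * BLOCK)))  # any modified block changes this value
```

A tampered /usr block would make the recomputed root disagree with the hash from the kernel command line, and the read fails instead of returning bad data.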
Jan 29 11:51:48.227729 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:51:48.238323 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:51:48.243752 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 11:51:48.366385 ignition[681]: Ignition 2.19.0 Jan 29 11:51:48.366396 ignition[681]: Stage: fetch-offline Jan 29 11:51:48.366433 ignition[681]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:51:48.366443 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:51:48.366556 ignition[681]: parsed url from cmdline: "" Jan 29 11:51:48.366574 ignition[681]: no config URL provided Jan 29 11:51:48.366581 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:51:48.366591 ignition[681]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:51:48.366617 ignition[681]: op(1): [started] loading QEMU firmware config module Jan 29 11:51:48.366623 ignition[681]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 29 11:51:48.406370 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:51:48.407552 ignition[681]: op(1): [finished] loading QEMU firmware config module Jan 29 11:51:48.407625 ignition[681]: QEMU firmware config was not found. Ignoring... Jan 29 11:51:48.413768 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:51:48.455489 ignition[681]: parsing config with SHA512: a50f891680486a031f72b848d0e4f5c4fbd1fdb6652429528e3231caa58e42a15bb3054376a4d0f3f470b2ed81ed681abae7b9087d76160ba3703b163f5962d8 Jan 29 11:51:48.457462 systemd-networkd[779]: lo: Link UP Jan 29 11:51:48.457473 systemd-networkd[779]: lo: Gained carrier Jan 29 11:51:48.460386 systemd-networkd[779]: Enumeration completed Jan 29 11:51:48.460945 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:51:48.460950 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:51:48.462394 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:51:48.468385 ignition[681]: fetch-offline: fetch-offline passed Jan 29 11:51:48.462896 systemd-networkd[779]: eth0: Link UP Jan 29 11:51:48.468552 ignition[681]: Ignition finished successfully Jan 29 11:51:48.462901 systemd-networkd[779]: eth0: Gained carrier Jan 29 11:51:48.462909 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:51:48.464011 systemd[1]: Reached target network.target - Network. Jan 29 11:51:48.467420 unknown[681]: fetched base config from "system" Jan 29 11:51:48.467431 unknown[681]: fetched user config from "qemu" Jan 29 11:51:48.471845 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:51:48.474049 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 29 11:51:48.478662 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:51:48.478877 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
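
Ignition fingerprints whatever config it ends up with ("parsing config with SHA512: a50f…" above). The same fingerprinting on a made-up stand-in config, so the digest will of course differ from the log's:

```python
import hashlib
import json

config = json.dumps({"ignition": {"version": "3.4.0"}}).encode()
print(hashlib.sha512(config).hexdigest())
```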
Jan 29 11:51:48.506744 ignition[782]: Ignition 2.19.0 Jan 29 11:51:48.506759 ignition[782]: Stage: kargs Jan 29 11:51:48.506966 ignition[782]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:51:48.506988 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:51:48.510842 ignition[782]: kargs: kargs passed Jan 29 11:51:48.510909 ignition[782]: Ignition finished successfully Jan 29 11:51:48.515333 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 11:51:48.522769 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 11:51:48.540214 ignition[792]: Ignition 2.19.0 Jan 29 11:51:48.540229 ignition[792]: Stage: disks Jan 29 11:51:48.540462 ignition[792]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:51:48.540480 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:51:48.544842 ignition[792]: disks: disks passed Jan 29 11:51:48.544917 ignition[792]: Ignition finished successfully Jan 29 11:51:48.548744 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:51:48.550424 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:51:48.552702 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:51:48.554137 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:51:48.556612 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:51:48.557843 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:51:48.565744 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:51:48.579236 systemd-resolved[226]: Detected conflict on linux IN A 10.0.0.52 Jan 29 11:51:48.579252 systemd-resolved[226]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. Jan 29 11:51:48.585062 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 11:51:48.595490 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:51:48.608694 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:51:48.698599 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 29 11:51:48.699690 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:51:48.700520 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:51:48.710749 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:51:48.712111 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 11:51:48.714184 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 11:51:48.720620 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (811) Jan 29 11:51:48.714241 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:51:48.714273 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
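
Ignition runs as a fixed sequence of stages: this boot logs fetch-offline and kargs above, disks here, and the mount and files stages below. Grepping the journal for "Stage:" lines is a quick way to follow that progression (sample lines copied from this log):

```python
import re

journal = """\
ignition[681]: Stage: fetch-offline
ignition[782]: Stage: kargs
ignition[792]: Stage: disks
ignition[924]: INFO : Stage: mount
ignition[954]: INFO : Stage: files
"""

print(re.findall(r"Stage: (\S+)", journal))
# ['fetch-offline', 'kargs', 'disks', 'mount', 'files']
```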
Jan 29 11:51:48.729560 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:51:48.729605 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:51:48.729620 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:51:48.723116 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 11:51:48.728273 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 11:51:48.733589 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:51:48.736140 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:51:48.772221 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:51:48.777329 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:51:48.781877 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:51:48.786402 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:51:48.877375 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:51:48.885698 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:51:48.889079 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:51:48.893591 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:51:48.917631 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:51:48.931771 ignition[924]: INFO : Ignition 2.19.0 Jan 29 11:51:48.931771 ignition[924]: INFO : Stage: mount Jan 29 11:51:48.933526 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:51:48.933526 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:51:48.933526 ignition[924]: INFO : mount: mount passed Jan 29 11:51:48.933526 ignition[924]: INFO : Ignition finished successfully Jan 29 11:51:48.936860 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:51:48.948656 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:51:49.182895 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:51:49.203877 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:51:49.212303 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (937) Jan 29 11:51:49.212336 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 11:51:49.212348 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:51:49.213894 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:51:49.216588 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:51:49.217901 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 11:51:49.247338 ignition[954]: INFO : Ignition 2.19.0 Jan 29 11:51:49.247338 ignition[954]: INFO : Stage: files Jan 29 11:51:49.249615 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:51:49.249615 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:51:49.249615 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:51:49.253762 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:51:49.253762 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:51:49.253762 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:51:49.253762 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:51:49.253762 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:51:49.253021 unknown[954]: wrote ssh authorized keys file for user: core Jan 29 11:51:49.261859 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:51:49.261859 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 11:51:49.303191 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 11:51:49.535694 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:51:49.535694 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:51:49.539949 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:51:49.539949 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:51:49.539949 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:51:49.539949 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:51:49.539949 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:51:49.539949 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:51:49.539949 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:51:49.539949 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:51:49.539949 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:51:49.539949 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:51:49.539949 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:51:49.539949 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:51:49.539949 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 29 11:51:49.870218 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 29 11:51:50.430781 systemd-networkd[779]: eth0: Gained IPv6LL Jan 29 11:51:50.722907 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:51:50.722907 ignition[954]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 29 11:51:50.726826 ignition[954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:51:50.729095 ignition[954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:51:50.729095 ignition[954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 29 11:51:50.729095 ignition[954]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 29 11:51:50.733420 ignition[954]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:51:50.735417 ignition[954]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:51:50.735417 ignition[954]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 29 11:51:50.738587 ignition[954]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 29 11:51:50.767290 ignition[954]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:51:50.773920 ignition[954]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:51:50.775713 ignition[954]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 29 11:51:50.775713 ignition[954]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:51:50.778593 ignition[954]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:51:50.780059 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:51:50.781851 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:51:50.791635 ignition[954]: INFO : files: files passed Jan 29 11:51:50.792392 ignition[954]: INFO : Ignition finished successfully Jan 29 11:51:50.795947 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 11:51:50.803844 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:51:50.805727 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 29 11:51:50.807540 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:51:50.807717 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:51:50.816897 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory Jan 29 11:51:50.820326 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:51:50.820326 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:51:50.823907 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:51:50.824071 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:51:50.826930 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:51:50.838757 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:51:50.864389 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:51:50.864545 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:51:50.868930 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:51:50.870999 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:51:50.873071 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:51:50.883723 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:51:50.899473 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:51:50.914725 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:51:50.926475 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:51:50.932330 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:51:50.934561 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:51:50.936644 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:51:50.936770 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:51:50.939115 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:51:50.940868 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:51:50.943076 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:51:50.945180 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:51:50.954967 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:51:50.957179 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:51:50.959359 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:51:50.961695 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:51:50.963703 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:51:50.965942 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:51:50.967739 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:51:50.967862 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:51:50.970191 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 29 11:51:50.971668 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:51:50.973770 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:51:50.973921 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:51:50.976162 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:51:50.976275 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:51:50.978775 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:51:50.978928 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:51:50.980760 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:51:50.982608 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:51:50.987728 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:51:50.989636 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:51:50.998724 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:51:51.000722 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:51:51.000847 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:51:51.003159 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:51:51.003253 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:51:51.005057 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:51:51.005182 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:51:51.007160 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:51:51.007267 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:51:51.022730 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:51:51.024390 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:51:51.025531 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:51:51.025662 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:51:51.027817 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:51:51.027955 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:51:51.033048 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:51:51.033161 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:51:51.041850 ignition[1009]: INFO : Ignition 2.19.0 Jan 29 11:51:51.041850 ignition[1009]: INFO : Stage: umount Jan 29 11:51:51.043772 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:51:51.043772 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:51:51.043772 ignition[1009]: INFO : umount: umount passed Jan 29 11:51:51.043772 ignition[1009]: INFO : Ignition finished successfully Jan 29 11:51:51.045660 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:51:51.045842 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:51:51.047379 systemd[1]: Stopped target network.target - Network. Jan 29 11:51:51.048859 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:51:51.048919 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Jan 29 11:51:51.050848 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:51:51.050917 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:51:51.052828 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:51:51.052910 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:51:51.055048 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:51:51.055106 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:51:51.057233 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:51:51.059340 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:51:51.062538 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:51:51.066631 systemd-networkd[779]: eth0: DHCPv6 lease lost Jan 29 11:51:51.069898 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:51:51.070040 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:51:51.071987 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:51:51.072036 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:51:51.081706 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:51:51.083091 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:51:51.083169 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:51:51.085800 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:51:51.088418 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:51:51.088594 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:51:51.096759 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:51:51.096894 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:51:51.099946 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:51:51.099999 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:51:51.103153 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:51:51.103220 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:51:51.107052 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:51:51.108213 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:51:51.111115 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:51:51.112144 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:51:51.115314 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:51:51.116457 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:51:51.118645 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:51:51.118694 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:51:51.121899 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:51:51.122846 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:51:51.125052 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:51:51.125111 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Jan 29 11:51:51.128115 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:51:51.128175 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:51:51.147735 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:51:51.172253 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:51:51.172376 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:51:51.174667 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 11:51:51.174723 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:51:51.176938 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:51:51.176991 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:51:51.179403 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:51:51.179456 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:51:51.182149 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:51:51.182273 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:51:51.381031 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:51:51.381193 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:51:51.382433 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:51:51.384970 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:51:51.385029 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:51:51.396873 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:51:51.404200 systemd[1]: Switching root. Jan 29 11:51:51.437156 systemd-journald[192]: Journal stopped Jan 29 11:51:52.946146 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Jan 29 11:51:52.946213 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:51:52.946231 kernel: SELinux: policy capability open_perms=1 Jan 29 11:51:52.946242 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:51:52.946253 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:51:52.946273 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:51:52.946289 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:51:52.946300 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:51:52.946313 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:51:52.946325 kernel: audit: type=1403 audit(1738151511.993:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:51:52.946337 systemd[1]: Successfully loaded SELinux policy in 40.001ms. Jan 29 11:51:52.946361 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.013ms. Jan 29 11:51:52.946374 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:51:52.946387 systemd[1]: Detected virtualization kvm. Jan 29 11:51:52.946404 systemd[1]: Detected architecture x86-64. 
Jan 29 11:51:52.946417 systemd[1]: Detected first boot. Jan 29 11:51:52.946429 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:51:52.946441 zram_generator::config[1053]: No configuration found. Jan 29 11:51:52.946454 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:51:52.946466 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:51:52.946478 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:51:52.946491 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:51:52.946508 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:51:52.946521 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:51:52.946533 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:51:52.946545 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:51:52.946557 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:51:52.946582 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:51:52.946596 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:51:52.946620 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:51:52.946663 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:51:52.946693 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:51:52.946706 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:51:52.946718 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:51:52.946731 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:51:52.946744 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:51:52.946765 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 11:51:52.946787 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:51:52.946803 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:51:52.946819 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:51:52.946843 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:51:52.946856 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:51:52.946869 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:51:52.946881 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:51:52.946893 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:51:52.946905 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:51:52.946917 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:51:52.946935 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:51:52.946947 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:51:52.946965 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jan 29 11:51:52.946980 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:51:52.946992 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:51:52.947004 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:51:52.947016 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:51:52.947028 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:51:52.947042 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:51:52.947059 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:51:52.947071 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:51:52.947083 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:51:52.947096 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:51:52.947108 systemd[1]: Reached target machines.target - Containers. Jan 29 11:51:52.947120 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:51:52.947132 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:51:52.947145 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:51:52.947157 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:51:52.947174 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:51:52.947186 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:51:52.947198 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:51:52.947216 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:51:52.947228 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:51:52.947241 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:51:52.947255 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 11:51:52.947267 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 11:51:52.947284 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 11:51:52.947296 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 11:51:52.947309 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:51:52.947321 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:51:52.947334 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:51:52.947346 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:51:52.947358 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:51:52.947370 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 11:51:52.947382 systemd[1]: Stopped verity-setup.service. Jan 29 11:51:52.947399 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 29 11:51:52.947411 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:51:52.947423 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:51:52.947435 kernel: fuse: init (API version 7.39) Jan 29 11:51:52.947446 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:51:52.947458 kernel: loop: module loaded Jan 29 11:51:52.947469 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:51:52.947481 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:51:52.947498 kernel: ACPI: bus type drm_connector registered Jan 29 11:51:52.947510 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:51:52.947524 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:51:52.947536 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:51:52.947548 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:51:52.947603 systemd-journald[1123]: Collecting audit messages is disabled. Jan 29 11:51:52.947633 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:51:52.947646 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:51:52.947658 systemd-journald[1123]: Journal started Jan 29 11:51:52.947679 systemd-journald[1123]: Runtime Journal (/run/log/journal/6259f1cd712645c8bf09cbcf0934bed2) is 6.0M, max 48.4M, 42.3M free. Jan 29 11:51:52.675680 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:51:52.699530 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 11:51:52.700195 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 11:51:52.949233 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:51:52.950894 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:51:52.951144 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:51:52.951827 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:51:52.952056 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:51:52.952898 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:51:52.953113 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:51:52.953781 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:51:52.954003 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:51:52.954743 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:51:52.955410 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:51:52.956251 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:51:52.957223 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:51:52.979017 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:51:53.001769 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:51:53.004673 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:51:53.006036 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Jan 29 11:51:53.006073 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:51:53.008415 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:51:53.011824 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:51:53.014879 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:51:53.016432 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:51:53.019744 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:51:53.022135 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:51:53.024249 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:51:53.027807 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:51:53.029806 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:51:53.033861 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:51:53.042141 systemd-journald[1123]: Time spent on flushing to /var/log/journal/6259f1cd712645c8bf09cbcf0934bed2 is 16.720ms for 953 entries. Jan 29 11:51:53.042141 systemd-journald[1123]: System Journal (/var/log/journal/6259f1cd712645c8bf09cbcf0934bed2) is 8.0M, max 195.6M, 187.6M free. Jan 29 11:51:53.440065 systemd-journald[1123]: Received client request to flush runtime journal. Jan 29 11:51:53.440219 kernel: loop0: detected capacity change from 0 to 205544 Jan 29 11:51:53.440257 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:51:53.440287 kernel: loop1: detected capacity change from 0 to 142488 Jan 29 11:51:53.440313 kernel: loop2: detected capacity change from 0 to 140768 Jan 29 11:51:53.440338 kernel: loop3: detected capacity change from 0 to 205544 Jan 29 11:51:53.050353 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:51:53.053482 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:51:53.082704 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:51:53.084470 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:51:53.086079 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:51:53.087829 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:51:53.104323 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:51:53.119107 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Jan 29 11:51:53.119127 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Jan 29 11:51:53.126325 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:51:53.213825 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:51:53.224401 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:51:53.225953 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Jan 29 11:51:53.230611 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:51:53.233433 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:51:53.239632 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 11:51:53.261922 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:51:53.269830 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:51:53.317962 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Jan 29 11:51:53.317977 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Jan 29 11:51:53.323857 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:51:53.443442 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:51:53.447426 kernel: loop4: detected capacity change from 0 to 142488 Jan 29 11:51:53.477833 kernel: loop5: detected capacity change from 0 to 140768 Jan 29 11:51:53.520225 (sd-merge)[1189]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 11:51:53.521078 (sd-merge)[1189]: Merged extensions into '/usr'. Jan 29 11:51:53.530266 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:51:53.530286 systemd[1]: Reloading... Jan 29 11:51:53.580665 zram_generator::config[1223]: No configuration found. Jan 29 11:51:53.740832 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:51:53.797481 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:51:53.797870 systemd[1]: Reloading finished in 266 ms. Jan 29 11:51:53.892876 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:51:53.895063 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:51:53.913943 ldconfig[1162]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:51:53.918959 systemd[1]: Starting ensure-sysext.service... Jan 29 11:51:53.921531 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:51:53.923317 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:51:53.931085 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:51:53.931104 systemd[1]: Reloading... Jan 29 11:51:53.954169 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:51:53.955021 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:51:53.956169 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:51:53.956627 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Jan 29 11:51:53.956809 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Jan 29 11:51:53.960373 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 29 11:51:53.960468 systemd-tmpfiles[1259]: Skipping /boot Jan 29 11:51:54.015383 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:51:54.015604 systemd-tmpfiles[1259]: Skipping /boot Jan 29 11:51:54.031607 zram_generator::config[1284]: No configuration found. Jan 29 11:51:54.203315 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:51:54.253822 systemd[1]: Reloading finished in 322 ms. Jan 29 11:51:54.274657 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:51:54.312198 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 11:51:54.350440 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:51:54.353793 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:51:54.357488 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:51:54.361702 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:51:54.366573 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:51:54.366761 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:51:54.368174 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:51:54.372739 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:51:54.375117 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:51:54.376284 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:51:54.376393 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:51:54.381039 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:51:54.390514 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:51:54.390755 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:51:54.390984 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:51:54.391146 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:51:54.393452 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:51:54.393764 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:51:54.396880 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:51:54.402156 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:51:54.403370 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 29 11:51:54.406981 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:51:54.407181 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:51:54.415919 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:51:54.422999 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:51:54.423245 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:51:54.433815 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:51:54.435600 augenrules[1353]: No rules Jan 29 11:51:54.438418 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:51:54.444671 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:51:54.450029 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:51:54.450317 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:51:54.450514 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:51:54.452408 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:51:54.454471 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 11:51:54.456426 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:51:54.458415 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:51:54.469058 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:51:54.469633 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:51:54.472356 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:51:54.472614 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:51:54.474834 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:51:54.475070 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:51:54.477472 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:51:54.479775 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:51:54.480020 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:51:54.491109 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:51:54.491273 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:51:54.507800 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:51:54.510558 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:51:54.511885 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:51:54.512479 systemd[1]: Finished ensure-sysext.service. 
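The sysext merge logged above is the payoff of the Ignition files stage: systemd-sysext found the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extensions, the last one via the /etc/extensions/kubernetes.raw symlink written earlier, overlaid them onto /usr, and triggered the daemon reloads seen around it. On the running system the merge can be inspected and redone with the stock tooling (a sketch, not taken from this log):

    # Show which extension images are merged and from where:
    systemd-sysext status
    # Re-merge after adding/removing images under /etc/extensions:
    systemd-sysext refresh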
Jan 29 11:51:54.519758 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:51:54.529914 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:51:54.543435 systemd-udevd[1375]: Using default interface naming scheme 'v255'. Jan 29 11:51:54.547747 systemd-resolved[1336]: Positive Trust Anchors: Jan 29 11:51:54.547763 systemd-resolved[1336]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:51:54.547797 systemd-resolved[1336]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:51:54.551989 systemd-resolved[1336]: Defaulting to hostname 'linux'. Jan 29 11:51:54.554261 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:51:54.581428 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:51:54.590331 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:51:54.602795 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:51:54.633411 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:51:54.635037 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:51:54.639147 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 11:51:54.653325 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1387) Jan 29 11:51:54.671739 systemd-networkd[1384]: lo: Link UP Jan 29 11:51:54.671755 systemd-networkd[1384]: lo: Gained carrier Jan 29 11:51:54.673856 systemd-networkd[1384]: Enumeration completed Jan 29 11:51:54.673954 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:51:54.675028 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:51:54.675042 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:51:54.675325 systemd[1]: Reached target network.target - Network. Jan 29 11:51:54.677181 systemd-networkd[1384]: eth0: Link UP Jan 29 11:51:54.677194 systemd-networkd[1384]: eth0: Gained carrier Jan 29 11:51:54.677207 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:51:54.682759 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:51:54.686782 systemd-networkd[1384]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:51:54.687483 systemd-timesyncd[1378]: Network configuration changed, trying to establish connection. Jan 29 11:51:55.326458 systemd-resolved[1336]: Clock change detected. Flushing caches. Jan 29 11:51:55.326953 systemd-timesyncd[1378]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Jan 29 11:51:55.327027 systemd-timesyncd[1378]: Initial clock synchronization to Wed 2025-01-29 11:51:55.326287 UTC. Jan 29 11:51:55.345485 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:51:55.348045 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 29 11:51:55.354242 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:51:55.360013 kernel: ACPI: button: Power Button [PWRF] Jan 29 11:51:55.361009 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:51:55.370234 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 29 11:51:55.370608 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 29 11:51:55.372127 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 29 11:51:55.375836 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 29 11:51:55.385918 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:51:55.479037 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:51:55.485144 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:51:55.555020 kernel: kvm_amd: TSC scaling supported Jan 29 11:51:55.555135 kernel: kvm_amd: Nested Virtualization enabled Jan 29 11:51:55.555191 kernel: kvm_amd: Nested Paging enabled Jan 29 11:51:55.556239 kernel: kvm_amd: LBR virtualization supported Jan 29 11:51:55.556281 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 29 11:51:55.556909 kernel: kvm_amd: Virtual GIF supported Jan 29 11:51:55.582933 kernel: EDAC MC: Ver: 3.0.0 Jan 29 11:51:55.625802 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:51:55.636992 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:51:55.638846 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:51:55.649146 lvm[1424]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:51:55.683576 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:51:55.685359 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:51:55.686665 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:51:55.688062 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:51:55.689520 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:51:55.691253 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:51:55.692739 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:51:55.694095 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:51:55.695462 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:51:55.695501 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:51:55.696541 systemd[1]: Reached target timers.target - Timer Units. 
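The jump in the timestamps above, from 11:51:54.68 to 11:51:55.32, is not a stall: systemd-timesyncd reached 10.0.0.1:123 and stepped the clock, and systemd-resolved flushed its caches in response. On a booted system the same state is visible with the stock systemd tooling (a sketch):

    # Server, poll interval, and offset of the last NTP exchange:
    timedatectl timesync-status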
Jan 29 11:51:55.698245 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:51:55.702031 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:51:55.717056 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:51:55.721102 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:51:55.723476 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:51:55.724983 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:51:55.726292 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:51:55.728120 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:51:55.728255 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:51:55.742055 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:51:55.745076 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:51:55.748775 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:51:55.751279 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:51:55.755193 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:51:55.756463 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:51:55.760014 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:51:55.765951 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:51:55.770984 jq[1432]: false Jan 29 11:51:55.771663 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:51:55.777022 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:51:55.782884 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:51:55.784488 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:51:55.785132 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:51:55.790099 extend-filesystems[1433]: Found loop3 Jan 29 11:51:55.790099 extend-filesystems[1433]: Found loop4 Jan 29 11:51:55.790099 extend-filesystems[1433]: Found loop5 Jan 29 11:51:55.790099 extend-filesystems[1433]: Found sr0 Jan 29 11:51:55.790099 extend-filesystems[1433]: Found vda Jan 29 11:51:55.790099 extend-filesystems[1433]: Found vda1 Jan 29 11:51:55.790099 extend-filesystems[1433]: Found vda2 Jan 29 11:51:55.790099 extend-filesystems[1433]: Found vda3 Jan 29 11:51:55.790099 extend-filesystems[1433]: Found usr Jan 29 11:51:55.790099 extend-filesystems[1433]: Found vda4 Jan 29 11:51:55.790099 extend-filesystems[1433]: Found vda6 Jan 29 11:51:55.790099 extend-filesystems[1433]: Found vda7 Jan 29 11:51:55.790099 extend-filesystems[1433]: Found vda9 Jan 29 11:51:55.790099 extend-filesystems[1433]: Checking size of /dev/vda9 Jan 29 11:51:55.837172 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 11:51:55.789992 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 29 11:51:55.837321 extend-filesystems[1433]: Resized partition /dev/vda9 Jan 29 11:51:55.806762 dbus-daemon[1431]: [system] SELinux support is enabled Jan 29 11:51:55.792325 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:51:55.840022 extend-filesystems[1452]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:51:55.794957 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:51:55.847218 update_engine[1442]: I20250129 11:51:55.822743 1442 main.cc:92] Flatcar Update Engine starting Jan 29 11:51:55.847218 update_engine[1442]: I20250129 11:51:55.827107 1442 update_check_scheduler.cc:74] Next update check in 11m6s Jan 29 11:51:55.804590 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:51:55.848091 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1386) Jan 29 11:51:55.806955 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:51:55.807150 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:51:55.848429 jq[1447]: true Jan 29 11:51:55.833746 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:51:55.834182 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:51:55.837600 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:51:55.837834 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:51:55.854703 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:51:55.854754 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:51:55.856886 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:51:55.856904 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:51:55.858986 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:51:55.868005 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 11:51:55.869804 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:51:55.876364 (ntainerd)[1454]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:51:55.896224 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:51:55.904218 jq[1459]: true Jan 29 11:51:55.904397 extend-filesystems[1452]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:51:55.904397 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 11:51:55.904397 extend-filesystems[1452]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 11:51:55.912896 extend-filesystems[1433]: Resized filesystem in /dev/vda9 Jan 29 11:51:55.904564 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:51:55.904858 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
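extend-filesystems has just grown the root filesystem online from 553472 to 1864699 4 KiB blocks (roughly 2.1 GiB to 7.1 GiB), so the small ROOT filesystem shipped in the image now fills the disk. The manual equivalent, as a sketch (device taken from the log; resize2fs grows ext4 in place while mounted):

    resize2fs /dev/vda9
    df -h /   # confirm the new size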
Jan 29 11:51:55.917193 tar[1453]: linux-amd64/helm Jan 29 11:51:55.943388 systemd-logind[1439]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 11:51:55.943414 systemd-logind[1439]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 11:51:55.944319 systemd-logind[1439]: New seat seat0. Jan 29 11:51:55.946753 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:51:55.975510 locksmithd[1466]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:51:55.981494 bash[1492]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:51:55.984438 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:51:55.987216 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 11:51:56.028133 sshd_keygen[1471]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:51:56.054152 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:51:56.061112 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:51:56.065664 systemd[1]: Started sshd@0-10.0.0.52:22-10.0.0.1:39790.service - OpenSSH per-connection server daemon (10.0.0.1:39790). Jan 29 11:51:56.072101 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:51:56.072358 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:51:56.087278 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:51:56.106098 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:51:56.117260 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:51:56.121814 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 11:51:56.123448 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:51:56.134538 containerd[1454]: time="2025-01-29T11:51:56.134400124Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 11:51:56.143750 sshd[1508]: Accepted publickey for core from 10.0.0.1 port 39790 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:51:56.145230 sshd[1508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:51:56.154281 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:51:56.157474 containerd[1454]: time="2025-01-29T11:51:56.157143248Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:51:56.161710 containerd[1454]: time="2025-01-29T11:51:56.160070657Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:51:56.161710 containerd[1454]: time="2025-01-29T11:51:56.160138804Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:51:56.161710 containerd[1454]: time="2025-01-29T11:51:56.160167398Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:51:56.161710 containerd[1454]: time="2025-01-29T11:51:56.160398812Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 29 11:51:56.161710 containerd[1454]: time="2025-01-29T11:51:56.160419040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:51:56.161710 containerd[1454]: time="2025-01-29T11:51:56.160488861Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:51:56.161710 containerd[1454]: time="2025-01-29T11:51:56.160500973Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:51:56.161710 containerd[1454]: time="2025-01-29T11:51:56.160713983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:51:56.161710 containerd[1454]: time="2025-01-29T11:51:56.160728821Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:51:56.161710 containerd[1454]: time="2025-01-29T11:51:56.160741454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:51:56.161710 containerd[1454]: time="2025-01-29T11:51:56.160751613Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:51:56.162003 containerd[1454]: time="2025-01-29T11:51:56.160864385Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:51:56.162003 containerd[1454]: time="2025-01-29T11:51:56.161156743Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:51:56.162003 containerd[1454]: time="2025-01-29T11:51:56.161302146Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:51:56.162003 containerd[1454]: time="2025-01-29T11:51:56.161319979Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:51:56.162003 containerd[1454]: time="2025-01-29T11:51:56.161444543Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:51:56.162003 containerd[1454]: time="2025-01-29T11:51:56.161518772Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:51:56.166110 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:51:56.169308 systemd-logind[1439]: New session 1 of user core. Jan 29 11:51:56.218795 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:51:56.239162 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:51:56.267068 (systemd)[1524]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:51:56.323116 containerd[1454]: time="2025-01-29T11:51:56.322947350Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Jan 29 11:51:56.323116 containerd[1454]: time="2025-01-29T11:51:56.323083335Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:51:56.323116 containerd[1454]: time="2025-01-29T11:51:56.323111809Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:51:56.323261 containerd[1454]: time="2025-01-29T11:51:56.323130414Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:51:56.323261 containerd[1454]: time="2025-01-29T11:51:56.323145913Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:51:56.323463 containerd[1454]: time="2025-01-29T11:51:56.323410018Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:51:56.323703 containerd[1454]: time="2025-01-29T11:51:56.323648866Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:51:56.323834 containerd[1454]: time="2025-01-29T11:51:56.323774030Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:51:56.323834 containerd[1454]: time="2025-01-29T11:51:56.323814777Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:51:56.323834 containerd[1454]: time="2025-01-29T11:51:56.323829053Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:51:56.323949 containerd[1454]: time="2025-01-29T11:51:56.323842148Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:51:56.323949 containerd[1454]: time="2025-01-29T11:51:56.323855172Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:51:56.323949 containerd[1454]: time="2025-01-29T11:51:56.323867195Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:51:56.323949 containerd[1454]: time="2025-01-29T11:51:56.323881061Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:51:56.323949 containerd[1454]: time="2025-01-29T11:51:56.323896390Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:51:56.323949 containerd[1454]: time="2025-01-29T11:51:56.323924072Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:51:56.323949 containerd[1454]: time="2025-01-29T11:51:56.323936154Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:51:56.323949 containerd[1454]: time="2025-01-29T11:51:56.323951082Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:51:56.324089 containerd[1454]: time="2025-01-29T11:51:56.323978894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:51:56.324089 containerd[1454]: time="2025-01-29T11:51:56.323999223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jan 29 11:51:56.324089 containerd[1454]: time="2025-01-29T11:51:56.324015834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:51:56.324089 containerd[1454]: time="2025-01-29T11:51:56.324033146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:51:56.324089 containerd[1454]: time="2025-01-29T11:51:56.324046611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:51:56.324089 containerd[1454]: time="2025-01-29T11:51:56.324059345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:51:56.324089 containerd[1454]: time="2025-01-29T11:51:56.324071147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:51:56.324089 containerd[1454]: time="2025-01-29T11:51:56.324083511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:51:56.324297 containerd[1454]: time="2025-01-29T11:51:56.324096254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:51:56.324297 containerd[1454]: time="2025-01-29T11:51:56.324110571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:51:56.324297 containerd[1454]: time="2025-01-29T11:51:56.324123075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:51:56.324297 containerd[1454]: time="2025-01-29T11:51:56.324134777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:51:56.324297 containerd[1454]: time="2025-01-29T11:51:56.324146499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:51:56.324297 containerd[1454]: time="2025-01-29T11:51:56.324163450Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:51:56.324297 containerd[1454]: time="2025-01-29T11:51:56.324187776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:51:56.324297 containerd[1454]: time="2025-01-29T11:51:56.324204598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:51:56.324297 containerd[1454]: time="2025-01-29T11:51:56.324215688Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:51:56.324297 containerd[1454]: time="2025-01-29T11:51:56.324264901Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:51:56.324297 containerd[1454]: time="2025-01-29T11:51:56.324287503Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:51:56.324297 containerd[1454]: time="2025-01-29T11:51:56.324303313Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:51:56.324510 containerd[1454]: time="2025-01-29T11:51:56.324319593Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:51:56.324510 containerd[1454]: time="2025-01-29T11:51:56.324333028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:51:56.324510 containerd[1454]: time="2025-01-29T11:51:56.324345883Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:51:56.324510 containerd[1454]: time="2025-01-29T11:51:56.324356372Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:51:56.324510 containerd[1454]: time="2025-01-29T11:51:56.324367523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 11:51:56.324686 containerd[1454]: time="2025-01-29T11:51:56.324620247Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:51:56.324686 containerd[1454]: time="2025-01-29T11:51:56.324679187Z" level=info msg="Connect containerd service" Jan 29 11:51:56.324918 containerd[1454]: time="2025-01-29T11:51:56.324721316Z" level=info msg="using 
legacy CRI server" Jan 29 11:51:56.324918 containerd[1454]: time="2025-01-29T11:51:56.324728881Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:51:56.324918 containerd[1454]: time="2025-01-29T11:51:56.324865667Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:51:56.325771 containerd[1454]: time="2025-01-29T11:51:56.325714919Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:51:56.326041 containerd[1454]: time="2025-01-29T11:51:56.325933279Z" level=info msg="Start subscribing containerd event" Jan 29 11:51:56.326230 containerd[1454]: time="2025-01-29T11:51:56.326108057Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:51:56.326230 containerd[1454]: time="2025-01-29T11:51:56.326184149Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:51:56.326334 containerd[1454]: time="2025-01-29T11:51:56.326319203Z" level=info msg="Start recovering state" Jan 29 11:51:56.326712 containerd[1454]: time="2025-01-29T11:51:56.326595280Z" level=info msg="Start event monitor" Jan 29 11:51:56.326712 containerd[1454]: time="2025-01-29T11:51:56.326652117Z" level=info msg="Start snapshots syncer" Jan 29 11:51:56.326712 containerd[1454]: time="2025-01-29T11:51:56.326668217Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:51:56.326712 containerd[1454]: time="2025-01-29T11:51:56.326680790Z" level=info msg="Start streaming server" Jan 29 11:51:56.329006 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:51:56.329637 containerd[1454]: time="2025-01-29T11:51:56.329056204Z" level=info msg="containerd successfully booted in 0.195788s" Jan 29 11:51:56.334262 tar[1453]: linux-amd64/LICENSE Jan 29 11:51:56.334405 tar[1453]: linux-amd64/README.md Jan 29 11:51:56.351392 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:51:56.384213 systemd[1524]: Queued start job for default target default.target. Jan 29 11:51:56.395175 systemd[1524]: Created slice app.slice - User Application Slice. Jan 29 11:51:56.395203 systemd[1524]: Reached target paths.target - Paths. Jan 29 11:51:56.395218 systemd[1524]: Reached target timers.target - Timers. Jan 29 11:51:56.396911 systemd[1524]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:51:56.409013 systemd[1524]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:51:56.409155 systemd[1524]: Reached target sockets.target - Sockets. Jan 29 11:51:56.409175 systemd[1524]: Reached target basic.target - Basic System. Jan 29 11:51:56.409216 systemd[1524]: Reached target default.target - Main User Target. Jan 29 11:51:56.409252 systemd[1524]: Startup finished in 134ms. Jan 29 11:51:56.409640 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:51:56.412298 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:51:56.480797 systemd[1]: Started sshd@1-10.0.0.52:22-10.0.0.1:39792.service - OpenSSH per-connection server daemon (10.0.0.1:39792). Jan 29 11:51:56.509950 systemd-networkd[1384]: eth0: Gained IPv6LL Jan 29 11:51:56.514348 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Jan 29 11:51:56.516370 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:51:56.520406 sshd[1538]: Accepted publickey for core from 10.0.0.1 port 39792 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:51:56.522567 sshd[1538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:51:56.528228 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 11:51:56.531710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:51:56.534523 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:51:56.548334 systemd-logind[1439]: New session 2 of user core. Jan 29 11:51:56.550922 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:51:56.561609 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:51:56.561962 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 11:51:56.563686 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:51:56.565593 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:51:56.610986 sshd[1538]: pam_unix(sshd:session): session closed for user core Jan 29 11:51:56.621969 systemd[1]: sshd@1-10.0.0.52:22-10.0.0.1:39792.service: Deactivated successfully. Jan 29 11:51:56.623845 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:51:56.625581 systemd-logind[1439]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:51:56.635043 systemd[1]: Started sshd@2-10.0.0.52:22-10.0.0.1:39800.service - OpenSSH per-connection server daemon (10.0.0.1:39800). Jan 29 11:51:56.637627 systemd-logind[1439]: Removed session 2. Jan 29 11:51:56.668521 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 39800 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:51:56.670309 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:51:56.674481 systemd-logind[1439]: New session 3 of user core. Jan 29 11:51:56.687966 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:51:56.829360 sshd[1562]: pam_unix(sshd:session): session closed for user core Jan 29 11:51:56.833968 systemd[1]: sshd@2-10.0.0.52:22-10.0.0.1:39800.service: Deactivated successfully. Jan 29 11:51:56.836010 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:51:56.836635 systemd-logind[1439]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:51:56.837601 systemd-logind[1439]: Removed session 3. Jan 29 11:51:57.707570 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:51:57.709549 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:51:57.710993 systemd[1]: Startup finished in 825ms (kernel) + 6.278s (initrd) + 5.118s (userspace) = 12.223s. 
Jan 29 11:51:57.733289 (kubelet)[1573]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:51:58.545046 kubelet[1573]: E0129 11:51:58.544978 1573 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:51:58.549378 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:51:58.549643 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:51:58.550110 systemd[1]: kubelet.service: Consumed 1.906s CPU time. Jan 29 11:52:06.841255 systemd[1]: Started sshd@3-10.0.0.52:22-10.0.0.1:42246.service - OpenSSH per-connection server daemon (10.0.0.1:42246). Jan 29 11:52:06.876873 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 42246 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:52:06.878683 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:52:06.883232 systemd-logind[1439]: New session 4 of user core. Jan 29 11:52:06.898953 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:52:06.953433 sshd[1586]: pam_unix(sshd:session): session closed for user core Jan 29 11:52:06.972694 systemd[1]: sshd@3-10.0.0.52:22-10.0.0.1:42246.service: Deactivated successfully. Jan 29 11:52:06.974848 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:52:06.976604 systemd-logind[1439]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:52:06.987207 systemd[1]: Started sshd@4-10.0.0.52:22-10.0.0.1:42258.service - OpenSSH per-connection server daemon (10.0.0.1:42258). Jan 29 11:52:06.988238 systemd-logind[1439]: Removed session 4. Jan 29 11:52:07.015109 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 42258 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:52:07.016513 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:52:07.020513 systemd-logind[1439]: New session 5 of user core. Jan 29 11:52:07.037952 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:52:07.088401 sshd[1593]: pam_unix(sshd:session): session closed for user core Jan 29 11:52:07.099775 systemd[1]: sshd@4-10.0.0.52:22-10.0.0.1:42258.service: Deactivated successfully. Jan 29 11:52:07.101796 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:52:07.103532 systemd-logind[1439]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:52:07.105134 systemd[1]: Started sshd@5-10.0.0.52:22-10.0.0.1:42264.service - OpenSSH per-connection server daemon (10.0.0.1:42264). Jan 29 11:52:07.106013 systemd-logind[1439]: Removed session 5. Jan 29 11:52:07.146560 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 42264 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:52:07.148183 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:52:07.152363 systemd-logind[1439]: New session 6 of user core. Jan 29 11:52:07.169914 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 29 11:52:07.225928 sshd[1600]: pam_unix(sshd:session): session closed for user core Jan 29 11:52:07.248705 systemd[1]: sshd@5-10.0.0.52:22-10.0.0.1:42264.service: Deactivated successfully. Jan 29 11:52:07.250760 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:52:07.252625 systemd-logind[1439]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:52:07.270030 systemd[1]: Started sshd@6-10.0.0.52:22-10.0.0.1:42268.service - OpenSSH per-connection server daemon (10.0.0.1:42268). Jan 29 11:52:07.271005 systemd-logind[1439]: Removed session 6. Jan 29 11:52:07.299014 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 42268 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:52:07.300998 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:52:07.305441 systemd-logind[1439]: New session 7 of user core. Jan 29 11:52:07.319015 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:52:07.377215 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:52:07.377577 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:52:07.395492 sudo[1610]: pam_unix(sudo:session): session closed for user root Jan 29 11:52:07.397491 sshd[1607]: pam_unix(sshd:session): session closed for user core Jan 29 11:52:07.408695 systemd[1]: sshd@6-10.0.0.52:22-10.0.0.1:42268.service: Deactivated successfully. Jan 29 11:52:07.410541 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:52:07.412177 systemd-logind[1439]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:52:07.422392 systemd[1]: Started sshd@7-10.0.0.52:22-10.0.0.1:42282.service - OpenSSH per-connection server daemon (10.0.0.1:42282). Jan 29 11:52:07.423640 systemd-logind[1439]: Removed session 7. Jan 29 11:52:07.455428 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 42282 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:52:07.457239 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:52:07.461380 systemd-logind[1439]: New session 8 of user core. Jan 29 11:52:07.470927 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:52:07.528598 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:52:07.529102 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:52:07.533820 sudo[1619]: pam_unix(sudo:session): session closed for user root Jan 29 11:52:07.541375 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 11:52:07.541755 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:52:07.573104 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 11:52:07.574852 auditctl[1622]: No rules Jan 29 11:52:07.576295 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:52:07.576592 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 11:52:07.578648 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 11:52:07.611339 augenrules[1640]: No rules Jan 29 11:52:07.613412 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 29 11:52:07.614800 sudo[1618]: pam_unix(sudo:session): session closed for user root Jan 29 11:52:07.616992 sshd[1615]: pam_unix(sshd:session): session closed for user core Jan 29 11:52:07.625339 systemd[1]: sshd@7-10.0.0.52:22-10.0.0.1:42282.service: Deactivated successfully. Jan 29 11:52:07.627847 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:52:07.629604 systemd-logind[1439]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:52:07.648195 systemd[1]: Started sshd@8-10.0.0.52:22-10.0.0.1:42294.service - OpenSSH per-connection server daemon (10.0.0.1:42294). Jan 29 11:52:07.649305 systemd-logind[1439]: Removed session 8. Jan 29 11:52:07.680313 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 42294 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:52:07.682088 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:52:07.686776 systemd-logind[1439]: New session 9 of user core. Jan 29 11:52:07.703941 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:52:07.758090 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:52:07.758431 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:52:08.070097 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:52:08.070251 (dockerd)[1669]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:52:08.563319 dockerd[1669]: time="2025-01-29T11:52:08.563153203Z" level=info msg="Starting up" Jan 29 11:52:08.567501 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:52:08.572960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:52:08.935663 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:52:08.940886 (kubelet)[1701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:52:09.050345 kubelet[1701]: E0129 11:52:09.050186 1701 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:52:09.058564 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:52:09.058867 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:52:09.435927 dockerd[1669]: time="2025-01-29T11:52:09.435703455Z" level=info msg="Loading containers: start." Jan 29 11:52:09.556821 kernel: Initializing XFRM netlink socket Jan 29 11:52:09.638083 systemd-networkd[1384]: docker0: Link UP Jan 29 11:52:09.660576 dockerd[1669]: time="2025-01-29T11:52:09.660529834Z" level=info msg="Loading containers: done." Jan 29 11:52:09.676734 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3309127272-merged.mount: Deactivated successfully. 
Jan 29 11:52:09.679652 dockerd[1669]: time="2025-01-29T11:52:09.679603287Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:52:09.679732 dockerd[1669]: time="2025-01-29T11:52:09.679718292Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 11:52:09.679892 dockerd[1669]: time="2025-01-29T11:52:09.679867442Z" level=info msg="Daemon has completed initialization" Jan 29 11:52:09.723571 dockerd[1669]: time="2025-01-29T11:52:09.723391479Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:52:09.723648 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:52:10.625844 containerd[1454]: time="2025-01-29T11:52:10.625758898Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 29 11:52:11.499025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2610575229.mount: Deactivated successfully. Jan 29 11:52:13.054923 containerd[1454]: time="2025-01-29T11:52:13.054835737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:13.056614 containerd[1454]: time="2025-01-29T11:52:13.056565770Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721" Jan 29 11:52:13.064729 containerd[1454]: time="2025-01-29T11:52:13.064656661Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:13.106350 containerd[1454]: time="2025-01-29T11:52:13.106255268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:13.107931 containerd[1454]: time="2025-01-29T11:52:13.107863674Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 2.482036388s" Jan 29 11:52:13.107931 containerd[1454]: time="2025-01-29T11:52:13.107903489Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 29 11:52:13.111187 containerd[1454]: time="2025-01-29T11:52:13.111146579Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 29 11:52:14.591072 containerd[1454]: time="2025-01-29T11:52:14.591001297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:14.591932 containerd[1454]: time="2025-01-29T11:52:14.591894561Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143" Jan 29 11:52:14.593264 containerd[1454]: time="2025-01-29T11:52:14.593218093Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:14.596892 containerd[1454]: time="2025-01-29T11:52:14.596818975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:14.598431 containerd[1454]: time="2025-01-29T11:52:14.598378058Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.48719459s" Jan 29 11:52:14.598431 containerd[1454]: time="2025-01-29T11:52:14.598428453Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 29 11:52:14.599175 containerd[1454]: time="2025-01-29T11:52:14.599086817Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 29 11:52:16.307814 containerd[1454]: time="2025-01-29T11:52:16.307741534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:16.308677 containerd[1454]: time="2025-01-29T11:52:16.308603831Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053" Jan 29 11:52:16.309820 containerd[1454]: time="2025-01-29T11:52:16.309760750Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:16.312375 containerd[1454]: time="2025-01-29T11:52:16.312328184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:16.313275 containerd[1454]: time="2025-01-29T11:52:16.313233672Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.714110627s" Jan 29 11:52:16.313343 containerd[1454]: time="2025-01-29T11:52:16.313271683Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 29 11:52:16.313753 containerd[1454]: time="2025-01-29T11:52:16.313725965Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 11:52:17.592120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3198141538.mount: Deactivated successfully. 
Jan 29 11:52:18.718119 containerd[1454]: time="2025-01-29T11:52:18.718035966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:18.725464 containerd[1454]: time="2025-01-29T11:52:18.725408609Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 29 11:52:18.726947 containerd[1454]: time="2025-01-29T11:52:18.726904193Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:18.729150 containerd[1454]: time="2025-01-29T11:52:18.729108927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:18.729639 containerd[1454]: time="2025-01-29T11:52:18.729607342Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 2.415759829s" Jan 29 11:52:18.729639 containerd[1454]: time="2025-01-29T11:52:18.729638731Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 11:52:18.730226 containerd[1454]: time="2025-01-29T11:52:18.730184504Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 11:52:19.176844 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 11:52:19.182031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:52:19.189449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3428018829.mount: Deactivated successfully. Jan 29 11:52:19.533891 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:52:19.538326 (kubelet)[1917]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:52:19.844862 kubelet[1917]: E0129 11:52:19.843590 1917 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:52:19.848098 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:52:19.848344 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 29 11:52:21.700055 containerd[1454]: time="2025-01-29T11:52:21.699946387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:21.701797 containerd[1454]: time="2025-01-29T11:52:21.701726214Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 29 11:52:21.703409 containerd[1454]: time="2025-01-29T11:52:21.703360268Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:21.708323 containerd[1454]: time="2025-01-29T11:52:21.708249225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:21.709483 containerd[1454]: time="2025-01-29T11:52:21.709434878Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.979212032s" Jan 29 11:52:21.709483 containerd[1454]: time="2025-01-29T11:52:21.709474963Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 11:52:21.710132 containerd[1454]: time="2025-01-29T11:52:21.710096057Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 11:52:22.272771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2219894837.mount: Deactivated successfully. 
Jan 29 11:52:22.280496 containerd[1454]: time="2025-01-29T11:52:22.280437658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:22.281335 containerd[1454]: time="2025-01-29T11:52:22.281270680Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 29 11:52:22.282513 containerd[1454]: time="2025-01-29T11:52:22.282481310Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:22.285041 containerd[1454]: time="2025-01-29T11:52:22.285000303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:22.285963 containerd[1454]: time="2025-01-29T11:52:22.285903186Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 575.774548ms" Jan 29 11:52:22.285963 containerd[1454]: time="2025-01-29T11:52:22.285939664Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 29 11:52:22.286526 containerd[1454]: time="2025-01-29T11:52:22.286461853Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 29 11:52:23.220903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1264949941.mount: Deactivated successfully. Jan 29 11:52:25.465005 containerd[1454]: time="2025-01-29T11:52:25.464924219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:25.465988 containerd[1454]: time="2025-01-29T11:52:25.465932139Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 29 11:52:25.467355 containerd[1454]: time="2025-01-29T11:52:25.467300715Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:25.470569 containerd[1454]: time="2025-01-29T11:52:25.470529098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:52:25.471841 containerd[1454]: time="2025-01-29T11:52:25.471803718Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.185291821s" Jan 29 11:52:25.471905 containerd[1454]: time="2025-01-29T11:52:25.471840898Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 29 11:52:27.991351 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:52:28.004172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:52:28.033076 systemd[1]: Reloading requested from client PID 2054 ('systemctl') (unit session-9.scope)... Jan 29 11:52:28.033106 systemd[1]: Reloading... Jan 29 11:52:28.121838 zram_generator::config[2096]: No configuration found. Jan 29 11:52:28.348208 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:52:28.432989 systemd[1]: Reloading finished in 399 ms. Jan 29 11:52:28.490282 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:52:28.490383 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:52:28.490698 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:52:28.493939 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:52:28.671329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:52:28.685361 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:52:28.752877 kubelet[2142]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:52:28.752877 kubelet[2142]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:52:28.752877 kubelet[2142]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:52:28.754207 kubelet[2142]: I0129 11:52:28.754150 2142 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:52:29.041721 kubelet[2142]: I0129 11:52:29.041591 2142 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:52:29.041721 kubelet[2142]: I0129 11:52:29.041630 2142 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:52:29.041938 kubelet[2142]: I0129 11:52:29.041902 2142 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:52:29.066820 kubelet[2142]: I0129 11:52:29.066533 2142 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:52:29.069956 kubelet[2142]: E0129 11:52:29.068143 2142 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:52:29.186705 kubelet[2142]: E0129 11:52:29.186659 2142 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:52:29.186705 kubelet[2142]: I0129 11:52:29.186695 2142 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:52:29.194036 kubelet[2142]: I0129 11:52:29.193982 2142 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:52:29.194246 kubelet[2142]: I0129 11:52:29.194224 2142 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:52:29.194493 kubelet[2142]: I0129 11:52:29.194439 2142 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:52:29.194733 kubelet[2142]: I0129 11:52:29.194488 2142 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:52:29.194867 kubelet[2142]: I0129 11:52:29.194756 2142 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:52:29.194867 kubelet[2142]: I0129 11:52:29.194767 2142 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:52:29.194974 kubelet[2142]: I0129 11:52:29.194957 2142 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:52:29.197104 kubelet[2142]: I0129 11:52:29.197056 2142 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:52:29.197104 kubelet[2142]: I0129 11:52:29.197090 2142 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:52:29.197315 kubelet[2142]: I0129 11:52:29.197154 2142 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:52:29.197315 kubelet[2142]: I0129 11:52:29.197183 2142 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:52:29.201867 kubelet[2142]: W0129 11:52:29.201741 2142 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jan 29 11:52:29.201867 kubelet[2142]: E0129 11:52:29.201827 2142 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:52:29.202901 kubelet[2142]: I0129 11:52:29.202830 2142 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 11:52:29.203906 kubelet[2142]: W0129 11:52:29.203813 2142 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jan 29 11:52:29.203906 kubelet[2142]: E0129 11:52:29.203882 2142 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:52:29.204810 kubelet[2142]: I0129 11:52:29.204764 2142 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:52:29.204917 kubelet[2142]: W0129 11:52:29.204895 2142 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:52:29.205721 kubelet[2142]: I0129 11:52:29.205699 2142 server.go:1269] "Started kubelet" Jan 29 11:52:29.207620 kubelet[2142]: I0129 11:52:29.207549 2142 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:52:29.208049 kubelet[2142]: I0129 11:52:29.208025 2142 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:52:29.208130 kubelet[2142]: I0129 11:52:29.208104 2142 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:52:29.209107 kubelet[2142]: I0129 11:52:29.208464 2142 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:52:29.209222 kubelet[2142]: I0129 11:52:29.209120 2142 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:52:29.210113 kubelet[2142]: I0129 11:52:29.210083 2142 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:52:29.211336 kubelet[2142]: I0129 11:52:29.211302 2142 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:52:29.214819 kubelet[2142]: E0129 11:52:29.214333 2142 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:52:29.214819 kubelet[2142]: I0129 11:52:29.214606 2142 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:52:29.214819 kubelet[2142]: I0129 11:52:29.214716 2142 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:52:29.215019 kubelet[2142]: I0129 11:52:29.214984 2142 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:52:29.215128 kubelet[2142]: I0129 11:52:29.215102 2142 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:52:29.217477 kubelet[2142]: W0129 11:52:29.217420 2142 reflector.go:561] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.CSIDriver: Get "https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jan 29 11:52:29.217522 kubelet[2142]: E0129 11:52:29.217484 2142 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:52:29.217581 kubelet[2142]: E0129 11:52:29.217546 2142 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="200ms" Jan 29 11:52:29.221560 kubelet[2142]: E0129 11:52:29.221527 2142 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:52:29.221823 kubelet[2142]: I0129 11:52:29.221766 2142 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:52:29.224082 kubelet[2142]: E0129 11:52:29.222034 2142 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.52:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.52:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f279ff832c53e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:52:29.20566099 +0000 UTC m=+0.513390287,LastTimestamp:2025-01-29 11:52:29.20566099 +0000 UTC m=+0.513390287,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:52:29.232806 kubelet[2142]: I0129 11:52:29.232745 2142 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:52:29.235913 kubelet[2142]: I0129 11:52:29.235273 2142 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:52:29.235913 kubelet[2142]: I0129 11:52:29.235331 2142 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:52:29.235913 kubelet[2142]: I0129 11:52:29.235360 2142 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:52:29.235913 kubelet[2142]: E0129 11:52:29.235412 2142 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:52:29.237077 kubelet[2142]: W0129 11:52:29.236535 2142 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jan 29 11:52:29.237077 kubelet[2142]: E0129 11:52:29.236570 2142 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:52:29.261516 kubelet[2142]: I0129 11:52:29.261482 2142 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:52:29.261516 kubelet[2142]: I0129 11:52:29.261502 2142 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:52:29.261516 kubelet[2142]: I0129 11:52:29.261525 2142 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:52:29.314925 kubelet[2142]: E0129 11:52:29.314751 2142 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:52:29.336272 kubelet[2142]: E0129 11:52:29.336204 2142 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:52:29.415613 kubelet[2142]: E0129 11:52:29.415507 2142 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:52:29.418384 kubelet[2142]: E0129 11:52:29.418342 2142 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="400ms" Jan 29 11:52:29.515749 kubelet[2142]: E0129 11:52:29.515663 2142 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:52:29.536922 kubelet[2142]: E0129 11:52:29.536853 2142 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:52:29.616778 kubelet[2142]: E0129 11:52:29.616600 2142 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:52:29.716950 kubelet[2142]: E0129 11:52:29.716844 2142 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:52:29.817449 kubelet[2142]: E0129 11:52:29.817380 2142 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:52:29.818986 kubelet[2142]: E0129 11:52:29.818923 2142 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: 
connection refused" interval="800ms" Jan 29 11:52:29.917968 kubelet[2142]: E0129 11:52:29.917680 2142 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:52:29.938038 kubelet[2142]: E0129 11:52:29.937960 2142 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:52:29.952110 kubelet[2142]: I0129 11:52:29.952042 2142 policy_none.go:49] "None policy: Start" Jan 29 11:52:29.953390 kubelet[2142]: I0129 11:52:29.953351 2142 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:52:29.953390 kubelet[2142]: I0129 11:52:29.953391 2142 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:52:29.967342 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:52:29.990861 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:52:29.994912 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 11:52:30.006409 kubelet[2142]: I0129 11:52:30.006338 2142 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:52:30.006719 kubelet[2142]: I0129 11:52:30.006689 2142 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:52:30.006827 kubelet[2142]: I0129 11:52:30.006706 2142 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:52:30.007007 kubelet[2142]: I0129 11:52:30.006987 2142 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:52:30.008096 kubelet[2142]: E0129 11:52:30.008054 2142 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 11:52:30.035720 kubelet[2142]: W0129 11:52:30.035597 2142 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jan 29 11:52:30.035720 kubelet[2142]: E0129 11:52:30.035721 2142 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:52:30.109243 kubelet[2142]: I0129 11:52:30.109167 2142 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:52:30.109652 kubelet[2142]: E0129 11:52:30.109626 2142 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Jan 29 11:52:30.312035 kubelet[2142]: I0129 11:52:30.311892 2142 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:52:30.312429 kubelet[2142]: E0129 11:52:30.312356 2142 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Jan 29 11:52:30.422241 kubelet[2142]: W0129 11:52:30.422153 2142 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.CSIDriver: Get "https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jan 29 11:52:30.422241 kubelet[2142]: E0129 11:52:30.422234 2142 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:52:30.620496 kubelet[2142]: E0129 11:52:30.620290 2142 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="1.6s" Jan 29 11:52:30.714611 kubelet[2142]: W0129 11:52:30.714472 2142 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jan 29 11:52:30.714611 kubelet[2142]: I0129 11:52:30.714593 2142 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:52:30.714611 kubelet[2142]: E0129 11:52:30.714600 2142 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:52:30.715098 kubelet[2142]: E0129 11:52:30.715052 2142 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Jan 29 11:52:30.735017 kubelet[2142]: W0129 11:52:30.734971 2142 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jan 29 11:52:30.735133 kubelet[2142]: E0129 11:52:30.735024 2142 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:52:30.747661 systemd[1]: Created slice kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice - libcontainer container kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice. Jan 29 11:52:30.764533 systemd[1]: Created slice kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice - libcontainer container kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice. Jan 29 11:52:30.778772 systemd[1]: Created slice kubepods-burstable-podde91e62537ce1fc82e80d539be4f9d43.slice - libcontainer container kubepods-burstable-podde91e62537ce1fc82e80d539be4f9d43.slice. 
Jan 29 11:52:30.822045 kubelet[2142]: I0129 11:52:30.821974 2142 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/de91e62537ce1fc82e80d539be4f9d43-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"de91e62537ce1fc82e80d539be4f9d43\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:52:30.822045 kubelet[2142]: I0129 11:52:30.822051 2142 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:52:30.822561 kubelet[2142]: I0129 11:52:30.822068 2142 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:52:30.822561 kubelet[2142]: I0129 11:52:30.822094 2142 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:52:30.822561 kubelet[2142]: I0129 11:52:30.822117 2142 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:52:30.822561 kubelet[2142]: I0129 11:52:30.822156 2142 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:52:30.822561 kubelet[2142]: I0129 11:52:30.822180 2142 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/de91e62537ce1fc82e80d539be4f9d43-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"de91e62537ce1fc82e80d539be4f9d43\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:52:30.822682 kubelet[2142]: I0129 11:52:30.822197 2142 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:52:30.822682 kubelet[2142]: I0129 11:52:30.822211 2142 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/de91e62537ce1fc82e80d539be4f9d43-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"de91e62537ce1fc82e80d539be4f9d43\") " 
pod="kube-system/kube-apiserver-localhost" Jan 29 11:52:31.061924 kubelet[2142]: E0129 11:52:31.061740 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:52:31.062657 containerd[1454]: time="2025-01-29T11:52:31.062598402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,}" Jan 29 11:52:31.076450 kubelet[2142]: E0129 11:52:31.076391 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:52:31.076946 containerd[1454]: time="2025-01-29T11:52:31.076920908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,}" Jan 29 11:52:31.081406 kubelet[2142]: E0129 11:52:31.081364 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:52:31.081835 containerd[1454]: time="2025-01-29T11:52:31.081802855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:de91e62537ce1fc82e80d539be4f9d43,Namespace:kube-system,Attempt:0,}" Jan 29 11:52:31.215478 kubelet[2142]: E0129 11:52:31.215420 2142 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:52:31.517639 kubelet[2142]: I0129 11:52:31.517479 2142 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:52:31.517966 kubelet[2142]: E0129 11:52:31.517927 2142 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Jan 29 11:52:32.221938 kubelet[2142]: E0129 11:52:32.221860 2142 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="3.2s" Jan 29 11:52:32.322114 kubelet[2142]: W0129 11:52:32.321978 2142 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jan 29 11:52:32.322114 kubelet[2142]: E0129 11:52:32.322030 2142 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:52:32.511670 kubelet[2142]: W0129 11:52:32.511499 2142 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jan 29 11:52:32.511670 kubelet[2142]: E0129 11:52:32.511563 2142 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:52:32.900073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount132558857.mount: Deactivated successfully. Jan 29 11:52:32.906314 containerd[1454]: time="2025-01-29T11:52:32.906239651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:52:32.908564 containerd[1454]: time="2025-01-29T11:52:32.908513910Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:52:32.909547 containerd[1454]: time="2025-01-29T11:52:32.909495676Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:52:32.912814 containerd[1454]: time="2025-01-29T11:52:32.910963413Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:52:32.913507 containerd[1454]: time="2025-01-29T11:52:32.913459435Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:52:32.915379 containerd[1454]: time="2025-01-29T11:52:32.915329360Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:52:32.916126 containerd[1454]: time="2025-01-29T11:52:32.916070006Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 11:52:32.918320 containerd[1454]: time="2025-01-29T11:52:32.918274892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:52:32.921262 containerd[1454]: time="2025-01-29T11:52:32.921197529Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.858520396s" Jan 29 11:52:32.922032 containerd[1454]: time="2025-01-29T11:52:32.921998039Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.840152755s" Jan 29 11:52:32.925267 containerd[1454]: time="2025-01-29T11:52:32.925218006Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.848234687s" Jan 29 11:52:33.108409 containerd[1454]: time="2025-01-29T11:52:33.107779414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:52:33.108409 containerd[1454]: time="2025-01-29T11:52:33.107895085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:52:33.108409 containerd[1454]: time="2025-01-29T11:52:33.107915093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:52:33.108409 containerd[1454]: time="2025-01-29T11:52:33.108018802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:52:33.112230 containerd[1454]: time="2025-01-29T11:52:33.111768892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:52:33.112230 containerd[1454]: time="2025-01-29T11:52:33.111894272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:52:33.112230 containerd[1454]: time="2025-01-29T11:52:33.111908549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:52:33.112230 containerd[1454]: time="2025-01-29T11:52:33.112015253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:52:33.115733 containerd[1454]: time="2025-01-29T11:52:33.115594227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:52:33.116844 containerd[1454]: time="2025-01-29T11:52:33.115941130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:52:33.116844 containerd[1454]: time="2025-01-29T11:52:33.116001956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:52:33.116844 containerd[1454]: time="2025-01-29T11:52:33.116171870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:52:33.120076 kubelet[2142]: I0129 11:52:33.120029 2142 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:52:33.120671 kubelet[2142]: E0129 11:52:33.120628 2142 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Jan 29 11:52:33.167075 systemd[1]: Started cri-containerd-a8323384863e4fe0cd1fe74aba7c5c03ff8906a792c76236f2cbc48c2d9bc819.scope - libcontainer container a8323384863e4fe0cd1fe74aba7c5c03ff8906a792c76236f2cbc48c2d9bc819. 
Jan 29 11:52:33.171064 kubelet[2142]: W0129 11:52:33.171014 2142 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jan 29 11:52:33.171163 kubelet[2142]: E0129 11:52:33.171066 2142 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:52:33.173694 systemd[1]: Started cri-containerd-b9d9be315ec8672612aa2d8ddafdf7b38faa1c32f4ea9b43e7519fcfbd368807.scope - libcontainer container b9d9be315ec8672612aa2d8ddafdf7b38faa1c32f4ea9b43e7519fcfbd368807. Jan 29 11:52:33.178547 systemd[1]: Started cri-containerd-7a2dea66ceb604dfb63a68ace49758e784005fb0d1918e96044ec304e97f47b5.scope - libcontainer container 7a2dea66ceb604dfb63a68ace49758e784005fb0d1918e96044ec304e97f47b5. Jan 29 11:52:33.214907 kubelet[2142]: W0129 11:52:33.214614 2142 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jan 29 11:52:33.214907 kubelet[2142]: E0129 11:52:33.214679 2142 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:52:33.242288 containerd[1454]: time="2025-01-29T11:52:33.242213436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9d9be315ec8672612aa2d8ddafdf7b38faa1c32f4ea9b43e7519fcfbd368807\"" Jan 29 11:52:33.243996 kubelet[2142]: E0129 11:52:33.243947 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:52:33.248265 containerd[1454]: time="2025-01-29T11:52:33.248169309Z" level=info msg="CreateContainer within sandbox \"b9d9be315ec8672612aa2d8ddafdf7b38faa1c32f4ea9b43e7519fcfbd368807\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:52:33.254958 containerd[1454]: time="2025-01-29T11:52:33.254795623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a2dea66ceb604dfb63a68ace49758e784005fb0d1918e96044ec304e97f47b5\"" Jan 29 11:52:33.255070 containerd[1454]: time="2025-01-29T11:52:33.254680773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:de91e62537ce1fc82e80d539be4f9d43,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8323384863e4fe0cd1fe74aba7c5c03ff8906a792c76236f2cbc48c2d9bc819\"" Jan 29 11:52:33.256022 kubelet[2142]: E0129 11:52:33.255979 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Jan 29 11:52:33.256022 kubelet[2142]: E0129 11:52:33.256005 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:52:33.259203 containerd[1454]: time="2025-01-29T11:52:33.259148515Z" level=info msg="CreateContainer within sandbox \"7a2dea66ceb604dfb63a68ace49758e784005fb0d1918e96044ec304e97f47b5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:52:33.259568 containerd[1454]: time="2025-01-29T11:52:33.259505316Z" level=info msg="CreateContainer within sandbox \"a8323384863e4fe0cd1fe74aba7c5c03ff8906a792c76236f2cbc48c2d9bc819\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:52:33.284984 containerd[1454]: time="2025-01-29T11:52:33.284874841Z" level=info msg="CreateContainer within sandbox \"b9d9be315ec8672612aa2d8ddafdf7b38faa1c32f4ea9b43e7519fcfbd368807\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"08f449db19a86dc905e877804aa78d0db819ba657afc4d19c123723677ecca68\"" Jan 29 11:52:33.286100 containerd[1454]: time="2025-01-29T11:52:33.286031861Z" level=info msg="StartContainer for \"08f449db19a86dc905e877804aa78d0db819ba657afc4d19c123723677ecca68\"" Jan 29 11:52:33.299390 containerd[1454]: time="2025-01-29T11:52:33.299319544Z" level=info msg="CreateContainer within sandbox \"a8323384863e4fe0cd1fe74aba7c5c03ff8906a792c76236f2cbc48c2d9bc819\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"46c9ab039734b00318661298686197e6618701503092f84cd856cc5a8acfdd7a\"" Jan 29 11:52:33.300569 containerd[1454]: time="2025-01-29T11:52:33.300364810Z" level=info msg="CreateContainer within sandbox \"7a2dea66ceb604dfb63a68ace49758e784005fb0d1918e96044ec304e97f47b5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5bd0129829a34b9e250b212b4b8a476a7eee9faaf844d7727c2fd17de4209b89\"" Jan 29 11:52:33.300843 containerd[1454]: time="2025-01-29T11:52:33.300632382Z" level=info msg="StartContainer for \"46c9ab039734b00318661298686197e6618701503092f84cd856cc5a8acfdd7a\"" Jan 29 11:52:33.301188 containerd[1454]: time="2025-01-29T11:52:33.301145531Z" level=info msg="StartContainer for \"5bd0129829a34b9e250b212b4b8a476a7eee9faaf844d7727c2fd17de4209b89\"" Jan 29 11:52:33.339401 systemd[1]: Started cri-containerd-08f449db19a86dc905e877804aa78d0db819ba657afc4d19c123723677ecca68.scope - libcontainer container 08f449db19a86dc905e877804aa78d0db819ba657afc4d19c123723677ecca68. Jan 29 11:52:33.344899 systemd[1]: Started cri-containerd-46c9ab039734b00318661298686197e6618701503092f84cd856cc5a8acfdd7a.scope - libcontainer container 46c9ab039734b00318661298686197e6618701503092f84cd856cc5a8acfdd7a. Jan 29 11:52:33.346219 systemd[1]: Started cri-containerd-5bd0129829a34b9e250b212b4b8a476a7eee9faaf844d7727c2fd17de4209b89.scope - libcontainer container 5bd0129829a34b9e250b212b4b8a476a7eee9faaf844d7727c2fd17de4209b89. 
Jan 29 11:52:33.404099 containerd[1454]: time="2025-01-29T11:52:33.403147131Z" level=info msg="StartContainer for \"08f449db19a86dc905e877804aa78d0db819ba657afc4d19c123723677ecca68\" returns successfully" Jan 29 11:52:33.416009 containerd[1454]: time="2025-01-29T11:52:33.415671507Z" level=info msg="StartContainer for \"46c9ab039734b00318661298686197e6618701503092f84cd856cc5a8acfdd7a\" returns successfully" Jan 29 11:52:33.422041 containerd[1454]: time="2025-01-29T11:52:33.421588596Z" level=info msg="StartContainer for \"5bd0129829a34b9e250b212b4b8a476a7eee9faaf844d7727c2fd17de4209b89\" returns successfully" Jan 29 11:52:34.248332 kubelet[2142]: E0129 11:52:34.248255 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:52:34.254100 kubelet[2142]: E0129 11:52:34.253947 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:52:34.254939 kubelet[2142]: E0129 11:52:34.254907 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:52:35.041465 kubelet[2142]: E0129 11:52:35.041414 2142 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 29 11:52:35.257653 kubelet[2142]: E0129 11:52:35.257595 2142 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:52:35.399874 kubelet[2142]: E0129 11:52:35.399834 2142 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 29 11:52:35.426087 kubelet[2142]: E0129 11:52:35.426045 2142 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 11:52:35.854348 kubelet[2142]: E0129 11:52:35.854169 2142 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 29 11:52:36.322538 kubelet[2142]: I0129 11:52:36.322474 2142 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:52:36.333351 kubelet[2142]: I0129 11:52:36.333305 2142 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 11:52:36.333351 kubelet[2142]: E0129 11:52:36.333340 2142 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 29 11:52:36.340300 kubelet[2142]: E0129 11:52:36.340252 2142 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:52:36.441075 kubelet[2142]: E0129 11:52:36.441021 2142 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:52:37.123620 systemd[1]: Reloading requested from client PID 2422 ('systemctl') (unit session-9.scope)... Jan 29 11:52:37.123636 systemd[1]: Reloading... 
Jan 29 11:52:37.207562 kubelet[2142]: I0129 11:52:37.207304 2142 apiserver.go:52] "Watching apiserver" Jan 29 11:52:37.212822 zram_generator::config[2464]: No configuration found. Jan 29 11:52:37.215177 kubelet[2142]: I0129 11:52:37.215135 2142 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:52:37.351208 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:52:37.451686 systemd[1]: Reloading finished in 327 ms. Jan 29 11:52:37.503775 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:52:37.530971 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:52:37.531430 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:52:37.531509 systemd[1]: kubelet.service: Consumed 1.213s CPU time, 121.4M memory peak, 0B memory swap peak. Jan 29 11:52:37.540023 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:52:37.713796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:52:37.721365 (kubelet)[2506]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:52:37.767115 kubelet[2506]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:52:37.767115 kubelet[2506]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:52:37.767115 kubelet[2506]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:52:37.767115 kubelet[2506]: I0129 11:52:37.767100 2506 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:52:37.775210 kubelet[2506]: I0129 11:52:37.775154 2506 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:52:37.775210 kubelet[2506]: I0129 11:52:37.775194 2506 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:52:37.775546 kubelet[2506]: I0129 11:52:37.775519 2506 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:52:37.776870 kubelet[2506]: I0129 11:52:37.776846 2506 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 11:52:37.779041 kubelet[2506]: I0129 11:52:37.778825 2506 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:52:37.781960 kubelet[2506]: E0129 11:52:37.781914 2506 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:52:37.781960 kubelet[2506]: I0129 11:52:37.781961 2506 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Jan 29 11:52:37.789272 kubelet[2506]: I0129 11:52:37.789238 2506 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 11:52:37.789554 kubelet[2506]: I0129 11:52:37.789534 2506 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:52:37.789817 kubelet[2506]: I0129 11:52:37.789739 2506 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:52:37.790140 kubelet[2506]: I0129 11:52:37.789820 2506 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:52:37.790140 kubelet[2506]: I0129 11:52:37.790135 2506 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:52:37.790296 kubelet[2506]: I0129 11:52:37.790148 2506 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:52:37.790296 kubelet[2506]: I0129 11:52:37.790199 2506 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:52:37.790389 kubelet[2506]: I0129 11:52:37.790377 2506 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:52:37.790425 kubelet[2506]: I0129 11:52:37.790404 2506 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:52:37.790458 kubelet[2506]: I0129 11:52:37.790443 2506 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:52:37.790490 kubelet[2506]: I0129 11:52:37.790460 2506 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:52:37.791601 kubelet[2506]: I0129 11:52:37.791564 2506 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 11:52:37.793139 kubelet[2506]: I0129 11:52:37.793073 2506 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:52:37.795814 kubelet[2506]: I0129 11:52:37.794020 2506 
server.go:1269] "Started kubelet" Jan 29 11:52:37.795814 kubelet[2506]: I0129 11:52:37.794340 2506 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:52:37.795814 kubelet[2506]: I0129 11:52:37.794466 2506 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:52:37.795814 kubelet[2506]: I0129 11:52:37.794937 2506 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:52:37.798217 kubelet[2506]: I0129 11:52:37.798184 2506 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:52:37.800650 kubelet[2506]: I0129 11:52:37.800618 2506 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:52:37.801739 kubelet[2506]: I0129 11:52:37.801719 2506 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:52:37.805672 kubelet[2506]: I0129 11:52:37.805617 2506 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:52:37.805829 kubelet[2506]: I0129 11:52:37.805775 2506 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:52:37.806838 kubelet[2506]: I0129 11:52:37.806727 2506 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:52:37.807548 kubelet[2506]: I0129 11:52:37.807535 2506 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:52:37.808224 kubelet[2506]: I0129 11:52:37.808163 2506 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:52:37.808366 kubelet[2506]: I0129 11:52:37.808343 2506 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:52:37.811531 kubelet[2506]: E0129 11:52:37.811452 2506 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:52:37.819365 kubelet[2506]: I0129 11:52:37.818429 2506 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:52:37.821015 kubelet[2506]: I0129 11:52:37.820976 2506 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:52:37.821095 kubelet[2506]: I0129 11:52:37.821024 2506 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:52:37.821095 kubelet[2506]: I0129 11:52:37.821048 2506 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:52:37.821199 kubelet[2506]: E0129 11:52:37.821100 2506 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:52:37.843868 kubelet[2506]: I0129 11:52:37.843837 2506 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:52:37.843868 kubelet[2506]: I0129 11:52:37.843855 2506 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:52:37.843868 kubelet[2506]: I0129 11:52:37.843875 2506 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:52:37.844072 kubelet[2506]: I0129 11:52:37.844036 2506 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:52:37.844072 kubelet[2506]: I0129 11:52:37.844046 2506 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:52:37.844072 kubelet[2506]: I0129 11:52:37.844065 2506 policy_none.go:49] "None policy: Start" Jan 29 11:52:37.844636 kubelet[2506]: I0129 11:52:37.844619 2506 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:52:37.844673 kubelet[2506]: I0129 11:52:37.844642 2506 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:52:37.844864 kubelet[2506]: I0129 11:52:37.844840 2506 state_mem.go:75] "Updated machine memory state" Jan 29 11:52:37.848851 kubelet[2506]: I0129 11:52:37.848825 2506 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:52:37.849255 kubelet[2506]: I0129 11:52:37.849236 2506 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:52:37.849316 kubelet[2506]: I0129 11:52:37.849254 2506 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:52:37.849495 kubelet[2506]: I0129 11:52:37.849471 2506 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:52:37.957771 kubelet[2506]: I0129 11:52:37.957716 2506 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 29 11:52:38.009553 kubelet[2506]: I0129 11:52:38.009356 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:52:38.009553 kubelet[2506]: I0129 11:52:38.009405 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/de91e62537ce1fc82e80d539be4f9d43-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"de91e62537ce1fc82e80d539be4f9d43\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:52:38.009553 kubelet[2506]: I0129 11:52:38.009432 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/de91e62537ce1fc82e80d539be4f9d43-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"de91e62537ce1fc82e80d539be4f9d43\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:52:38.009553 kubelet[2506]: 
I0129 11:52:38.009459 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:52:38.009553 kubelet[2506]: I0129 11:52:38.009482 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:52:38.009880 kubelet[2506]: I0129 11:52:38.009505 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:52:38.009880 kubelet[2506]: I0129 11:52:38.009526 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/de91e62537ce1fc82e80d539be4f9d43-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"de91e62537ce1fc82e80d539be4f9d43\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:52:38.009880 kubelet[2506]: I0129 11:52:38.009561 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:52:38.009880 kubelet[2506]: I0129 11:52:38.009606 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:52:38.051895 kubelet[2506]: I0129 11:52:38.051707 2506 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 29 11:52:38.051895 kubelet[2506]: I0129 11:52:38.051858 2506 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 29 11:52:38.348641 kubelet[2506]: E0129 11:52:38.348304 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:52:38.348641 kubelet[2506]: E0129 11:52:38.348329 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:52:38.348641 kubelet[2506]: E0129 11:52:38.348329 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:52:38.792107 kubelet[2506]: I0129 11:52:38.792057 2506 apiserver.go:52] "Watching apiserver" Jan 29 11:52:38.807915 kubelet[2506]: I0129 11:52:38.807848 2506 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:52:38.833820 kubelet[2506]: E0129 11:52:38.833686 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:52:38.833820 kubelet[2506]: E0129 11:52:38.833688 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:52:38.861879 kubelet[2506]: E0129 11:52:38.861826 2506 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 11:52:38.862108 kubelet[2506]: E0129 11:52:38.862083 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:52:38.978198 kubelet[2506]: I0129 11:52:38.977613 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.977569356 podStartE2EDuration="1.977569356s" podCreationTimestamp="2025-01-29 11:52:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:52:38.977368835 +0000 UTC m=+1.250709064" watchObservedRunningTime="2025-01-29 11:52:38.977569356 +0000 UTC m=+1.250909585" Jan 29 11:52:39.021484 kubelet[2506]: I0129 11:52:39.021341 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.021312987 podStartE2EDuration="2.021312987s" podCreationTimestamp="2025-01-29 11:52:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:52:39.021316253 +0000 UTC m=+1.294656482" watchObservedRunningTime="2025-01-29 11:52:39.021312987 +0000 UTC m=+1.294653216" Jan 29 11:52:39.034904 kubelet[2506]: I0129 11:52:39.034824 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.034778942 podStartE2EDuration="2.034778942s" podCreationTimestamp="2025-01-29 11:52:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:52:39.034563153 +0000 UTC m=+1.307903412" watchObservedRunningTime="2025-01-29 11:52:39.034778942 +0000 UTC m=+1.308119171" Jan 29 11:52:39.836737 kubelet[2506]: E0129 11:52:39.835770 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:52:41.214894 kubelet[2506]: E0129 11:52:41.214759 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:52:41.492354 update_engine[1442]: I20250129 11:52:41.492174 1442 update_attempter.cc:509] Updating boot flags... 
Jan 29 11:52:41.574008 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2582) Jan 29 11:52:41.625834 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2586) Jan 29 11:52:41.660827 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2586) Jan 29 11:52:42.565774 sudo[1651]: pam_unix(sudo:session): session closed for user root Jan 29 11:52:42.571078 sshd[1648]: pam_unix(sshd:session): session closed for user core Jan 29 11:52:42.575549 systemd[1]: sshd@8-10.0.0.52:22-10.0.0.1:42294.service: Deactivated successfully. Jan 29 11:52:42.577933 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:52:42.578187 systemd[1]: session-9.scope: Consumed 4.890s CPU time, 160.2M memory peak, 0B memory swap peak. Jan 29 11:52:42.578984 systemd-logind[1439]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:52:42.580294 systemd-logind[1439]: Removed session 9. Jan 29 11:52:43.307931 kubelet[2506]: I0129 11:52:43.307896 2506 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:52:43.309843 containerd[1454]: time="2025-01-29T11:52:43.308959806Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:52:43.310274 kubelet[2506]: I0129 11:52:43.309194 2506 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:52:44.160297 systemd[1]: Created slice kubepods-besteffort-podaafa91db_26b4_4889_89ab_c2f57a81f21d.slice - libcontainer container kubepods-besteffort-podaafa91db_26b4_4889_89ab_c2f57a81f21d.slice. Jan 29 11:52:44.246953 kubelet[2506]: I0129 11:52:44.246906 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aafa91db-26b4-4889-89ab-c2f57a81f21d-xtables-lock\") pod \"kube-proxy-ltx7l\" (UID: \"aafa91db-26b4-4889-89ab-c2f57a81f21d\") " pod="kube-system/kube-proxy-ltx7l" Jan 29 11:52:44.246953 kubelet[2506]: I0129 11:52:44.246946 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn5n4\" (UniqueName: \"kubernetes.io/projected/aafa91db-26b4-4889-89ab-c2f57a81f21d-kube-api-access-sn5n4\") pod \"kube-proxy-ltx7l\" (UID: \"aafa91db-26b4-4889-89ab-c2f57a81f21d\") " pod="kube-system/kube-proxy-ltx7l" Jan 29 11:52:44.246953 kubelet[2506]: I0129 11:52:44.246966 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aafa91db-26b4-4889-89ab-c2f57a81f21d-kube-proxy\") pod \"kube-proxy-ltx7l\" (UID: \"aafa91db-26b4-4889-89ab-c2f57a81f21d\") " pod="kube-system/kube-proxy-ltx7l" Jan 29 11:52:44.246953 kubelet[2506]: I0129 11:52:44.246979 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aafa91db-26b4-4889-89ab-c2f57a81f21d-lib-modules\") pod \"kube-proxy-ltx7l\" (UID: \"aafa91db-26b4-4889-89ab-c2f57a81f21d\") " pod="kube-system/kube-proxy-ltx7l" Jan 29 11:52:44.422640 systemd[1]: Created slice kubepods-besteffort-pod92c0fb48_57b8_44a4_b62e_b464f7ea030e.slice - libcontainer container kubepods-besteffort-pod92c0fb48_57b8_44a4_b62e_b464f7ea030e.slice. 
Jan 29 11:52:44.448036 kubelet[2506]: I0129 11:52:44.447956 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/92c0fb48-57b8-44a4-b62e-b464f7ea030e-var-lib-calico\") pod \"tigera-operator-76c4976dd7-stlm5\" (UID: \"92c0fb48-57b8-44a4-b62e-b464f7ea030e\") " pod="tigera-operator/tigera-operator-76c4976dd7-stlm5"
Jan 29 11:52:44.448036 kubelet[2506]: I0129 11:52:44.448039 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr7jt\" (UniqueName: \"kubernetes.io/projected/92c0fb48-57b8-44a4-b62e-b464f7ea030e-kube-api-access-vr7jt\") pod \"tigera-operator-76c4976dd7-stlm5\" (UID: \"92c0fb48-57b8-44a4-b62e-b464f7ea030e\") " pod="tigera-operator/tigera-operator-76c4976dd7-stlm5"
Jan 29 11:52:44.471293 kubelet[2506]: E0129 11:52:44.471238 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:52:44.472044 containerd[1454]: time="2025-01-29T11:52:44.471895206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ltx7l,Uid:aafa91db-26b4-4889-89ab-c2f57a81f21d,Namespace:kube-system,Attempt:0,}"
Jan 29 11:52:44.501603 containerd[1454]: time="2025-01-29T11:52:44.501169983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:52:44.501603 containerd[1454]: time="2025-01-29T11:52:44.501281133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:52:44.501603 containerd[1454]: time="2025-01-29T11:52:44.501294508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:52:44.501603 containerd[1454]: time="2025-01-29T11:52:44.501426157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:52:44.524950 systemd[1]: Started cri-containerd-2a85fee4d6e6e2289a5099e5e25f61fb4a30043c70e56c06ddd2be7b42528d48.scope - libcontainer container 2a85fee4d6e6e2289a5099e5e25f61fb4a30043c70e56c06ddd2be7b42528d48.
Jan 29 11:52:44.552219 containerd[1454]: time="2025-01-29T11:52:44.552168773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ltx7l,Uid:aafa91db-26b4-4889-89ab-c2f57a81f21d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a85fee4d6e6e2289a5099e5e25f61fb4a30043c70e56c06ddd2be7b42528d48\""
Jan 29 11:52:44.553076 kubelet[2506]: E0129 11:52:44.553037 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:52:44.556850 containerd[1454]: time="2025-01-29T11:52:44.556810575Z" level=info msg="CreateContainer within sandbox \"2a85fee4d6e6e2289a5099e5e25f61fb4a30043c70e56c06ddd2be7b42528d48\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 11:52:44.580952 containerd[1454]: time="2025-01-29T11:52:44.580864144Z" level=info msg="CreateContainer within sandbox \"2a85fee4d6e6e2289a5099e5e25f61fb4a30043c70e56c06ddd2be7b42528d48\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1542ced660a74635a31d4554bace540bce4b50a82922344ef2f46b93ca76bab8\""
Jan 29 11:52:44.581668 containerd[1454]: time="2025-01-29T11:52:44.581623431Z" level=info msg="StartContainer for \"1542ced660a74635a31d4554bace540bce4b50a82922344ef2f46b93ca76bab8\""
Jan 29 11:52:44.617997 systemd[1]: Started cri-containerd-1542ced660a74635a31d4554bace540bce4b50a82922344ef2f46b93ca76bab8.scope - libcontainer container 1542ced660a74635a31d4554bace540bce4b50a82922344ef2f46b93ca76bab8.
Jan 29 11:52:44.658869 containerd[1454]: time="2025-01-29T11:52:44.657211303Z" level=info msg="StartContainer for \"1542ced660a74635a31d4554bace540bce4b50a82922344ef2f46b93ca76bab8\" returns successfully"
Jan 29 11:52:44.726856 containerd[1454]: time="2025-01-29T11:52:44.726652906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-stlm5,Uid:92c0fb48-57b8-44a4-b62e-b464f7ea030e,Namespace:tigera-operator,Attempt:0,}"
Jan 29 11:52:44.759935 containerd[1454]: time="2025-01-29T11:52:44.759827272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:52:44.760137 containerd[1454]: time="2025-01-29T11:52:44.759905070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:52:44.760137 containerd[1454]: time="2025-01-29T11:52:44.759919697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:52:44.760137 containerd[1454]: time="2025-01-29T11:52:44.760023273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:52:44.781124 systemd[1]: Started cri-containerd-fee2982532e9e2a49307832df552e472a3e4a3c3236416c2c2adbb3d929806be.scope - libcontainer container fee2982532e9e2a49307832df552e472a3e4a3c3236416c2c2adbb3d929806be.
Jan 29 11:52:44.828386 containerd[1454]: time="2025-01-29T11:52:44.828344867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-stlm5,Uid:92c0fb48-57b8-44a4-b62e-b464f7ea030e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fee2982532e9e2a49307832df552e472a3e4a3c3236416c2c2adbb3d929806be\""
Jan 29 11:52:44.830707 containerd[1454]: time="2025-01-29T11:52:44.830664256Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 29 11:52:44.844941 kubelet[2506]: E0129 11:52:44.844907 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:52:44.855049 kubelet[2506]: I0129 11:52:44.854971 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ltx7l" podStartSLOduration=0.854949664 podStartE2EDuration="854.949664ms" podCreationTimestamp="2025-01-29 11:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:52:44.854850266 +0000 UTC m=+7.128190505" watchObservedRunningTime="2025-01-29 11:52:44.854949664 +0000 UTC m=+7.128289893"
Jan 29 11:52:44.874778 kubelet[2506]: E0129 11:52:44.874722 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:52:45.846776 kubelet[2506]: E0129 11:52:45.846716 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:52:46.296354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2894994825.mount: Deactivated successfully.
Jan 29 11:52:46.786750 kubelet[2506]: E0129 11:52:46.786692 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:52:46.848330 kubelet[2506]: E0129 11:52:46.848284 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:52:46.916411 containerd[1454]: time="2025-01-29T11:52:46.916334538Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:52:46.952233 containerd[1454]: time="2025-01-29T11:52:46.951863164Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497"
Jan 29 11:52:46.991535 containerd[1454]: time="2025-01-29T11:52:46.991463988Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:52:47.024149 containerd[1454]: time="2025-01-29T11:52:47.024060343Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:52:47.024757 containerd[1454]: time="2025-01-29T11:52:47.024707235Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.193988195s"
Jan 29 11:52:47.024757 containerd[1454]: time="2025-01-29T11:52:47.024751418Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Jan 29 11:52:47.027230 containerd[1454]: time="2025-01-29T11:52:47.027108191Z" level=info msg="CreateContainer within sandbox \"fee2982532e9e2a49307832df552e472a3e4a3c3236416c2c2adbb3d929806be\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 29 11:52:47.280701 containerd[1454]: time="2025-01-29T11:52:47.280620487Z" level=info msg="CreateContainer within sandbox \"fee2982532e9e2a49307832df552e472a3e4a3c3236416c2c2adbb3d929806be\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"59c55697d0cc86bb73cb2ab565088bec3cf86f714d4d2831220e08421a0c7574\""
Jan 29 11:52:47.281442 containerd[1454]: time="2025-01-29T11:52:47.281297055Z" level=info msg="StartContainer for \"59c55697d0cc86bb73cb2ab565088bec3cf86f714d4d2831220e08421a0c7574\""
Jan 29 11:52:47.321118 systemd[1]: Started cri-containerd-59c55697d0cc86bb73cb2ab565088bec3cf86f714d4d2831220e08421a0c7574.scope - libcontainer container 59c55697d0cc86bb73cb2ab565088bec3cf86f714d4d2831220e08421a0c7574.
Jan 29 11:52:47.404028 containerd[1454]: time="2025-01-29T11:52:47.403975355Z" level=info msg="StartContainer for \"59c55697d0cc86bb73cb2ab565088bec3cf86f714d4d2831220e08421a0c7574\" returns successfully"
Jan 29 11:52:50.671046 kubelet[2506]: I0129 11:52:50.669941 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-stlm5" podStartSLOduration=4.473805102 podStartE2EDuration="6.669919757s" podCreationTimestamp="2025-01-29 11:52:44 +0000 UTC" firstStartedPulling="2025-01-29 11:52:44.829717605 +0000 UTC m=+7.103057834" lastFinishedPulling="2025-01-29 11:52:47.02583226 +0000 UTC m=+9.299172489" observedRunningTime="2025-01-29 11:52:47.870737132 +0000 UTC m=+10.144077381" watchObservedRunningTime="2025-01-29 11:52:50.669919757 +0000 UTC m=+12.943259987"
Jan 29 11:52:50.680271 systemd[1]: Created slice kubepods-besteffort-pod052adf0e_84fb_481e_a223_61a4b7a2f9a7.slice - libcontainer container kubepods-besteffort-pod052adf0e_84fb_481e_a223_61a4b7a2f9a7.slice.
Jan 29 11:52:50.686673 kubelet[2506]: I0129 11:52:50.686548 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/052adf0e-84fb-481e-a223-61a4b7a2f9a7-typha-certs\") pod \"calico-typha-58556485cc-dwj9m\" (UID: \"052adf0e-84fb-481e-a223-61a4b7a2f9a7\") " pod="calico-system/calico-typha-58556485cc-dwj9m"
Jan 29 11:52:50.686673 kubelet[2506]: I0129 11:52:50.686598 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcfvw\" (UniqueName: \"kubernetes.io/projected/052adf0e-84fb-481e-a223-61a4b7a2f9a7-kube-api-access-mcfvw\") pod \"calico-typha-58556485cc-dwj9m\" (UID: \"052adf0e-84fb-481e-a223-61a4b7a2f9a7\") " pod="calico-system/calico-typha-58556485cc-dwj9m"
Jan 29 11:52:50.686673 kubelet[2506]: I0129 11:52:50.686629 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/052adf0e-84fb-481e-a223-61a4b7a2f9a7-tigera-ca-bundle\") pod \"calico-typha-58556485cc-dwj9m\" (UID: \"052adf0e-84fb-481e-a223-61a4b7a2f9a7\") " pod="calico-system/calico-typha-58556485cc-dwj9m"
Jan 29 11:52:50.701864 systemd[1]: Created slice kubepods-besteffort-pod3c940188_3a91_4705_bd60_7146fc5afc94.slice - libcontainer container kubepods-besteffort-pod3c940188_3a91_4705_bd60_7146fc5afc94.slice.
Jan 29 11:52:50.777727 kubelet[2506]: E0129 11:52:50.777633 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m4sk6" podUID="37021594-588c-4d5f-936f-12a90ea44463"
Jan 29 11:52:50.787450 kubelet[2506]: I0129 11:52:50.787402 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c940188-3a91-4705-bd60-7146fc5afc94-tigera-ca-bundle\") pod \"calico-node-b8wm5\" (UID: \"3c940188-3a91-4705-bd60-7146fc5afc94\") " pod="calico-system/calico-node-b8wm5"
Jan 29 11:52:50.787450 kubelet[2506]: I0129 11:52:50.787445 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q2zr\" (UniqueName: \"kubernetes.io/projected/3c940188-3a91-4705-bd60-7146fc5afc94-kube-api-access-8q2zr\") pod \"calico-node-b8wm5\" (UID: \"3c940188-3a91-4705-bd60-7146fc5afc94\") " pod="calico-system/calico-node-b8wm5"
Jan 29 11:52:50.787450 kubelet[2506]: I0129 11:52:50.787464 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds9lz\" (UniqueName: \"kubernetes.io/projected/37021594-588c-4d5f-936f-12a90ea44463-kube-api-access-ds9lz\") pod \"csi-node-driver-m4sk6\" (UID: \"37021594-588c-4d5f-936f-12a90ea44463\") " pod="calico-system/csi-node-driver-m4sk6"
Jan 29 11:52:50.787694 kubelet[2506]: I0129 11:52:50.787484 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-lib-modules\") pod \"calico-node-b8wm5\" (UID: \"3c940188-3a91-4705-bd60-7146fc5afc94\") " pod="calico-system/calico-node-b8wm5"
Jan 29 11:52:50.787694 kubelet[2506]: I0129 11:52:50.787570 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-xtables-lock\") pod \"calico-node-b8wm5\" (UID: \"3c940188-3a91-4705-bd60-7146fc5afc94\") " pod="calico-system/calico-node-b8wm5"
Jan 29 11:52:50.787694 kubelet[2506]: I0129 11:52:50.787588 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-cni-net-dir\") pod \"calico-node-b8wm5\" (UID: \"3c940188-3a91-4705-bd60-7146fc5afc94\") " pod="calico-system/calico-node-b8wm5"
Jan 29 11:52:50.787694 kubelet[2506]: I0129 11:52:50.787644 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-policysync\") pod \"calico-node-b8wm5\" (UID: \"3c940188-3a91-4705-bd60-7146fc5afc94\") " pod="calico-system/calico-node-b8wm5"
Jan 29 11:52:50.787694 kubelet[2506]: I0129 11:52:50.787661 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-cni-log-dir\") pod \"calico-node-b8wm5\" (UID: \"3c940188-3a91-4705-bd60-7146fc5afc94\") " pod="calico-system/calico-node-b8wm5"
Jan 29 11:52:50.787894 kubelet[2506]: I0129 11:52:50.787732 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-var-lib-calico\") pod \"calico-node-b8wm5\" (UID: \"3c940188-3a91-4705-bd60-7146fc5afc94\") " pod="calico-system/calico-node-b8wm5"
Jan 29 11:52:50.788473 kubelet[2506]: I0129 11:52:50.787749 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-flexvol-driver-host\") pod \"calico-node-b8wm5\" (UID: \"3c940188-3a91-4705-bd60-7146fc5afc94\") " pod="calico-system/calico-node-b8wm5"
Jan 29 11:52:50.788473 kubelet[2506]: I0129 11:52:50.788419 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3c940188-3a91-4705-bd60-7146fc5afc94-node-certs\") pod \"calico-node-b8wm5\" (UID: \"3c940188-3a91-4705-bd60-7146fc5afc94\") " pod="calico-system/calico-node-b8wm5"
Jan 29 11:52:50.788473 kubelet[2506]: I0129 11:52:50.788442 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-var-run-calico\") pod \"calico-node-b8wm5\" (UID: \"3c940188-3a91-4705-bd60-7146fc5afc94\") " pod="calico-system/calico-node-b8wm5"
Jan 29 11:52:50.788604 kubelet[2506]: I0129 11:52:50.788480 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-cni-bin-dir\") pod \"calico-node-b8wm5\" (UID: \"3c940188-3a91-4705-bd60-7146fc5afc94\") " pod="calico-system/calico-node-b8wm5"
Jan 29 11:52:50.788604 kubelet[2506]: I0129 11:52:50.788499 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37021594-588c-4d5f-936f-12a90ea44463-kubelet-dir\") pod \"csi-node-driver-m4sk6\" (UID: \"37021594-588c-4d5f-936f-12a90ea44463\") " pod="calico-system/csi-node-driver-m4sk6"
Jan 29 11:52:50.788604 kubelet[2506]: I0129 11:52:50.788514 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/37021594-588c-4d5f-936f-12a90ea44463-registration-dir\") pod \"csi-node-driver-m4sk6\" (UID: \"37021594-588c-4d5f-936f-12a90ea44463\") " pod="calico-system/csi-node-driver-m4sk6"
Jan 29 11:52:50.788604 kubelet[2506]: I0129 11:52:50.788546 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/37021594-588c-4d5f-936f-12a90ea44463-varrun\") pod \"csi-node-driver-m4sk6\" (UID: \"37021594-588c-4d5f-936f-12a90ea44463\") " pod="calico-system/csi-node-driver-m4sk6"
Jan 29 11:52:50.788604 kubelet[2506]: I0129 11:52:50.788574 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/37021594-588c-4d5f-936f-12a90ea44463-socket-dir\") pod \"csi-node-driver-m4sk6\" (UID: \"37021594-588c-4d5f-936f-12a90ea44463\") " pod="calico-system/csi-node-driver-m4sk6"
Jan 29 11:52:50.890741 kubelet[2506]: E0129 11:52:50.890668 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:52:50.890741 kubelet[2506]: W0129 11:52:50.890720 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:52:50.890741 kubelet[2506]: E0129 11:52:50.890750 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:52:50.893580 kubelet[2506]: E0129 11:52:50.893547 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:52:50.893580 kubelet[2506]: W0129 11:52:50.893570 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:52:50.893661 kubelet[2506]: E0129 11:52:50.893594 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:52:50.904063 kubelet[2506]: E0129 11:52:50.904032 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:52:50.904063 kubelet[2506]: W0129 11:52:50.904055 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:52:50.904260 kubelet[2506]: E0129 11:52:50.904080 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:52:50.904351 kubelet[2506]: E0129 11:52:50.904337 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:52:50.904395 kubelet[2506]: W0129 11:52:50.904350 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:52:50.904395 kubelet[2506]: E0129 11:52:50.904362 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:52:50.985932 kubelet[2506]: E0129 11:52:50.985774 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:52:50.986407 containerd[1454]: time="2025-01-29T11:52:50.986373270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58556485cc-dwj9m,Uid:052adf0e-84fb-481e-a223-61a4b7a2f9a7,Namespace:calico-system,Attempt:0,}"
Jan 29 11:52:51.007617 kubelet[2506]: E0129 11:52:51.007560 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:52:51.008238 containerd[1454]: time="2025-01-29T11:52:51.008172671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b8wm5,Uid:3c940188-3a91-4705-bd60-7146fc5afc94,Namespace:calico-system,Attempt:0,}"
Jan 29 11:52:51.135633 containerd[1454]: time="2025-01-29T11:52:51.135491542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:52:51.135947 containerd[1454]: time="2025-01-29T11:52:51.135660831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:52:51.135947 containerd[1454]: time="2025-01-29T11:52:51.135683914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:52:51.136684 containerd[1454]: time="2025-01-29T11:52:51.136520953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:52:51.136684 containerd[1454]: time="2025-01-29T11:52:51.136581597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:52:51.136684 containerd[1454]: time="2025-01-29T11:52:51.136594041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:52:51.136778 containerd[1454]: time="2025-01-29T11:52:51.136687066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:52:51.136995 containerd[1454]: time="2025-01-29T11:52:51.136868047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:52:51.164402 systemd[1]: Started cri-containerd-bed84a6e0e0a6def8b5b3563b68c45faf823d65d6493c30da12b1bfb706139f1.scope - libcontainer container bed84a6e0e0a6def8b5b3563b68c45faf823d65d6493c30da12b1bfb706139f1.
Jan 29 11:52:51.169344 systemd[1]: Started cri-containerd-cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692.scope - libcontainer container cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692.
Jan 29 11:52:51.205949 containerd[1454]: time="2025-01-29T11:52:51.205759845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b8wm5,Uid:3c940188-3a91-4705-bd60-7146fc5afc94,Namespace:calico-system,Attempt:0,} returns sandbox id \"cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692\""
Jan 29 11:52:51.208528 kubelet[2506]: E0129 11:52:51.208444 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:52:51.212108 containerd[1454]: time="2025-01-29T11:52:51.209734170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 29 11:52:51.216202 containerd[1454]: time="2025-01-29T11:52:51.216164406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58556485cc-dwj9m,Uid:052adf0e-84fb-481e-a223-61a4b7a2f9a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"bed84a6e0e0a6def8b5b3563b68c45faf823d65d6493c30da12b1bfb706139f1\""
Jan 29 11:52:51.217189 kubelet[2506]: E0129 11:52:51.217142 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:52:51.236537 kubelet[2506]: E0129 11:52:51.236376 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:52:51.285516 kubelet[2506]: E0129 11:52:51.285484 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:52:51.285516 kubelet[2506]: W0129 11:52:51.285506 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:52:51.285516 kubelet[2506]: E0129 11:52:51.285527 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:52:51.285756 kubelet[2506]: E0129 11:52:51.285743 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:52:51.285756 kubelet[2506]: W0129 11:52:51.285754 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:52:51.285821 kubelet[2506]: E0129 11:52:51.285763 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:52:51.286009 kubelet[2506]: E0129 11:52:51.285996 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:52:51.286009 kubelet[2506]: W0129 11:52:51.286006 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:52:51.286062 kubelet[2506]: E0129 11:52:51.286015 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:52:51.286203 kubelet[2506]: E0129 11:52:51.286191 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:52:51.286203 kubelet[2506]: W0129 11:52:51.286200 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:52:51.286203 kubelet[2506]: E0129 11:52:51.286217 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:52:51.286486 kubelet[2506]: E0129 11:52:51.286459 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:52:51.286522 kubelet[2506]: W0129 11:52:51.286485 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:52:51.286522 kubelet[2506]: E0129 11:52:51.286511 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:52:51.286772 kubelet[2506]: E0129 11:52:51.286757 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:52:51.286772 kubelet[2506]: W0129 11:52:51.286768 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:52:51.286851 kubelet[2506]: E0129 11:52:51.286777 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:52:51.287023 kubelet[2506]: E0129 11:52:51.286996 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:52:51.287023 kubelet[2506]: W0129 11:52:51.287008 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:52:51.287023 kubelet[2506]: E0129 11:52:51.287018 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:52:51.287259 kubelet[2506]: E0129 11:52:51.287240 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:52:51.287259 kubelet[2506]: W0129 11:52:51.287253 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:52:51.287343 kubelet[2506]: E0129 11:52:51.287263 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:52:51.287534 kubelet[2506]: E0129 11:52:51.287518 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:52:51.287534 kubelet[2506]: W0129 11:52:51.287530 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:52:51.287589 kubelet[2506]: E0129 11:52:51.287539 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:52:51.287755 kubelet[2506]: E0129 11:52:51.287741 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:52:51.287755 kubelet[2506]: W0129 11:52:51.287752 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:52:51.287837 kubelet[2506]: E0129 11:52:51.287761 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:52:51.287978 kubelet[2506]: E0129 11:52:51.287955 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:52:51.287978 kubelet[2506]: W0129 11:52:51.287968 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:52:51.287978 kubelet[2506]: E0129 11:52:51.287978 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:52:51.288167 kubelet[2506]: E0129 11:52:51.288153 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:52:51.288167 kubelet[2506]: W0129 11:52:51.288165 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:52:51.288224 kubelet[2506]: E0129 11:52:51.288173 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:52:51.288380 kubelet[2506]: E0129 11:52:51.288366 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:52:51.288380 kubelet[2506]: W0129 11:52:51.288376 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:52:51.288434 kubelet[2506]: E0129 11:52:51.288385 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:52:51.288572 kubelet[2506]: E0129 11:52:51.288558 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:52:51.288572 kubelet[2506]: W0129 11:52:51.288569 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:52:51.288619 kubelet[2506]: E0129 11:52:51.288577 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:52:51.288765 kubelet[2506]: E0129 11:52:51.288750 2506 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:52:51.288765 kubelet[2506]: W0129 11:52:51.288761 2506 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:52:51.288837 kubelet[2506]: E0129 11:52:51.288769 2506 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:52:52.746577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount719207915.mount: Deactivated successfully.
Jan 29 11:52:52.812913 containerd[1454]: time="2025-01-29T11:52:52.812858553Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:52:52.813812 containerd[1454]: time="2025-01-29T11:52:52.813764381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Jan 29 11:52:52.814996 containerd[1454]: time="2025-01-29T11:52:52.814968151Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:52:52.817119 containerd[1454]: time="2025-01-29T11:52:52.817063029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:52:52.817618 containerd[1454]: time="2025-01-29T11:52:52.817587899Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.607724585s"
Jan 29 11:52:52.817651 containerd[1454]: time="2025-01-29T11:52:52.817617625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 29 11:52:52.818510 containerd[1454]: time="2025-01-29T11:52:52.818490209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 29 11:52:52.819466 containerd[1454]: time="2025-01-29T11:52:52.819382031Z" level=info msg="CreateContainer within sandbox \"cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 29 11:52:52.821818 kubelet[2506]: E0129 11:52:52.821763 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m4sk6" podUID="37021594-588c-4d5f-936f-12a90ea44463"
Jan 29 11:52:52.838388 containerd[1454]: time="2025-01-29T11:52:52.838336653Z" level=info msg="CreateContainer within sandbox \"cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"015b63112c8edad79069644a9e073aa861ee7aff102f9f3519343522f7d79e5d\""
Jan 29 11:52:52.838974 containerd[1454]: time="2025-01-29T11:52:52.838922307Z" level=info msg="StartContainer for \"015b63112c8edad79069644a9e073aa861ee7aff102f9f3519343522f7d79e5d\""
Jan 29 11:52:52.875963 systemd[1]: Started cri-containerd-015b63112c8edad79069644a9e073aa861ee7aff102f9f3519343522f7d79e5d.scope - libcontainer container 015b63112c8edad79069644a9e073aa861ee7aff102f9f3519343522f7d79e5d.
Jan 29 11:52:52.923314 systemd[1]: cri-containerd-015b63112c8edad79069644a9e073aa861ee7aff102f9f3519343522f7d79e5d.scope: Deactivated successfully.
Jan 29 11:52:52.957393 containerd[1454]: time="2025-01-29T11:52:52.957327789Z" level=info msg="StartContainer for \"015b63112c8edad79069644a9e073aa861ee7aff102f9f3519343522f7d79e5d\" returns successfully"
Jan 29 11:52:52.994173 containerd[1454]: time="2025-01-29T11:52:52.994100480Z" level=info msg="shim disconnected" id=015b63112c8edad79069644a9e073aa861ee7aff102f9f3519343522f7d79e5d namespace=k8s.io
Jan 29 11:52:52.994173 containerd[1454]: time="2025-01-29T11:52:52.994156446Z" level=warning msg="cleaning up after shim disconnected" id=015b63112c8edad79069644a9e073aa861ee7aff102f9f3519343522f7d79e5d namespace=k8s.io
Jan 29 11:52:52.994173 containerd[1454]: time="2025-01-29T11:52:52.994170172Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:52:53.833775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-015b63112c8edad79069644a9e073aa861ee7aff102f9f3519343522f7d79e5d-rootfs.mount: Deactivated successfully.
Jan 29 11:52:53.872711 kubelet[2506]: E0129 11:52:53.872673 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:52:54.821808 kubelet[2506]: E0129 11:52:54.821649 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m4sk6" podUID="37021594-588c-4d5f-936f-12a90ea44463"
Jan 29 11:52:55.756524 containerd[1454]: time="2025-01-29T11:52:55.756436843Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:52:55.757870 containerd[1454]: time="2025-01-29T11:52:55.757800380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141"
Jan 29 11:52:55.761189 containerd[1454]: time="2025-01-29T11:52:55.761094685Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:52:55.764289 containerd[1454]: time="2025-01-29T11:52:55.764238125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:52:55.765401 containerd[1454]: time="2025-01-29T11:52:55.765347885Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.946825995s"
Jan 29 11:52:55.765401 containerd[1454]: time="2025-01-29T11:52:55.765406856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 29 11:52:55.766843 containerd[1454]: time="2025-01-29T11:52:55.766777647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 29 11:52:55.777842 containerd[1454]: time="2025-01-29T11:52:55.776698923Z" level=info msg="CreateContainer within sandbox \"bed84a6e0e0a6def8b5b3563b68c45faf823d65d6493c30da12b1bfb706139f1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 29 11:52:55.798829 containerd[1454]: time="2025-01-29T11:52:55.798687305Z" level=info msg="CreateContainer within sandbox \"bed84a6e0e0a6def8b5b3563b68c45faf823d65d6493c30da12b1bfb706139f1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b9bb4091c016ef4daa2ecd64533e39569969502c480f432060fa2e5c11a978ef\""
Jan 29 11:52:55.799776 containerd[1454]: time="2025-01-29T11:52:55.799692168Z" level=info msg="StartContainer for \"b9bb4091c016ef4daa2ecd64533e39569969502c480f432060fa2e5c11a978ef\""
Jan 29 11:52:55.836012 systemd[1]: Started cri-containerd-b9bb4091c016ef4daa2ecd64533e39569969502c480f432060fa2e5c11a978ef.scope - libcontainer container b9bb4091c016ef4daa2ecd64533e39569969502c480f432060fa2e5c11a978ef.
Jan 29 11:52:55.886534 containerd[1454]: time="2025-01-29T11:52:55.886475170Z" level=info msg="StartContainer for \"b9bb4091c016ef4daa2ecd64533e39569969502c480f432060fa2e5c11a978ef\" returns successfully"
Jan 29 11:52:56.821543 kubelet[2506]: E0129 11:52:56.821449 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m4sk6" podUID="37021594-588c-4d5f-936f-12a90ea44463"
Jan 29 11:52:56.881654 kubelet[2506]: E0129 11:52:56.881614 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:52:57.882039 kubelet[2506]: I0129 11:52:57.882004 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 11:52:57.882550 kubelet[2506]: E0129 11:52:57.882317 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:52:58.822117 kubelet[2506]: E0129 11:52:58.822056 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m4sk6" podUID="37021594-588c-4d5f-936f-12a90ea44463"
Jan 29 11:53:00.053518 kubelet[2506]: I0129 11:53:00.053244 2506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 11:53:00.053959 kubelet[2506]: E0129 11:53:00.053715 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:53:00.080420 kubelet[2506]: I0129 11:53:00.080338 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-58556485cc-dwj9m" podStartSLOduration=5.533025502 podStartE2EDuration="10.08031713s" podCreationTimestamp="2025-01-29 11:52:50 +0000 UTC" firstStartedPulling="2025-01-29 11:52:51.219287205 +0000 UTC m=+13.492627434" lastFinishedPulling="2025-01-29 11:52:55.766578803 +0000 UTC m=+18.039919062" observedRunningTime="2025-01-29 11:52:56.894633872 +0000 UTC m=+19.167974101" watchObservedRunningTime="2025-01-29 11:53:00.08031713 +0000 UTC m=+22.353657359"
Jan 29 11:53:00.821680 kubelet[2506]: E0129 11:53:00.821621 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m4sk6" podUID="37021594-588c-4d5f-936f-12a90ea44463"
Jan 29 11:53:00.890236 kubelet[2506]: E0129 11:53:00.889851 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:53:01.388217 containerd[1454]: time="2025-01-29T11:53:01.388172189Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:53:01.389298 containerd[1454]: time="2025-01-29T11:53:01.389258081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 29 11:53:01.390528 containerd[1454]: time="2025-01-29T11:53:01.390501690Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:53:01.393135 containerd[1454]: time="2025-01-29T11:53:01.393106559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:53:01.393902 containerd[1454]: time="2025-01-29T11:53:01.393867930Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.627025471s"
Jan 29 11:53:01.393902 containerd[1454]: time="2025-01-29T11:53:01.393899440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 29 11:53:01.400675 containerd[1454]: time="2025-01-29T11:53:01.400644225Z" level=info msg="CreateContainer within sandbox \"cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 29 11:53:01.420894 containerd[1454]: time="2025-01-29T11:53:01.420841839Z" level=info msg="CreateContainer within sandbox \"cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e6e1e03e9a4a7141495825a1374d145053d7bbd6940cbe6901e0b309429beb1b\""
Jan 29 11:53:01.421466 containerd[1454]: time="2025-01-29T11:53:01.421438312Z" level=info msg="StartContainer for \"e6e1e03e9a4a7141495825a1374d145053d7bbd6940cbe6901e0b309429beb1b\""
Jan 29 11:53:01.467994 systemd[1]: Started cri-containerd-e6e1e03e9a4a7141495825a1374d145053d7bbd6940cbe6901e0b309429beb1b.scope - libcontainer container e6e1e03e9a4a7141495825a1374d145053d7bbd6940cbe6901e0b309429beb1b.
Jan 29 11:53:01.505465 containerd[1454]: time="2025-01-29T11:53:01.505387352Z" level=info msg="StartContainer for \"e6e1e03e9a4a7141495825a1374d145053d7bbd6940cbe6901e0b309429beb1b\" returns successfully"
Jan 29 11:53:02.238268 kubelet[2506]: E0129 11:53:02.238198 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m4sk6" podUID="37021594-588c-4d5f-936f-12a90ea44463"
Jan 29 11:53:02.242623 kubelet[2506]: E0129 11:53:02.242579 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:53:03.210497 containerd[1454]: time="2025-01-29T11:53:03.210403748Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 11:53:03.213381 systemd[1]: cri-containerd-e6e1e03e9a4a7141495825a1374d145053d7bbd6940cbe6901e0b309429beb1b.scope: Deactivated successfully.
Jan 29 11:53:03.236950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6e1e03e9a4a7141495825a1374d145053d7bbd6940cbe6901e0b309429beb1b-rootfs.mount: Deactivated successfully.
Jan 29 11:53:03.244454 kubelet[2506]: E0129 11:53:03.244297 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:53:03.272990 kubelet[2506]: I0129 11:53:03.256964 2506 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jan 29 11:53:03.307355 systemd[1]: Created slice kubepods-burstable-pod92902e69_b7b9_4835_bfaf_552ffb2affbd.slice - libcontainer container kubepods-burstable-pod92902e69_b7b9_4835_bfaf_552ffb2affbd.slice.
Jan 29 11:53:03.399317 systemd[1]: Created slice kubepods-besteffort-podb6dfda52_b36b_4860_a295_437d50d36570.slice - libcontainer container kubepods-besteffort-podb6dfda52_b36b_4860_a295_437d50d36570.slice.
Jan 29 11:53:03.407224 systemd[1]: Created slice kubepods-besteffort-pod3a86a3c6_ce04_4ad6_b5ff_50ddd63f199a.slice - libcontainer container kubepods-besteffort-pod3a86a3c6_ce04_4ad6_b5ff_50ddd63f199a.slice.
Jan 29 11:53:03.412718 systemd[1]: Created slice kubepods-besteffort-podbdf8891f_6b1d_4211_be98_e56b0b0de0ad.slice - libcontainer container kubepods-besteffort-podbdf8891f_6b1d_4211_be98_e56b0b0de0ad.slice.
Jan 29 11:53:03.419125 systemd[1]: Created slice kubepods-burstable-podfd1af382_4da8_46d6_b100_8da54f486a77.slice - libcontainer container kubepods-burstable-podfd1af382_4da8_46d6_b100_8da54f486a77.slice.
Jan 29 11:53:03.431311 kubelet[2506]: I0129 11:53:03.431241 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79p9w\" (UniqueName: \"kubernetes.io/projected/92902e69-b7b9-4835-bfaf-552ffb2affbd-kube-api-access-79p9w\") pod \"coredns-6f6b679f8f-hfp24\" (UID: \"92902e69-b7b9-4835-bfaf-552ffb2affbd\") " pod="kube-system/coredns-6f6b679f8f-hfp24"
Jan 29 11:53:03.431311 kubelet[2506]: I0129 11:53:03.431291 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92902e69-b7b9-4835-bfaf-552ffb2affbd-config-volume\") pod \"coredns-6f6b679f8f-hfp24\" (UID: \"92902e69-b7b9-4835-bfaf-552ffb2affbd\") " pod="kube-system/coredns-6f6b679f8f-hfp24"
Jan 29 11:53:03.532117 kubelet[2506]: I0129 11:53:03.531924 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a-tigera-ca-bundle\") pod \"calico-kube-controllers-65d8d589cb-k69s4\" (UID: \"3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a\") " pod="calico-system/calico-kube-controllers-65d8d589cb-k69s4"
Jan 29 11:53:03.532117 kubelet[2506]: I0129 11:53:03.531990 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxfx6\" (UniqueName: \"kubernetes.io/projected/fd1af382-4da8-46d6-b100-8da54f486a77-kube-api-access-qxfx6\") pod \"coredns-6f6b679f8f-wck8n\" (UID: \"fd1af382-4da8-46d6-b100-8da54f486a77\") " pod="kube-system/coredns-6f6b679f8f-wck8n"
Jan 29 11:53:03.532117 kubelet[2506]: I0129 11:53:03.532025 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bdf8891f-6b1d-4211-be98-e56b0b0de0ad-calico-apiserver-certs\") pod \"calico-apiserver-5f474569cb-m47z4\" (UID: \"bdf8891f-6b1d-4211-be98-e56b0b0de0ad\") " pod="calico-apiserver/calico-apiserver-5f474569cb-m47z4"
Jan 29 11:53:03.532321 kubelet[2506]: I0129 11:53:03.532170 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b6dfda52-b36b-4860-a295-437d50d36570-calico-apiserver-certs\") pod \"calico-apiserver-5f474569cb-q4cqv\" (UID: \"b6dfda52-b36b-4860-a295-437d50d36570\") " pod="calico-apiserver/calico-apiserver-5f474569cb-q4cqv"
Jan 29 11:53:03.532321 kubelet[2506]: I0129 11:53:03.532220 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxfbh\" (UniqueName: \"kubernetes.io/projected/bdf8891f-6b1d-4211-be98-e56b0b0de0ad-kube-api-access-jxfbh\") pod \"calico-apiserver-5f474569cb-m47z4\" (UID: \"bdf8891f-6b1d-4211-be98-e56b0b0de0ad\") " pod="calico-apiserver/calico-apiserver-5f474569cb-m47z4"
Jan 29 11:53:03.532321 kubelet[2506]: I0129 11:53:03.532261 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2jt9\" (UniqueName: \"kubernetes.io/projected/b6dfda52-b36b-4860-a295-437d50d36570-kube-api-access-b2jt9\") pod \"calico-apiserver-5f474569cb-q4cqv\" (UID: \"b6dfda52-b36b-4860-a295-437d50d36570\") " pod="calico-apiserver/calico-apiserver-5f474569cb-q4cqv"
Jan 29 11:53:03.532321 kubelet[2506]: I0129 11:53:03.532277 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd1af382-4da8-46d6-b100-8da54f486a77-config-volume\") pod \"coredns-6f6b679f8f-wck8n\" (UID: \"fd1af382-4da8-46d6-b100-8da54f486a77\") " pod="kube-system/coredns-6f6b679f8f-wck8n"
Jan 29 11:53:03.532321 kubelet[2506]: I0129 11:53:03.532296 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkpj9\" (UniqueName: \"kubernetes.io/projected/3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a-kube-api-access-gkpj9\") pod \"calico-kube-controllers-65d8d589cb-k69s4\" (UID: \"3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a\") " pod="calico-system/calico-kube-controllers-65d8d589cb-k69s4"
Jan 29 11:53:03.550622 containerd[1454]: time="2025-01-29T11:53:03.550542665Z" level=info msg="shim disconnected" id=e6e1e03e9a4a7141495825a1374d145053d7bbd6940cbe6901e0b309429beb1b namespace=k8s.io
Jan 29 11:53:03.550622 containerd[1454]: time="2025-01-29T11:53:03.550605513Z" level=warning msg="cleaning up after shim disconnected" id=e6e1e03e9a4a7141495825a1374d145053d7bbd6940cbe6901e0b309429beb1b namespace=k8s.io
Jan 29 11:53:03.550622 containerd[1454]: time="2025-01-29T11:53:03.550614981Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:53:03.610130 kubelet[2506]: E0129 11:53:03.610063 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:53:03.610900 containerd[1454]: time="2025-01-29T11:53:03.610846125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfp24,Uid:92902e69-b7b9-4835-bfaf-552ffb2affbd,Namespace:kube-system,Attempt:0,}"
Jan 29 11:53:03.705544 containerd[1454]: time="2025-01-29T11:53:03.705117143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f474569cb-q4cqv,Uid:b6dfda52-b36b-4860-a295-437d50d36570,Namespace:calico-apiserver,Attempt:0,}"
Jan 29 11:53:03.705683 containerd[1454]: time="2025-01-29T11:53:03.705620931Z" level=error msg="Failed to destroy network for sandbox \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:53:03.706093 containerd[1454]: time="2025-01-29T11:53:03.706057211Z" level=error msg="encountered an error cleaning up failed sandbox \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:53:03.706133 containerd[1454]: time="2025-01-29T11:53:03.706117373Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfp24,Uid:92902e69-b7b9-4835-bfaf-552ffb2affbd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:53:03.706396 kubelet[2506]: E0129 11:53:03.706359 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:53:03.706472 kubelet[2506]: E0129 11:53:03.706452 2506 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hfp24"
Jan 29 11:53:03.706515 kubelet[2506]: E0129 11:53:03.706479 2506 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hfp24"
Jan 29 11:53:03.706568 kubelet[2506]: E0129 11:53:03.706540 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hfp24_kube-system(92902e69-b7b9-4835-bfaf-552ffb2affbd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hfp24_kube-system(92902e69-b7b9-4835-bfaf-552ffb2affbd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hfp24" podUID="92902e69-b7b9-4835-bfaf-552ffb2affbd"
Jan 29 11:53:03.711153 containerd[1454]: time="2025-01-29T11:53:03.711120720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65d8d589cb-k69s4,Uid:3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a,Namespace:calico-system,Attempt:0,}"
Jan 29 11:53:03.718729 containerd[1454]: time="2025-01-29T11:53:03.718702293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f474569cb-m47z4,Uid:bdf8891f-6b1d-4211-be98-e56b0b0de0ad,Namespace:calico-apiserver,Attempt:0,}"
Jan 29 11:53:03.722068 kubelet[2506]: E0129 11:53:03.722024 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:53:03.722393 containerd[1454]: time="2025-01-29T11:53:03.722365780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wck8n,Uid:fd1af382-4da8-46d6-b100-8da54f486a77,Namespace:kube-system,Attempt:0,}"
Jan 29 11:53:03.800082 containerd[1454]: time="2025-01-29T11:53:03.799896527Z" level=error msg="Failed to destroy network for sandbox \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:53:03.800484 containerd[1454]: time="2025-01-29T11:53:03.800327948Z" level=error msg="encountered an error cleaning up failed sandbox
\"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:03.800484 containerd[1454]: time="2025-01-29T11:53:03.800386999Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f474569cb-q4cqv,Uid:b6dfda52-b36b-4860-a295-437d50d36570,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:03.801260 kubelet[2506]: E0129 11:53:03.800863 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:03.801260 kubelet[2506]: E0129 11:53:03.800938 2506 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f474569cb-q4cqv" Jan 29 11:53:03.801260 kubelet[2506]: E0129 11:53:03.800960 2506 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f474569cb-q4cqv" Jan 29 11:53:03.801379 kubelet[2506]: E0129 11:53:03.801016 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f474569cb-q4cqv_calico-apiserver(b6dfda52-b36b-4860-a295-437d50d36570)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f474569cb-q4cqv_calico-apiserver(b6dfda52-b36b-4860-a295-437d50d36570)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f474569cb-q4cqv" podUID="b6dfda52-b36b-4860-a295-437d50d36570" Jan 29 11:53:03.812885 containerd[1454]: time="2025-01-29T11:53:03.812835342Z" level=error msg="Failed to destroy network for sandbox \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 
11:53:03.813448 containerd[1454]: time="2025-01-29T11:53:03.813420442Z" level=error msg="encountered an error cleaning up failed sandbox \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:03.813553 containerd[1454]: time="2025-01-29T11:53:03.813533044Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65d8d589cb-k69s4,Uid:3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:03.814468 kubelet[2506]: E0129 11:53:03.813932 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:03.814468 kubelet[2506]: E0129 11:53:03.814021 2506 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65d8d589cb-k69s4" Jan 29 11:53:03.814468 kubelet[2506]: E0129 11:53:03.814044 2506 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65d8d589cb-k69s4" Jan 29 11:53:03.814588 kubelet[2506]: E0129 11:53:03.814113 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65d8d589cb-k69s4_calico-system(3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65d8d589cb-k69s4_calico-system(3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65d8d589cb-k69s4" podUID="3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a" Jan 29 11:53:03.823730 containerd[1454]: time="2025-01-29T11:53:03.822691382Z" level=error msg="Failed to destroy network for sandbox \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:03.823730 containerd[1454]: time="2025-01-29T11:53:03.823175451Z" level=error msg="encountered an error cleaning up failed sandbox \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:03.823730 containerd[1454]: time="2025-01-29T11:53:03.823222519Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wck8n,Uid:fd1af382-4da8-46d6-b100-8da54f486a77,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:03.823931 kubelet[2506]: E0129 11:53:03.823395 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:03.823931 kubelet[2506]: E0129 11:53:03.823450 2506 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wck8n" Jan 29 11:53:03.823931 kubelet[2506]: E0129 11:53:03.823469 2506 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wck8n" Jan 29 11:53:03.824031 kubelet[2506]: E0129 11:53:03.823506 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-wck8n_kube-system(fd1af382-4da8-46d6-b100-8da54f486a77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-wck8n_kube-system(fd1af382-4da8-46d6-b100-8da54f486a77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wck8n" podUID="fd1af382-4da8-46d6-b100-8da54f486a77" Jan 29 11:53:03.828904 systemd[1]: Created slice kubepods-besteffort-pod37021594_588c_4d5f_936f_12a90ea44463.slice - libcontainer container 
kubepods-besteffort-pod37021594_588c_4d5f_936f_12a90ea44463.slice. Jan 29 11:53:03.831450 containerd[1454]: time="2025-01-29T11:53:03.831419400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m4sk6,Uid:37021594-588c-4d5f-936f-12a90ea44463,Namespace:calico-system,Attempt:0,}" Jan 29 11:53:03.832068 containerd[1454]: time="2025-01-29T11:53:03.832031630Z" level=error msg="Failed to destroy network for sandbox \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:03.832460 containerd[1454]: time="2025-01-29T11:53:03.832423217Z" level=error msg="encountered an error cleaning up failed sandbox \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:03.832503 containerd[1454]: time="2025-01-29T11:53:03.832479583Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f474569cb-m47z4,Uid:bdf8891f-6b1d-4211-be98-e56b0b0de0ad,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:03.832765 kubelet[2506]: E0129 11:53:03.832723 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:03.832844 kubelet[2506]: E0129 11:53:03.832820 2506 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f474569cb-m47z4" Jan 29 11:53:03.832878 kubelet[2506]: E0129 11:53:03.832850 2506 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f474569cb-m47z4" Jan 29 11:53:03.832946 kubelet[2506]: E0129 11:53:03.832904 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f474569cb-m47z4_calico-apiserver(bdf8891f-6b1d-4211-be98-e56b0b0de0ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-5f474569cb-m47z4_calico-apiserver(bdf8891f-6b1d-4211-be98-e56b0b0de0ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f474569cb-m47z4" podUID="bdf8891f-6b1d-4211-be98-e56b0b0de0ad" Jan 29 11:53:03.899722 containerd[1454]: time="2025-01-29T11:53:03.899664079Z" level=error msg="Failed to destroy network for sandbox \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:03.900146 containerd[1454]: time="2025-01-29T11:53:03.900104928Z" level=error msg="encountered an error cleaning up failed sandbox \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:03.900187 containerd[1454]: time="2025-01-29T11:53:03.900169209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m4sk6,Uid:37021594-588c-4d5f-936f-12a90ea44463,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:03.900497 kubelet[2506]: E0129 11:53:03.900439 2506 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:03.900554 kubelet[2506]: E0129 11:53:03.900527 2506 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m4sk6" Jan 29 11:53:03.900589 kubelet[2506]: E0129 11:53:03.900566 2506 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m4sk6" Jan 29 11:53:03.900665 kubelet[2506]: E0129 11:53:03.900629 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-m4sk6_calico-system(37021594-588c-4d5f-936f-12a90ea44463)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-m4sk6_calico-system(37021594-588c-4d5f-936f-12a90ea44463)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-m4sk6" podUID="37021594-588c-4d5f-936f-12a90ea44463" Jan 29 11:53:04.243634 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6-shm.mount: Deactivated successfully. Jan 29 11:53:04.246858 kubelet[2506]: I0129 11:53:04.246775 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Jan 29 11:53:04.247460 containerd[1454]: time="2025-01-29T11:53:04.247427183Z" level=info msg="StopPodSandbox for \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\"" Jan 29 11:53:04.247722 containerd[1454]: time="2025-01-29T11:53:04.247601601Z" level=info msg="Ensure that sandbox 34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf in task-service has been cleanup successfully" Jan 29 11:53:04.250168 kubelet[2506]: E0129 11:53:04.249901 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:53:04.250938 containerd[1454]: time="2025-01-29T11:53:04.250904290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 11:53:04.251360 kubelet[2506]: I0129 11:53:04.251318 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" Jan 29 11:53:04.252140 containerd[1454]: time="2025-01-29T11:53:04.252077134Z" level=info msg="StopPodSandbox for \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\"" Jan 29 11:53:04.252288 containerd[1454]: time="2025-01-29T11:53:04.252246362Z" level=info msg="Ensure that sandbox 35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb in task-service has been cleanup successfully" Jan 29 11:53:04.253101 kubelet[2506]: I0129 11:53:04.253077 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" Jan 29 11:53:04.254265 containerd[1454]: time="2025-01-29T11:53:04.254105447Z" level=info msg="StopPodSandbox for \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\"" Jan 29 11:53:04.254879 containerd[1454]: time="2025-01-29T11:53:04.254574418Z" level=info msg="Ensure that sandbox 972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498 in task-service has been cleanup successfully" Jan 29 11:53:04.254919 kubelet[2506]: I0129 11:53:04.254902 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" Jan 29 11:53:04.255751 containerd[1454]: time="2025-01-29T11:53:04.255717497Z" level=info msg="StopPodSandbox for \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\"" Jan 29 11:53:04.256866 kubelet[2506]: I0129 11:53:04.256819 2506 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" Jan 29 11:53:04.257755 containerd[1454]: time="2025-01-29T11:53:04.257663985Z" level=info msg="Ensure that sandbox 9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62 in task-service has been cleanup successfully" Jan 29 11:53:04.259543 containerd[1454]: time="2025-01-29T11:53:04.259508553Z" level=info msg="StopPodSandbox for \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\"" Jan 29 11:53:04.260470 containerd[1454]: time="2025-01-29T11:53:04.259674214Z" level=info msg="Ensure that sandbox 133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b in task-service has been cleanup successfully" Jan 29 11:53:04.264437 kubelet[2506]: I0129 11:53:04.264413 2506 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Jan 29 11:53:04.267092 containerd[1454]: time="2025-01-29T11:53:04.267056240Z" level=info msg="StopPodSandbox for \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\"" Jan 29 11:53:04.267236 containerd[1454]: time="2025-01-29T11:53:04.267225708Z" level=info msg="Ensure that sandbox 5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6 in task-service has been cleanup successfully" Jan 29 11:53:04.302403 containerd[1454]: time="2025-01-29T11:53:04.302255653Z" level=error msg="StopPodSandbox for \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\" failed" error="failed to destroy network for sandbox \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:04.302921 kubelet[2506]: E0129 11:53:04.302695 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Jan 29 11:53:04.302921 kubelet[2506]: E0129 11:53:04.302768 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf"} Jan 29 11:53:04.302921 kubelet[2506]: E0129 11:53:04.302857 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bdf8891f-6b1d-4211-be98-e56b0b0de0ad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:53:04.302921 kubelet[2506]: E0129 11:53:04.302887 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bdf8891f-6b1d-4211-be98-e56b0b0de0ad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f474569cb-m47z4" podUID="bdf8891f-6b1d-4211-be98-e56b0b0de0ad" Jan 29 11:53:04.318283 containerd[1454]: time="2025-01-29T11:53:04.318036145Z" level=error msg="StopPodSandbox for \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\" failed" error="failed to destroy network for sandbox \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:04.318961 kubelet[2506]: E0129 11:53:04.318486 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" Jan 29 11:53:04.318961 kubelet[2506]: E0129 11:53:04.318826 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b"} Jan 29 11:53:04.318961 kubelet[2506]: E0129 11:53:04.318868 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b6dfda52-b36b-4860-a295-437d50d36570\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:53:04.318961 kubelet[2506]: E0129 11:53:04.318910 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b6dfda52-b36b-4860-a295-437d50d36570\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f474569cb-q4cqv" podUID="b6dfda52-b36b-4860-a295-437d50d36570" Jan 29 11:53:04.320937 containerd[1454]: time="2025-01-29T11:53:04.320899538Z" level=error msg="StopPodSandbox for \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\" failed" error="failed to destroy network for sandbox \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:04.321272 kubelet[2506]: E0129 11:53:04.321160 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" Jan 29 11:53:04.321272 kubelet[2506]: E0129 11:53:04.321202 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62"} Jan 29 11:53:04.321272 kubelet[2506]: E0129 11:53:04.321223 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:53:04.321272 kubelet[2506]: E0129 11:53:04.321240 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65d8d589cb-k69s4" podUID="3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a" Jan 29 11:53:04.326078 containerd[1454]: time="2025-01-29T11:53:04.326049729Z" level=error msg="StopPodSandbox for \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\" failed" error="failed to destroy network for sandbox \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:04.326143 containerd[1454]: time="2025-01-29T11:53:04.326100284Z" level=error msg="StopPodSandbox for \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\" failed" error="failed to destroy network for sandbox \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:04.326383 kubelet[2506]: E0129 11:53:04.326349 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" Jan 29 11:53:04.326493 kubelet[2506]: E0129 11:53:04.326463 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498"} Jan 29 11:53:04.326493 kubelet[2506]: E0129 
11:53:04.326495 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fd1af382-4da8-46d6-b100-8da54f486a77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:53:04.326709 kubelet[2506]: E0129 11:53:04.326531 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fd1af382-4da8-46d6-b100-8da54f486a77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wck8n" podUID="fd1af382-4da8-46d6-b100-8da54f486a77" Jan 29 11:53:04.326709 kubelet[2506]: E0129 11:53:04.326373 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" Jan 29 11:53:04.326709 kubelet[2506]: E0129 11:53:04.326557 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb"} Jan 29 11:53:04.326709 kubelet[2506]: E0129 11:53:04.326575 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"37021594-588c-4d5f-936f-12a90ea44463\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:53:04.326906 kubelet[2506]: E0129 11:53:04.326605 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"37021594-588c-4d5f-936f-12a90ea44463\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-m4sk6" podUID="37021594-588c-4d5f-936f-12a90ea44463" Jan 29 11:53:04.332374 containerd[1454]: time="2025-01-29T11:53:04.332335464Z" level=error msg="StopPodSandbox for \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\" failed" error="failed to destroy network for sandbox \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 29 11:53:04.332489 kubelet[2506]: E0129 11:53:04.332461 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Jan 29 11:53:04.332530 kubelet[2506]: E0129 11:53:04.332506 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6"} Jan 29 11:53:04.332564 kubelet[2506]: E0129 11:53:04.332543 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92902e69-b7b9-4835-bfaf-552ffb2affbd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:53:04.332610 kubelet[2506]: E0129 11:53:04.332579 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92902e69-b7b9-4835-bfaf-552ffb2affbd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hfp24" podUID="92902e69-b7b9-4835-bfaf-552ffb2affbd" Jan 29 11:53:08.551296 systemd[1]: Started sshd@9-10.0.0.52:22-10.0.0.1:48932.service - OpenSSH per-connection server daemon (10.0.0.1:48932). Jan 29 11:53:08.681044 sshd[3594]: Accepted publickey for core from 10.0.0.1 port 48932 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:53:08.683315 sshd[3594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:53:08.700727 systemd-logind[1439]: New session 10 of user core. Jan 29 11:53:08.706078 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:53:08.870588 sshd[3594]: pam_unix(sshd:session): session closed for user core Jan 29 11:53:08.875778 systemd[1]: sshd@9-10.0.0.52:22-10.0.0.1:48932.service: Deactivated successfully. Jan 29 11:53:08.878468 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:53:08.879548 systemd-logind[1439]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:53:08.881408 systemd-logind[1439]: Removed session 10. Jan 29 11:53:09.413160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1157026960.mount: Deactivated successfully. 
Jan 29 11:53:10.232759 containerd[1454]: time="2025-01-29T11:53:10.232681109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:53:10.233710 containerd[1454]: time="2025-01-29T11:53:10.233667742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 11:53:10.235258 containerd[1454]: time="2025-01-29T11:53:10.235193268Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:53:10.237290 containerd[1454]: time="2025-01-29T11:53:10.237232759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:53:10.237855 containerd[1454]: time="2025-01-29T11:53:10.237812577Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.986844588s" Jan 29 11:53:10.237855 containerd[1454]: time="2025-01-29T11:53:10.237850708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 11:53:10.247116 containerd[1454]: time="2025-01-29T11:53:10.247067830Z" level=info msg="CreateContainer within sandbox \"cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:53:10.272359 containerd[1454]: time="2025-01-29T11:53:10.272306122Z" level=info msg="CreateContainer within sandbox \"cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3863cc3c883c014968d7eee59119c82bd372e7484b60e27869726c7a15a6b57d\"" Jan 29 11:53:10.272983 containerd[1454]: time="2025-01-29T11:53:10.272948869Z" level=info msg="StartContainer for \"3863cc3c883c014968d7eee59119c82bd372e7484b60e27869726c7a15a6b57d\"" Jan 29 11:53:10.355032 systemd[1]: Started cri-containerd-3863cc3c883c014968d7eee59119c82bd372e7484b60e27869726c7a15a6b57d.scope - libcontainer container 3863cc3c883c014968d7eee59119c82bd372e7484b60e27869726c7a15a6b57d. Jan 29 11:53:10.480443 containerd[1454]: time="2025-01-29T11:53:10.480357399Z" level=info msg="StartContainer for \"3863cc3c883c014968d7eee59119c82bd372e7484b60e27869726c7a15a6b57d\" returns successfully" Jan 29 11:53:10.516241 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 11:53:10.516419 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 29 11:53:10.543068 systemd[1]: cri-containerd-3863cc3c883c014968d7eee59119c82bd372e7484b60e27869726c7a15a6b57d.scope: Deactivated successfully. Jan 29 11:53:10.566715 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3863cc3c883c014968d7eee59119c82bd372e7484b60e27869726c7a15a6b57d-rootfs.mount: Deactivated successfully.
Jan 29 11:53:10.884369 containerd[1454]: time="2025-01-29T11:53:10.884198634Z" level=info msg="shim disconnected" id=3863cc3c883c014968d7eee59119c82bd372e7484b60e27869726c7a15a6b57d namespace=k8s.io Jan 29 11:53:10.884369 containerd[1454]: time="2025-01-29T11:53:10.884276951Z" level=warning msg="cleaning up after shim disconnected" id=3863cc3c883c014968d7eee59119c82bd372e7484b60e27869726c7a15a6b57d namespace=k8s.io Jan 29 11:53:10.884369 containerd[1454]: time="2025-01-29T11:53:10.884289073Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:53:11.286018 kubelet[2506]: I0129 11:53:11.285972 2506 scope.go:117] "RemoveContainer" containerID="3863cc3c883c014968d7eee59119c82bd372e7484b60e27869726c7a15a6b57d" Jan 29 11:53:11.286643 kubelet[2506]: E0129 11:53:11.286071 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:53:11.288881 containerd[1454]: time="2025-01-29T11:53:11.288702886Z" level=info msg="CreateContainer within sandbox \"cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}" Jan 29 11:53:11.316469 containerd[1454]: time="2025-01-29T11:53:11.316419798Z" level=info msg="CreateContainer within sandbox \"cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"6df60ebf42cda3910a5a3fcaee9e7b52777ca39571462934eb96138bbc2d8176\"" Jan 29 11:53:11.317146 containerd[1454]: time="2025-01-29T11:53:11.317105876Z" level=info msg="StartContainer for \"6df60ebf42cda3910a5a3fcaee9e7b52777ca39571462934eb96138bbc2d8176\"" Jan 29 11:53:11.354002 systemd[1]: Started cri-containerd-6df60ebf42cda3910a5a3fcaee9e7b52777ca39571462934eb96138bbc2d8176.scope - libcontainer container 6df60ebf42cda3910a5a3fcaee9e7b52777ca39571462934eb96138bbc2d8176. Jan 29 11:53:11.392141 containerd[1454]: time="2025-01-29T11:53:11.392038588Z" level=info msg="StartContainer for \"6df60ebf42cda3910a5a3fcaee9e7b52777ca39571462934eb96138bbc2d8176\" returns successfully" Jan 29 11:53:11.462044 systemd[1]: cri-containerd-6df60ebf42cda3910a5a3fcaee9e7b52777ca39571462934eb96138bbc2d8176.scope: Deactivated successfully. Jan 29 11:53:11.483616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6df60ebf42cda3910a5a3fcaee9e7b52777ca39571462934eb96138bbc2d8176-rootfs.mount: Deactivated successfully. 
Jan 29 11:53:11.488646 containerd[1454]: time="2025-01-29T11:53:11.488578928Z" level=info msg="shim disconnected" id=6df60ebf42cda3910a5a3fcaee9e7b52777ca39571462934eb96138bbc2d8176 namespace=k8s.io Jan 29 11:53:11.488646 containerd[1454]: time="2025-01-29T11:53:11.488642457Z" level=warning msg="cleaning up after shim disconnected" id=6df60ebf42cda3910a5a3fcaee9e7b52777ca39571462934eb96138bbc2d8176 namespace=k8s.io Jan 29 11:53:11.488865 containerd[1454]: time="2025-01-29T11:53:11.488654630Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:53:12.290410 kubelet[2506]: I0129 11:53:12.290372 2506 scope.go:117] "RemoveContainer" containerID="3863cc3c883c014968d7eee59119c82bd372e7484b60e27869726c7a15a6b57d" Jan 29 11:53:12.290923 kubelet[2506]: I0129 11:53:12.290757 2506 scope.go:117] "RemoveContainer" containerID="6df60ebf42cda3910a5a3fcaee9e7b52777ca39571462934eb96138bbc2d8176" Jan 29 11:53:12.290923 kubelet[2506]: E0129 11:53:12.290869 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:53:12.291432 kubelet[2506]: E0129 11:53:12.291370 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-b8wm5_calico-system(3c940188-3a91-4705-bd60-7146fc5afc94)\"" pod="calico-system/calico-node-b8wm5" podUID="3c940188-3a91-4705-bd60-7146fc5afc94" Jan 29 11:53:12.291972 containerd[1454]: time="2025-01-29T11:53:12.291931863Z" level=info msg="RemoveContainer for \"3863cc3c883c014968d7eee59119c82bd372e7484b60e27869726c7a15a6b57d\"" Jan 29 11:53:12.298347 containerd[1454]: time="2025-01-29T11:53:12.298233166Z" level=info msg="RemoveContainer for \"3863cc3c883c014968d7eee59119c82bd372e7484b60e27869726c7a15a6b57d\" returns successfully" Jan 29 11:53:13.884188 systemd[1]: Started sshd@10-10.0.0.52:22-10.0.0.1:38786.service - OpenSSH per-connection server daemon (10.0.0.1:38786). Jan 29 11:53:13.940709 sshd[3747]: Accepted publickey for core from 10.0.0.1 port 38786 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:53:13.942542 sshd[3747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:53:13.947604 systemd-logind[1439]: New session 11 of user core. Jan 29 11:53:13.954069 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:53:14.095016 sshd[3747]: pam_unix(sshd:session): session closed for user core Jan 29 11:53:14.099377 systemd[1]: sshd@10-10.0.0.52:22-10.0.0.1:38786.service: Deactivated successfully. Jan 29 11:53:14.101879 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:53:14.102535 systemd-logind[1439]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:53:14.103414 systemd-logind[1439]: Removed session 11. 
Jan 29 11:53:14.823094 containerd[1454]: time="2025-01-29T11:53:14.823028237Z" level=info msg="StopPodSandbox for \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\"" Jan 29 11:53:14.858049 containerd[1454]: time="2025-01-29T11:53:14.857974297Z" level=error msg="StopPodSandbox for \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\" failed" error="failed to destroy network for sandbox \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:14.858395 kubelet[2506]: E0129 11:53:14.858297 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" Jan 29 11:53:14.858782 kubelet[2506]: E0129 11:53:14.858395 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb"} Jan 29 11:53:14.858782 kubelet[2506]: E0129 11:53:14.858443 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"37021594-588c-4d5f-936f-12a90ea44463\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:53:14.858782 kubelet[2506]: E0129 11:53:14.858478 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"37021594-588c-4d5f-936f-12a90ea44463\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-m4sk6" podUID="37021594-588c-4d5f-936f-12a90ea44463" Jan 29 11:53:14.876111 kubelet[2506]: I0129 11:53:14.876072 2506 scope.go:117] "RemoveContainer" containerID="6df60ebf42cda3910a5a3fcaee9e7b52777ca39571462934eb96138bbc2d8176" Jan 29 11:53:14.876250 kubelet[2506]: E0129 11:53:14.876164 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:53:14.876288 kubelet[2506]: E0129 11:53:14.876267 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-b8wm5_calico-system(3c940188-3a91-4705-bd60-7146fc5afc94)\"" pod="calico-system/calico-node-b8wm5" podUID="3c940188-3a91-4705-bd60-7146fc5afc94" Jan 29 11:53:15.822759 containerd[1454]: time="2025-01-29T11:53:15.822681683Z" level=info 
msg="StopPodSandbox for \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\"" Jan 29 11:53:15.824105 containerd[1454]: time="2025-01-29T11:53:15.824065180Z" level=info msg="StopPodSandbox for \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\"" Jan 29 11:53:15.858330 containerd[1454]: time="2025-01-29T11:53:15.858248192Z" level=error msg="StopPodSandbox for \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\" failed" error="failed to destroy network for sandbox \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:15.858558 kubelet[2506]: E0129 11:53:15.858506 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Jan 29 11:53:15.858962 kubelet[2506]: E0129 11:53:15.858576 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf"} Jan 29 11:53:15.858962 kubelet[2506]: E0129 11:53:15.858628 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bdf8891f-6b1d-4211-be98-e56b0b0de0ad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:53:15.858962 kubelet[2506]: E0129 11:53:15.858659 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bdf8891f-6b1d-4211-be98-e56b0b0de0ad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f474569cb-m47z4" podUID="bdf8891f-6b1d-4211-be98-e56b0b0de0ad" Jan 29 11:53:15.860319 containerd[1454]: time="2025-01-29T11:53:15.860277181Z" level=error msg="StopPodSandbox for \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\" failed" error="failed to destroy network for sandbox \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:15.860570 kubelet[2506]: E0129 11:53:15.860515 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" Jan 29 11:53:15.860644 kubelet[2506]: E0129 11:53:15.860586 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b"} Jan 29 11:53:15.860644 kubelet[2506]: E0129 11:53:15.860632 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b6dfda52-b36b-4860-a295-437d50d36570\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:53:15.860712 kubelet[2506]: E0129 11:53:15.860663 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b6dfda52-b36b-4860-a295-437d50d36570\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f474569cb-q4cqv" podUID="b6dfda52-b36b-4860-a295-437d50d36570" Jan 29 11:53:16.822370 containerd[1454]: time="2025-01-29T11:53:16.822285820Z" level=info msg="StopPodSandbox for \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\"" Jan 29 11:53:16.822370 containerd[1454]: time="2025-01-29T11:53:16.822374647Z" level=info msg="StopPodSandbox for \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\"" Jan 29 11:53:16.851621 containerd[1454]: time="2025-01-29T11:53:16.851546296Z" level=error msg="StopPodSandbox for \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\" failed" error="failed to destroy network for sandbox \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:16.852240 containerd[1454]: time="2025-01-29T11:53:16.852181458Z" level=error msg="StopPodSandbox for \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\" failed" error="failed to destroy network for sandbox \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:16.852285 kubelet[2506]: E0129 11:53:16.851830 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" Jan 29 11:53:16.852285 
kubelet[2506]: E0129 11:53:16.851892 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498"} Jan 29 11:53:16.852285 kubelet[2506]: E0129 11:53:16.851929 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fd1af382-4da8-46d6-b100-8da54f486a77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:53:16.852285 kubelet[2506]: E0129 11:53:16.851952 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fd1af382-4da8-46d6-b100-8da54f486a77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wck8n" podUID="fd1af382-4da8-46d6-b100-8da54f486a77" Jan 29 11:53:16.852519 kubelet[2506]: E0129 11:53:16.852389 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Jan 29 11:53:16.852519 kubelet[2506]: E0129 11:53:16.852419 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6"} Jan 29 11:53:16.852519 kubelet[2506]: E0129 11:53:16.852442 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92902e69-b7b9-4835-bfaf-552ffb2affbd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:53:16.852519 kubelet[2506]: E0129 11:53:16.852460 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92902e69-b7b9-4835-bfaf-552ffb2affbd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hfp24" podUID="92902e69-b7b9-4835-bfaf-552ffb2affbd" Jan 29 11:53:17.823170 containerd[1454]: time="2025-01-29T11:53:17.823098621Z" level=info msg="StopPodSandbox for \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\"" 
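Every one of the "failed to destroy network for sandbox" errors above bottoms out in the same stat: the Calico CNI plugin resolves the node name from /var/lib/calico/nodename, a file that a healthy calico-node container writes on startup into a hostPath-mounted directory. With calico-node stuck in CrashLoopBackOff the file never appears, so every CNI DEL for every sandbox on the node fails the same way. The Go sketch below illustrates that precondition; the path and the advice text are taken verbatim from the log, while the surrounding function is a simplified stand-in, not Calico's actual source.

```go
package main

import (
	"fmt"
	"os"
)

// Illustration of the precondition behind the repeated teardown
// errors: the node name lives in a file that only a running
// calico-node container creates.
func nodenameFromDisk() (string, error) {
	const path = "/var/lib/calico/nodename" // path copied from the log
	b, err := os.ReadFile(path)
	if err != nil {
		// Mirrors the message in the log: without calico-node running
		// and mounting /var/lib/calico/, CNI delete cannot proceed.
		return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", path, err)
	}
	return string(b), nil
}

func main() {
	name, err := nodenameFromDisk()
	if err != nil {
		fmt.Println("CNI delete would fail:", err)
		return
	}
	fmt.Println("node name:", name)
}
```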
Jan 29 11:53:17.858552 containerd[1454]: time="2025-01-29T11:53:17.858486985Z" level=error msg="StopPodSandbox for \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\" failed" error="failed to destroy network for sandbox \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:17.859073 kubelet[2506]: E0129 11:53:17.858653 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" Jan 29 11:53:17.859073 kubelet[2506]: E0129 11:53:17.858738 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62"} Jan 29 11:53:17.859073 kubelet[2506]: E0129 11:53:17.858797 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:53:17.859073 kubelet[2506]: E0129 11:53:17.858828 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65d8d589cb-k69s4" podUID="3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a" Jan 29 11:53:19.107330 systemd[1]: Started sshd@11-10.0.0.52:22-10.0.0.1:38802.service - OpenSSH per-connection server daemon (10.0.0.1:38802). Jan 29 11:53:19.143647 sshd[3905]: Accepted publickey for core from 10.0.0.1 port 38802 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:53:19.145601 sshd[3905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:53:19.150581 systemd-logind[1439]: New session 12 of user core. Jan 29 11:53:19.165051 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:53:19.298125 sshd[3905]: pam_unix(sshd:session): session closed for user core Jan 29 11:53:19.303149 systemd[1]: sshd@11-10.0.0.52:22-10.0.0.1:38802.service: Deactivated successfully. Jan 29 11:53:19.305686 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:53:19.306539 systemd-logind[1439]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:53:19.307641 systemd-logind[1439]: Removed session 12. 
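The kubelet lines tagged log.go:32 are the client side of a CRI gRPC call: kubelet asks containerd's CRI service to StopPodSandbox, and the plugin's error string comes back wrapped as "rpc error: code = Unknown desc = ...". A sketch of issuing the same call directly is below, assuming a recent google.golang.org/grpc, the standard k8s.io/cri-api v1 client, and containerd's default socket path; the sandbox ID is copied from the log.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// Sketch: call CRI StopPodSandbox the way kubelet does. Socket path
// is the containerd default and an assumption about this host.
func main() {
	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	_, err = rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{
		PodSandboxId: "9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62",
	})
	if err != nil {
		// On this node the reply would be the calico stat error,
		// relayed exactly as kubelet logs it.
		fmt.Println("StopPodSandbox failed:", err)
	}
}
```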
Jan 29 11:53:24.310422 systemd[1]: Started sshd@12-10.0.0.52:22-10.0.0.1:49636.service - OpenSSH per-connection server daemon (10.0.0.1:49636). Jan 29 11:53:24.349501 sshd[3920]: Accepted publickey for core from 10.0.0.1 port 49636 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:53:24.351851 sshd[3920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:53:24.357152 systemd-logind[1439]: New session 13 of user core. Jan 29 11:53:24.363962 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:53:24.493732 sshd[3920]: pam_unix(sshd:session): session closed for user core Jan 29 11:53:24.505015 systemd[1]: sshd@12-10.0.0.52:22-10.0.0.1:49636.service: Deactivated successfully. Jan 29 11:53:24.507509 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:53:24.509624 systemd-logind[1439]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:53:24.519370 systemd[1]: Started sshd@13-10.0.0.52:22-10.0.0.1:49640.service - OpenSSH per-connection server daemon (10.0.0.1:49640). Jan 29 11:53:24.520562 systemd-logind[1439]: Removed session 13. Jan 29 11:53:24.552663 sshd[3935]: Accepted publickey for core from 10.0.0.1 port 49640 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:53:24.555106 sshd[3935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:53:24.560514 systemd-logind[1439]: New session 14 of user core. Jan 29 11:53:24.577052 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 11:53:24.737947 sshd[3935]: pam_unix(sshd:session): session closed for user core Jan 29 11:53:24.746918 systemd[1]: sshd@13-10.0.0.52:22-10.0.0.1:49640.service: Deactivated successfully. Jan 29 11:53:24.749126 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:53:24.752931 systemd-logind[1439]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:53:24.761285 systemd[1]: Started sshd@14-10.0.0.52:22-10.0.0.1:49656.service - OpenSSH per-connection server daemon (10.0.0.1:49656). Jan 29 11:53:24.763767 systemd-logind[1439]: Removed session 14. Jan 29 11:53:24.810398 sshd[3948]: Accepted publickey for core from 10.0.0.1 port 49656 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:53:24.812827 sshd[3948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:53:24.819834 systemd-logind[1439]: New session 15 of user core. Jan 29 11:53:24.830060 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 11:53:24.976029 sshd[3948]: pam_unix(sshd:session): session closed for user core Jan 29 11:53:24.982028 systemd[1]: sshd@14-10.0.0.52:22-10.0.0.1:49656.service: Deactivated successfully. Jan 29 11:53:24.985359 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:53:24.986661 systemd-logind[1439]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:53:24.988267 systemd-logind[1439]: Removed session 15. 
Jan 29 11:53:26.822823 containerd[1454]: time="2025-01-29T11:53:26.822748074Z" level=info msg="StopPodSandbox for \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\"" Jan 29 11:53:26.855193 containerd[1454]: time="2025-01-29T11:53:26.855117233Z" level=error msg="StopPodSandbox for \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\" failed" error="failed to destroy network for sandbox \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:26.855475 kubelet[2506]: E0129 11:53:26.855411 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" Jan 29 11:53:26.855913 kubelet[2506]: E0129 11:53:26.855488 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b"} Jan 29 11:53:26.855913 kubelet[2506]: E0129 11:53:26.855542 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b6dfda52-b36b-4860-a295-437d50d36570\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:53:26.855913 kubelet[2506]: E0129 11:53:26.855567 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b6dfda52-b36b-4860-a295-437d50d36570\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f474569cb-q4cqv" podUID="b6dfda52-b36b-4860-a295-437d50d36570" Jan 29 11:53:27.822968 kubelet[2506]: I0129 11:53:27.822704 2506 scope.go:117] "RemoveContainer" containerID="6df60ebf42cda3910a5a3fcaee9e7b52777ca39571462934eb96138bbc2d8176" Jan 29 11:53:27.822968 kubelet[2506]: E0129 11:53:27.822807 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:53:27.823421 containerd[1454]: time="2025-01-29T11:53:27.823016569Z" level=info msg="StopPodSandbox for \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\"" Jan 29 11:53:27.825404 containerd[1454]: time="2025-01-29T11:53:27.825356409Z" level=info msg="CreateContainer within sandbox \"cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}" Jan 29 11:53:27.850390 containerd[1454]: 
time="2025-01-29T11:53:27.850326574Z" level=error msg="StopPodSandbox for \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\" failed" error="failed to destroy network for sandbox \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:27.850638 kubelet[2506]: E0129 11:53:27.850577 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" Jan 29 11:53:27.850706 kubelet[2506]: E0129 11:53:27.850640 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb"} Jan 29 11:53:27.850706 kubelet[2506]: E0129 11:53:27.850687 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"37021594-588c-4d5f-936f-12a90ea44463\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:53:27.850831 kubelet[2506]: E0129 11:53:27.850715 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"37021594-588c-4d5f-936f-12a90ea44463\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-m4sk6" podUID="37021594-588c-4d5f-936f-12a90ea44463" Jan 29 11:53:27.993750 containerd[1454]: time="2025-01-29T11:53:27.993699832Z" level=info msg="CreateContainer within sandbox \"cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692\" for &ContainerMetadata{Name:calico-node,Attempt:2,} returns container id \"2e30c9a55dd5805821f16a342020f741bb2cedda60154c93a99851799562d8c6\"" Jan 29 11:53:27.994421 containerd[1454]: time="2025-01-29T11:53:27.994081618Z" level=info msg="StartContainer for \"2e30c9a55dd5805821f16a342020f741bb2cedda60154c93a99851799562d8c6\"" Jan 29 11:53:28.035929 systemd[1]: Started cri-containerd-2e30c9a55dd5805821f16a342020f741bb2cedda60154c93a99851799562d8c6.scope - libcontainer container 2e30c9a55dd5805821f16a342020f741bb2cedda60154c93a99851799562d8c6. Jan 29 11:53:28.132687 containerd[1454]: time="2025-01-29T11:53:28.132562648Z" level=info msg="StartContainer for \"2e30c9a55dd5805821f16a342020f741bb2cedda60154c93a99851799562d8c6\" returns successfully" Jan 29 11:53:28.146692 systemd[1]: cri-containerd-2e30c9a55dd5805821f16a342020f741bb2cedda60154c93a99851799562d8c6.scope: Deactivated successfully. 
Jan 29 11:53:28.197453 containerd[1454]: time="2025-01-29T11:53:28.197383952Z" level=info msg="shim disconnected" id=2e30c9a55dd5805821f16a342020f741bb2cedda60154c93a99851799562d8c6 namespace=k8s.io Jan 29 11:53:28.197453 containerd[1454]: time="2025-01-29T11:53:28.197440349Z" level=warning msg="cleaning up after shim disconnected" id=2e30c9a55dd5805821f16a342020f741bb2cedda60154c93a99851799562d8c6 namespace=k8s.io Jan 29 11:53:28.197453 containerd[1454]: time="2025-01-29T11:53:28.197450257Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:53:28.329413 kubelet[2506]: I0129 11:53:28.329360 2506 scope.go:117] "RemoveContainer" containerID="6df60ebf42cda3910a5a3fcaee9e7b52777ca39571462934eb96138bbc2d8176" Jan 29 11:53:28.329976 kubelet[2506]: I0129 11:53:28.329848 2506 scope.go:117] "RemoveContainer" containerID="2e30c9a55dd5805821f16a342020f741bb2cedda60154c93a99851799562d8c6" Jan 29 11:53:28.330215 kubelet[2506]: E0129 11:53:28.330158 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:53:28.330373 kubelet[2506]: E0129 11:53:28.330346 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-b8wm5_calico-system(3c940188-3a91-4705-bd60-7146fc5afc94)\"" pod="calico-system/calico-node-b8wm5" podUID="3c940188-3a91-4705-bd60-7146fc5afc94" Jan 29 11:53:28.334714 containerd[1454]: time="2025-01-29T11:53:28.334388983Z" level=info msg="RemoveContainer for \"6df60ebf42cda3910a5a3fcaee9e7b52777ca39571462934eb96138bbc2d8176\"" Jan 29 11:53:28.338981 containerd[1454]: time="2025-01-29T11:53:28.338952344Z" level=info msg="RemoveContainer for \"6df60ebf42cda3910a5a3fcaee9e7b52777ca39571462934eb96138bbc2d8176\" returns successfully" Jan 29 11:53:28.822294 containerd[1454]: time="2025-01-29T11:53:28.822216625Z" level=info msg="StopPodSandbox for \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\"" Jan 29 11:53:28.852829 containerd[1454]: time="2025-01-29T11:53:28.852758481Z" level=error msg="StopPodSandbox for \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\" failed" error="failed to destroy network for sandbox \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:28.853263 kubelet[2506]: E0129 11:53:28.853053 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Jan 29 11:53:28.853263 kubelet[2506]: E0129 11:53:28.853128 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6"} Jan 29 11:53:28.853263 kubelet[2506]: E0129 11:53:28.853182 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"92902e69-b7b9-4835-bfaf-552ffb2affbd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:53:28.853263 kubelet[2506]: E0129 11:53:28.853232 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92902e69-b7b9-4835-bfaf-552ffb2affbd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hfp24" podUID="92902e69-b7b9-4835-bfaf-552ffb2affbd" Jan 29 11:53:28.934519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e30c9a55dd5805821f16a342020f741bb2cedda60154c93a99851799562d8c6-rootfs.mount: Deactivated successfully. Jan 29 11:53:29.822585 containerd[1454]: time="2025-01-29T11:53:29.822510679Z" level=info msg="StopPodSandbox for \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\"" Jan 29 11:53:29.823091 containerd[1454]: time="2025-01-29T11:53:29.822529054Z" level=info msg="StopPodSandbox for \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\"" Jan 29 11:53:29.855489 containerd[1454]: time="2025-01-29T11:53:29.855404144Z" level=error msg="StopPodSandbox for \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\" failed" error="failed to destroy network for sandbox \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:29.856024 kubelet[2506]: E0129 11:53:29.855705 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Jan 29 11:53:29.856024 kubelet[2506]: E0129 11:53:29.855772 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf"} Jan 29 11:53:29.856024 kubelet[2506]: E0129 11:53:29.855824 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bdf8891f-6b1d-4211-be98-e56b0b0de0ad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:53:29.856024 kubelet[2506]: E0129 11:53:29.855849 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"bdf8891f-6b1d-4211-be98-e56b0b0de0ad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f474569cb-m47z4" podUID="bdf8891f-6b1d-4211-be98-e56b0b0de0ad" Jan 29 11:53:29.859533 containerd[1454]: time="2025-01-29T11:53:29.859482135Z" level=error msg="StopPodSandbox for \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\" failed" error="failed to destroy network for sandbox \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:29.859803 kubelet[2506]: E0129 11:53:29.859726 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" Jan 29 11:53:29.859933 kubelet[2506]: E0129 11:53:29.859890 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498"} Jan 29 11:53:29.859976 kubelet[2506]: E0129 11:53:29.859940 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fd1af382-4da8-46d6-b100-8da54f486a77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:53:29.860034 kubelet[2506]: E0129 11:53:29.859972 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fd1af382-4da8-46d6-b100-8da54f486a77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wck8n" podUID="fd1af382-4da8-46d6-b100-8da54f486a77" Jan 29 11:53:29.988335 systemd[1]: Started sshd@15-10.0.0.52:22-10.0.0.1:49666.service - OpenSSH per-connection server daemon (10.0.0.1:49666). Jan 29 11:53:30.023968 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 49666 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:53:30.025777 sshd[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:53:30.030636 systemd-logind[1439]: New session 16 of user core. Jan 29 11:53:30.043104 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 29 11:53:30.157487 sshd[4141]: pam_unix(sshd:session): session closed for user core Jan 29 11:53:30.161961 systemd[1]: sshd@15-10.0.0.52:22-10.0.0.1:49666.service: Deactivated successfully. Jan 29 11:53:30.164416 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:53:30.165226 systemd-logind[1439]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:53:30.166295 systemd-logind[1439]: Removed session 16. Jan 29 11:53:32.823070 containerd[1454]: time="2025-01-29T11:53:32.823021381Z" level=info msg="StopPodSandbox for \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\"" Jan 29 11:53:32.858672 containerd[1454]: time="2025-01-29T11:53:32.858605442Z" level=error msg="StopPodSandbox for \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\" failed" error="failed to destroy network for sandbox \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:32.858922 kubelet[2506]: E0129 11:53:32.858876 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" Jan 29 11:53:32.859266 kubelet[2506]: E0129 11:53:32.858933 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62"} Jan 29 11:53:32.859266 kubelet[2506]: E0129 11:53:32.858975 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:53:32.859266 kubelet[2506]: E0129 11:53:32.859002 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65d8d589cb-k69s4" podUID="3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a" Jan 29 11:53:34.995745 kubelet[2506]: I0129 11:53:34.995692 2506 scope.go:117] "RemoveContainer" containerID="2e30c9a55dd5805821f16a342020f741bb2cedda60154c93a99851799562d8c6" Jan 29 11:53:34.996240 kubelet[2506]: E0129 11:53:34.995821 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:53:34.996240 kubelet[2506]: E0129 
11:53:34.995911 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-b8wm5_calico-system(3c940188-3a91-4705-bd60-7146fc5afc94)\"" pod="calico-system/calico-node-b8wm5" podUID="3c940188-3a91-4705-bd60-7146fc5afc94" Jan 29 11:53:35.170996 systemd[1]: Started sshd@16-10.0.0.52:22-10.0.0.1:42218.service - OpenSSH per-connection server daemon (10.0.0.1:42218). Jan 29 11:53:35.207480 sshd[4178]: Accepted publickey for core from 10.0.0.1 port 42218 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:53:35.209224 sshd[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:53:35.213805 systemd-logind[1439]: New session 17 of user core. Jan 29 11:53:35.220938 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 11:53:35.331863 sshd[4178]: pam_unix(sshd:session): session closed for user core Jan 29 11:53:35.335779 systemd[1]: sshd@16-10.0.0.52:22-10.0.0.1:42218.service: Deactivated successfully. Jan 29 11:53:35.337983 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 11:53:35.338596 systemd-logind[1439]: Session 17 logged out. Waiting for processes to exit. Jan 29 11:53:35.339444 systemd-logind[1439]: Removed session 17. Jan 29 11:53:39.822910 containerd[1454]: time="2025-01-29T11:53:39.822622851Z" level=info msg="StopPodSandbox for \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\"" Jan 29 11:53:39.852154 containerd[1454]: time="2025-01-29T11:53:39.852042235Z" level=error msg="StopPodSandbox for \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\" failed" error="failed to destroy network for sandbox \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:39.852389 kubelet[2506]: E0129 11:53:39.852320 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" Jan 29 11:53:39.852860 kubelet[2506]: E0129 11:53:39.852404 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb"} Jan 29 11:53:39.852860 kubelet[2506]: E0129 11:53:39.852455 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"37021594-588c-4d5f-936f-12a90ea44463\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:53:39.852860 kubelet[2506]: E0129 11:53:39.852489 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"37021594-588c-4d5f-936f-12a90ea44463\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-m4sk6" podUID="37021594-588c-4d5f-936f-12a90ea44463" Jan 29 11:53:40.344598 systemd[1]: Started sshd@17-10.0.0.52:22-10.0.0.1:42226.service - OpenSSH per-connection server daemon (10.0.0.1:42226). Jan 29 11:53:40.383387 sshd[4219]: Accepted publickey for core from 10.0.0.1 port 42226 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:53:40.385380 sshd[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:53:40.389445 systemd-logind[1439]: New session 18 of user core. Jan 29 11:53:40.399914 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 11:53:40.512133 sshd[4219]: pam_unix(sshd:session): session closed for user core Jan 29 11:53:40.516332 systemd[1]: sshd@17-10.0.0.52:22-10.0.0.1:42226.service: Deactivated successfully. Jan 29 11:53:40.518778 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 11:53:40.519533 systemd-logind[1439]: Session 18 logged out. Waiting for processes to exit. Jan 29 11:53:40.520561 systemd-logind[1439]: Removed session 18. Jan 29 11:53:40.823431 containerd[1454]: time="2025-01-29T11:53:40.823040157Z" level=info msg="StopPodSandbox for \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\"" Jan 29 11:53:40.823431 containerd[1454]: time="2025-01-29T11:53:40.823115130Z" level=info msg="StopPodSandbox for \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\"" Jan 29 11:53:40.823431 containerd[1454]: time="2025-01-29T11:53:40.823219962Z" level=info msg="StopPodSandbox for \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\"" Jan 29 11:53:40.860250 containerd[1454]: time="2025-01-29T11:53:40.860191524Z" level=error msg="StopPodSandbox for \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\" failed" error="failed to destroy network for sandbox \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:40.860484 kubelet[2506]: E0129 11:53:40.860423 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" Jan 29 11:53:40.860906 kubelet[2506]: E0129 11:53:40.860504 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b"} Jan 29 11:53:40.860906 kubelet[2506]: E0129 11:53:40.860543 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b6dfda52-b36b-4860-a295-437d50d36570\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:53:40.860906 kubelet[2506]: E0129 11:53:40.860568 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b6dfda52-b36b-4860-a295-437d50d36570\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f474569cb-q4cqv" podUID="b6dfda52-b36b-4860-a295-437d50d36570" Jan 29 11:53:40.861668 containerd[1454]: time="2025-01-29T11:53:40.861574404Z" level=error msg="StopPodSandbox for \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\" failed" error="failed to destroy network for sandbox \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:40.861839 kubelet[2506]: E0129 11:53:40.861744 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Jan 29 11:53:40.861839 kubelet[2506]: E0129 11:53:40.861768 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf"} Jan 29 11:53:40.861839 kubelet[2506]: E0129 11:53:40.861804 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bdf8891f-6b1d-4211-be98-e56b0b0de0ad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:53:40.861839 kubelet[2506]: E0129 11:53:40.861826 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bdf8891f-6b1d-4211-be98-e56b0b0de0ad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f474569cb-m47z4" podUID="bdf8891f-6b1d-4211-be98-e56b0b0de0ad" Jan 29 11:53:40.883843 containerd[1454]: time="2025-01-29T11:53:40.883757717Z" level=error msg="StopPodSandbox for 
\"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\" failed" error="failed to destroy network for sandbox \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:40.884115 kubelet[2506]: E0129 11:53:40.884052 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Jan 29 11:53:40.884115 kubelet[2506]: E0129 11:53:40.884112 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6"} Jan 29 11:53:40.884198 kubelet[2506]: E0129 11:53:40.884145 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92902e69-b7b9-4835-bfaf-552ffb2affbd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:53:40.884198 kubelet[2506]: E0129 11:53:40.884168 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92902e69-b7b9-4835-bfaf-552ffb2affbd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hfp24" podUID="92902e69-b7b9-4835-bfaf-552ffb2affbd" Jan 29 11:53:43.822927 containerd[1454]: time="2025-01-29T11:53:43.822731336Z" level=info msg="StopPodSandbox for \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\"" Jan 29 11:53:43.873217 containerd[1454]: time="2025-01-29T11:53:43.873155275Z" level=error msg="StopPodSandbox for \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\" failed" error="failed to destroy network for sandbox \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:53:43.873475 kubelet[2506]: E0129 11:53:43.873425 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" Jan 29 11:53:43.873863 
kubelet[2506]: E0129 11:53:43.873490 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498"} Jan 29 11:53:43.873863 kubelet[2506]: E0129 11:53:43.873530 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fd1af382-4da8-46d6-b100-8da54f486a77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:53:43.873863 kubelet[2506]: E0129 11:53:43.873556 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fd1af382-4da8-46d6-b100-8da54f486a77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wck8n" podUID="fd1af382-4da8-46d6-b100-8da54f486a77" Jan 29 11:53:44.876232 kubelet[2506]: I0129 11:53:44.876185 2506 scope.go:117] "RemoveContainer" containerID="2e30c9a55dd5805821f16a342020f741bb2cedda60154c93a99851799562d8c6" Jan 29 11:53:44.876728 kubelet[2506]: E0129 11:53:44.876275 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:53:44.876728 kubelet[2506]: E0129 11:53:44.876376 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-b8wm5_calico-system(3c940188-3a91-4705-bd60-7146fc5afc94)\"" pod="calico-system/calico-node-b8wm5" podUID="3c940188-3a91-4705-bd60-7146fc5afc94" Jan 29 11:53:45.523825 systemd[1]: Started sshd@18-10.0.0.52:22-10.0.0.1:53942.service - OpenSSH per-connection server daemon (10.0.0.1:53942). Jan 29 11:53:45.562514 sshd[4326]: Accepted publickey for core from 10.0.0.1 port 53942 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:53:45.564899 sshd[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:53:45.570270 systemd-logind[1439]: New session 19 of user core. Jan 29 11:53:45.576942 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 11:53:45.690838 sshd[4326]: pam_unix(sshd:session): session closed for user core Jan 29 11:53:45.695441 systemd[1]: sshd@18-10.0.0.52:22-10.0.0.1:53942.service: Deactivated successfully. Jan 29 11:53:45.697636 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 11:53:45.698521 systemd-logind[1439]: Session 19 logged out. Waiting for processes to exit. Jan 29 11:53:45.699640 systemd-logind[1439]: Removed session 19. 
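By this point the same handful of sandboxes (csi-node-driver, two calico-apiserver pods, two coredns pods, calico-kube-controllers) have been failing teardown for well over a minute while calico-node waits out its 20s backoff. Rather than piecing that state together from the journal, one can enumerate sandboxes and containers over the same CRI socket; the sketch below makes the same assumptions as the StopPodSandbox example earlier (recent grpc-go, k8s.io/cri-api v1, default containerd socket).

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// Sketch: list pod sandboxes and containers via CRI, the inventory
// kubelet's sync loop works from, to confirm which sandboxes are
// stuck while calico-node crash-loops.
func main() {
	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	pods, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// CRI sandbox IDs are 64 hex chars, so truncating is safe here.
		fmt.Printf("sandbox %s %s/%s state=%s\n",
			p.Id[:12], p.Metadata.Namespace, p.Metadata.Name, p.State)
	}

	ctrs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range ctrs.Containers {
		fmt.Printf("container %s %s state=%s\n",
			c.Id[:12], c.Metadata.Name, c.State)
	}
}
```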
Jan 29 11:53:45.823353 containerd[1454]: time="2025-01-29T11:53:45.822430694Z" level=info msg="StopPodSandbox for \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\""
Jan 29 11:53:45.854081 containerd[1454]: time="2025-01-29T11:53:45.854021630Z" level=error msg="StopPodSandbox for \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\" failed" error="failed to destroy network for sandbox \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:53:45.854356 kubelet[2506]: E0129 11:53:45.854300 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62"
Jan 29 11:53:45.854531 kubelet[2506]: E0129 11:53:45.854365 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62"}
Jan 29 11:53:45.854531 kubelet[2506]: E0129 11:53:45.854406 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 29 11:53:45.854531 kubelet[2506]: E0129 11:53:45.854431 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65d8d589cb-k69s4" podUID="3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a"
Jan 29 11:53:50.703172 systemd[1]: Started sshd@19-10.0.0.52:22-10.0.0.1:53946.service - OpenSSH per-connection server daemon (10.0.0.1:53946).
Jan 29 11:53:50.776583 sshd[4363]: Accepted publickey for core from 10.0.0.1 port 53946 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0
Jan 29 11:53:50.778372 sshd[4363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:53:50.782436 systemd-logind[1439]: New session 20 of user core.
Jan 29 11:53:50.792901 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 11:53:50.949591 sshd[4363]: pam_unix(sshd:session): session closed for user core
Jan 29 11:53:50.957048 systemd[1]: sshd@19-10.0.0.52:22-10.0.0.1:53946.service: Deactivated successfully.
Jan 29 11:53:50.960077 systemd[1]: session-20.scope: Deactivated successfully.
Jan 29 11:53:50.963462 systemd-logind[1439]: Session 20 logged out. Waiting for processes to exit.
Jan 29 11:53:50.966388 systemd-logind[1439]: Removed session 20.
Jan 29 11:53:50.982579 containerd[1454]: time="2025-01-29T11:53:50.982520340Z" level=info msg="StopPodSandbox for \"cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692\""
Jan 29 11:53:50.993298 containerd[1454]: time="2025-01-29T11:53:50.993111766Z" level=info msg="Container to stop \"015b63112c8edad79069644a9e073aa861ee7aff102f9f3519343522f7d79e5d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:53:50.993298 containerd[1454]: time="2025-01-29T11:53:50.993141693Z" level=info msg="Container to stop \"2e30c9a55dd5805821f16a342020f741bb2cedda60154c93a99851799562d8c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:53:50.993298 containerd[1454]: time="2025-01-29T11:53:50.993153745Z" level=info msg="Container to stop \"e6e1e03e9a4a7141495825a1374d145053d7bbd6940cbe6901e0b309429beb1b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 11:53:51.001326 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692-shm.mount: Deactivated successfully.
Jan 29 11:53:51.007025 systemd[1]: cri-containerd-cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692.scope: Deactivated successfully.
Jan 29 11:53:51.053831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692-rootfs.mount: Deactivated successfully.
Jan 29 11:53:51.061962 containerd[1454]: time="2025-01-29T11:53:51.061570127Z" level=info msg="shim disconnected" id=cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692 namespace=k8s.io
Jan 29 11:53:51.061962 containerd[1454]: time="2025-01-29T11:53:51.061872614Z" level=warning msg="cleaning up after shim disconnected" id=cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692 namespace=k8s.io
Jan 29 11:53:51.061962 containerd[1454]: time="2025-01-29T11:53:51.061883665Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:53:51.103041 containerd[1454]: time="2025-01-29T11:53:51.102917521Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:53:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 11:53:51.107327 containerd[1454]: time="2025-01-29T11:53:51.107273123Z" level=info msg="TearDown network for sandbox \"cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692\" successfully"
Jan 29 11:53:51.107327 containerd[1454]: time="2025-01-29T11:53:51.107318670Z" level=info msg="StopPodSandbox for \"cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692\" returns successfully"
Jan 29 11:53:51.364524 kubelet[2506]: I0129 11:53:51.364355 2506 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q2zr\" (UniqueName: \"kubernetes.io/projected/3c940188-3a91-4705-bd60-7146fc5afc94-kube-api-access-8q2zr\") pod \"3c940188-3a91-4705-bd60-7146fc5afc94\" (UID: \"3c940188-3a91-4705-bd60-7146fc5afc94\") "
Jan 29 11:53:51.364524 kubelet[2506]: I0129 11:53:51.364422 2506 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-policysync\") pod \"3c940188-3a91-4705-bd60-7146fc5afc94\" (UID: \"3c940188-3a91-4705-bd60-7146fc5afc94\") "
Jan 29 11:53:51.364524 kubelet[2506]: I0129 11:53:51.364451 2506 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c940188-3a91-4705-bd60-7146fc5afc94-tigera-ca-bundle\") pod \"3c940188-3a91-4705-bd60-7146fc5afc94\" (UID: \"3c940188-3a91-4705-bd60-7146fc5afc94\") "
Jan 29 11:53:51.364524 kubelet[2506]: I0129 11:53:51.364466 2506 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-var-run-calico\") pod \"3c940188-3a91-4705-bd60-7146fc5afc94\" (UID: \"3c940188-3a91-4705-bd60-7146fc5afc94\") "
Jan 29 11:53:51.364524 kubelet[2506]: I0129 11:53:51.364488 2506 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-xtables-lock\") pod \"3c940188-3a91-4705-bd60-7146fc5afc94\" (UID: \"3c940188-3a91-4705-bd60-7146fc5afc94\") "
Jan 29 11:53:51.364524 kubelet[2506]: I0129 11:53:51.364494 2506 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-policysync" (OuterVolumeSpecName: "policysync") pod "3c940188-3a91-4705-bd60-7146fc5afc94" (UID: "3c940188-3a91-4705-bd60-7146fc5afc94"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:53:51.365219 kubelet[2506]: I0129 11:53:51.364509 2506 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-cni-net-dir\") pod \"3c940188-3a91-4705-bd60-7146fc5afc94\" (UID: \"3c940188-3a91-4705-bd60-7146fc5afc94\") "
Jan 29 11:53:51.365219 kubelet[2506]: I0129 11:53:51.364579 2506 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-flexvol-driver-host\") pod \"3c940188-3a91-4705-bd60-7146fc5afc94\" (UID: \"3c940188-3a91-4705-bd60-7146fc5afc94\") "
Jan 29 11:53:51.365219 kubelet[2506]: I0129 11:53:51.364608 2506 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-var-lib-calico\") pod \"3c940188-3a91-4705-bd60-7146fc5afc94\" (UID: \"3c940188-3a91-4705-bd60-7146fc5afc94\") "
Jan 29 11:53:51.365219 kubelet[2506]: I0129 11:53:51.364644 2506 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3c940188-3a91-4705-bd60-7146fc5afc94-node-certs\") pod \"3c940188-3a91-4705-bd60-7146fc5afc94\" (UID: \"3c940188-3a91-4705-bd60-7146fc5afc94\") "
Jan 29 11:53:51.365219 kubelet[2506]: I0129 11:53:51.364674 2506 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-lib-modules\") pod \"3c940188-3a91-4705-bd60-7146fc5afc94\" (UID: \"3c940188-3a91-4705-bd60-7146fc5afc94\") "
Jan 29 11:53:51.365219 kubelet[2506]: I0129 11:53:51.364696 2506 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-cni-log-dir\") pod \"3c940188-3a91-4705-bd60-7146fc5afc94\" (UID: \"3c940188-3a91-4705-bd60-7146fc5afc94\") "
Jan 29 11:53:51.365371 kubelet[2506]: I0129 11:53:51.364713 2506 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-cni-bin-dir\") pod \"3c940188-3a91-4705-bd60-7146fc5afc94\" (UID: \"3c940188-3a91-4705-bd60-7146fc5afc94\") "
Jan 29 11:53:51.365371 kubelet[2506]: I0129 11:53:51.364771 2506 reconciler_common.go:288] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-policysync\") on node \"localhost\" DevicePath \"\""
Jan 29 11:53:51.365371 kubelet[2506]: I0129 11:53:51.364546 2506 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "3c940188-3a91-4705-bd60-7146fc5afc94" (UID: "3c940188-3a91-4705-bd60-7146fc5afc94"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:53:51.365371 kubelet[2506]: I0129 11:53:51.364835 2506 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "3c940188-3a91-4705-bd60-7146fc5afc94" (UID: "3c940188-3a91-4705-bd60-7146fc5afc94"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:53:51.365371 kubelet[2506]: I0129 11:53:51.364876 2506 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "3c940188-3a91-4705-bd60-7146fc5afc94" (UID: "3c940188-3a91-4705-bd60-7146fc5afc94"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:53:51.365496 kubelet[2506]: I0129 11:53:51.364909 2506 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "3c940188-3a91-4705-bd60-7146fc5afc94" (UID: "3c940188-3a91-4705-bd60-7146fc5afc94"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:53:51.365496 kubelet[2506]: I0129 11:53:51.365056 2506 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "3c940188-3a91-4705-bd60-7146fc5afc94" (UID: "3c940188-3a91-4705-bd60-7146fc5afc94"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:53:51.365496 kubelet[2506]: I0129 11:53:51.365100 2506 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3c940188-3a91-4705-bd60-7146fc5afc94" (UID: "3c940188-3a91-4705-bd60-7146fc5afc94"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:53:51.365496 kubelet[2506]: I0129 11:53:51.365126 2506 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3c940188-3a91-4705-bd60-7146fc5afc94" (UID: "3c940188-3a91-4705-bd60-7146fc5afc94"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:53:51.365496 kubelet[2506]: I0129 11:53:51.365149 2506 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "3c940188-3a91-4705-bd60-7146fc5afc94" (UID: "3c940188-3a91-4705-bd60-7146fc5afc94"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 11:53:51.370261 systemd[1]: var-lib-kubelet-pods-3c940188\x2d3a91\x2d4705\x2dbd60\x2d7146fc5afc94-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8q2zr.mount: Deactivated successfully.
Jan 29 11:53:51.370370 systemd[1]: var-lib-kubelet-pods-3c940188\x2d3a91\x2d4705\x2dbd60\x2d7146fc5afc94-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully.
Jan 29 11:53:51.371542 kubelet[2506]: I0129 11:53:51.371507 2506 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c940188-3a91-4705-bd60-7146fc5afc94-node-certs" (OuterVolumeSpecName: "node-certs") pod "3c940188-3a91-4705-bd60-7146fc5afc94" (UID: "3c940188-3a91-4705-bd60-7146fc5afc94"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 11:53:51.371702 kubelet[2506]: I0129 11:53:51.371565 2506 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c940188-3a91-4705-bd60-7146fc5afc94-kube-api-access-8q2zr" (OuterVolumeSpecName: "kube-api-access-8q2zr") pod "3c940188-3a91-4705-bd60-7146fc5afc94" (UID: "3c940188-3a91-4705-bd60-7146fc5afc94"). InnerVolumeSpecName "kube-api-access-8q2zr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 11:53:51.372314 kubelet[2506]: I0129 11:53:51.372291 2506 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c940188-3a91-4705-bd60-7146fc5afc94-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "3c940188-3a91-4705-bd60-7146fc5afc94" (UID: "3c940188-3a91-4705-bd60-7146fc5afc94"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 11:53:51.374295 systemd[1]: var-lib-kubelet-pods-3c940188\x2d3a91\x2d4705\x2dbd60\x2d7146fc5afc94-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully.
Jan 29 11:53:51.376046 kubelet[2506]: I0129 11:53:51.376023 2506 scope.go:117] "RemoveContainer" containerID="2e30c9a55dd5805821f16a342020f741bb2cedda60154c93a99851799562d8c6"
Jan 29 11:53:51.378964 containerd[1454]: time="2025-01-29T11:53:51.378863609Z" level=info msg="RemoveContainer for \"2e30c9a55dd5805821f16a342020f741bb2cedda60154c93a99851799562d8c6\""
Jan 29 11:53:51.387905 systemd[1]: Removed slice kubepods-besteffort-pod3c940188_3a91_4705_bd60_7146fc5afc94.slice - libcontainer container kubepods-besteffort-pod3c940188_3a91_4705_bd60_7146fc5afc94.slice.
Jan 29 11:53:51.388283 systemd[1]: kubepods-besteffort-pod3c940188_3a91_4705_bd60_7146fc5afc94.slice: Consumed 1.005s CPU time.
Jan 29 11:53:51.465620 kubelet[2506]: I0129 11:53:51.465555 2506 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-lib-modules\") on node \"localhost\" DevicePath \"\""
Jan 29 11:53:51.465620 kubelet[2506]: I0129 11:53:51.465594 2506 reconciler_common.go:288] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-cni-log-dir\") on node \"localhost\" DevicePath \"\""
Jan 29 11:53:51.465620 kubelet[2506]: I0129 11:53:51.465619 2506 reconciler_common.go:288] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-cni-bin-dir\") on node \"localhost\" DevicePath \"\""
Jan 29 11:53:51.465620 kubelet[2506]: I0129 11:53:51.465636 2506 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8q2zr\" (UniqueName: \"kubernetes.io/projected/3c940188-3a91-4705-bd60-7146fc5afc94-kube-api-access-8q2zr\") on node \"localhost\" DevicePath \"\""
Jan 29 11:53:51.465928 kubelet[2506]: I0129 11:53:51.465656 2506 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c940188-3a91-4705-bd60-7146fc5afc94-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\""
Jan 29 11:53:51.465928 kubelet[2506]: I0129 11:53:51.465666 2506 reconciler_common.go:288] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-var-run-calico\") on node \"localhost\" DevicePath \"\""
Jan 29 11:53:51.465928 kubelet[2506]: I0129 11:53:51.465685 2506 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jan 29 11:53:51.465928 kubelet[2506]: I0129 11:53:51.465700 2506 reconciler_common.go:288] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-cni-net-dir\") on node \"localhost\" DevicePath \"\""
Jan 29 11:53:51.465928 kubelet[2506]: I0129 11:53:51.465709 2506 reconciler_common.go:288] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-flexvol-driver-host\") on node \"localhost\" DevicePath \"\""
Jan 29 11:53:51.465928 kubelet[2506]: I0129 11:53:51.465719 2506 reconciler_common.go:288] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3c940188-3a91-4705-bd60-7146fc5afc94-var-lib-calico\") on node \"localhost\" DevicePath \"\""
Jan 29 11:53:51.465928 kubelet[2506]: I0129 11:53:51.465730 2506 reconciler_common.go:288] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3c940188-3a91-4705-bd60-7146fc5afc94-node-certs\") on node \"localhost\" DevicePath \"\""
Jan 29 11:53:51.472494 containerd[1454]: time="2025-01-29T11:53:51.472451669Z" level=info msg="RemoveContainer for \"2e30c9a55dd5805821f16a342020f741bb2cedda60154c93a99851799562d8c6\" returns successfully"
Jan 29 11:53:51.472819 kubelet[2506]: I0129 11:53:51.472766 2506 scope.go:117] "RemoveContainer" containerID="e6e1e03e9a4a7141495825a1374d145053d7bbd6940cbe6901e0b309429beb1b"
Jan 29 11:53:51.473962 containerd[1454]: time="2025-01-29T11:53:51.473928182Z" level=info msg="RemoveContainer for \"e6e1e03e9a4a7141495825a1374d145053d7bbd6940cbe6901e0b309429beb1b\""
Jan 29 11:53:51.540165 containerd[1454]: time="2025-01-29T11:53:51.540077558Z" level=info msg="RemoveContainer for \"e6e1e03e9a4a7141495825a1374d145053d7bbd6940cbe6901e0b309429beb1b\" returns successfully"
Jan 29 11:53:51.540614 kubelet[2506]: I0129 11:53:51.540561 2506 scope.go:117] "RemoveContainer" containerID="015b63112c8edad79069644a9e073aa861ee7aff102f9f3519343522f7d79e5d"
Jan 29 11:53:51.542278 containerd[1454]: time="2025-01-29T11:53:51.542246471Z" level=info msg="RemoveContainer for \"015b63112c8edad79069644a9e073aa861ee7aff102f9f3519343522f7d79e5d\""
Jan 29 11:53:51.617029 containerd[1454]: time="2025-01-29T11:53:51.616600613Z" level=info msg="RemoveContainer for \"015b63112c8edad79069644a9e073aa861ee7aff102f9f3519343522f7d79e5d\" returns successfully"
Jan 29 11:53:51.708185 kubelet[2506]: E0129 11:53:51.708129 2506 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3c940188-3a91-4705-bd60-7146fc5afc94" containerName="flexvol-driver"
Jan 29 11:53:51.709086 kubelet[2506]: E0129 11:53:51.708396 2506 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3c940188-3a91-4705-bd60-7146fc5afc94" containerName="install-cni"
Jan 29 11:53:51.709086 kubelet[2506]: E0129 11:53:51.708411 2506 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3c940188-3a91-4705-bd60-7146fc5afc94" containerName="calico-node"
Jan 29 11:53:51.709086 kubelet[2506]: E0129 11:53:51.708423 2506 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3c940188-3a91-4705-bd60-7146fc5afc94" containerName="calico-node"
Jan 29 11:53:51.709086 kubelet[2506]: I0129 11:53:51.708476 2506 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c940188-3a91-4705-bd60-7146fc5afc94" containerName="calico-node"
Jan 29 11:53:51.709086 kubelet[2506]: I0129 11:53:51.708484 2506 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c940188-3a91-4705-bd60-7146fc5afc94" containerName="calico-node"
Jan 29 11:53:51.709086 kubelet[2506]: E0129 11:53:51.708515 2506 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3c940188-3a91-4705-bd60-7146fc5afc94" containerName="calico-node"
Jan 29 11:53:51.709086 kubelet[2506]: I0129 11:53:51.708540 2506 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c940188-3a91-4705-bd60-7146fc5afc94" containerName="calico-node"
Jan 29 11:53:51.721597 systemd[1]: Created slice kubepods-besteffort-podd9f4e85c_a8b5_4c2d_bba9_7e34ff26a31f.slice - libcontainer container kubepods-besteffort-podd9f4e85c_a8b5_4c2d_bba9_7e34ff26a31f.slice.
Jan 29 11:53:51.823804 containerd[1454]: time="2025-01-29T11:53:51.823482601Z" level=info msg="StopPodSandbox for \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\""
Jan 29 11:53:51.823804 containerd[1454]: time="2025-01-29T11:53:51.823513239Z" level=info msg="StopPodSandbox for \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\""
Jan 29 11:53:51.826294 kubelet[2506]: I0129 11:53:51.826245 2506 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c940188-3a91-4705-bd60-7146fc5afc94" path="/var/lib/kubelet/pods/3c940188-3a91-4705-bd60-7146fc5afc94/volumes"
Jan 29 11:53:51.870971 kubelet[2506]: I0129 11:53:51.869452 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f-policysync\") pod \"calico-node-nbcp9\" (UID: \"d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f\") " pod="calico-system/calico-node-nbcp9"
Jan 29 11:53:51.870971 kubelet[2506]: I0129 11:53:51.869536 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f-var-run-calico\") pod \"calico-node-nbcp9\" (UID: \"d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f\") " pod="calico-system/calico-node-nbcp9"
Jan 29 11:53:51.870971 kubelet[2506]: I0129 11:53:51.869583 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f-cni-bin-dir\") pod \"calico-node-nbcp9\" (UID: \"d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f\") " pod="calico-system/calico-node-nbcp9"
Jan 29 11:53:51.870971 kubelet[2506]: I0129 11:53:51.869638 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f-lib-modules\") pod \"calico-node-nbcp9\" (UID: \"d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f\") " pod="calico-system/calico-node-nbcp9"
Jan 29 11:53:51.870971 kubelet[2506]: I0129 11:53:51.869698 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f-xtables-lock\") pod \"calico-node-nbcp9\" (UID: \"d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f\") " pod="calico-system/calico-node-nbcp9"
Jan 29 11:53:51.871621 kubelet[2506]: I0129 11:53:51.869766 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f-cni-log-dir\") pod \"calico-node-nbcp9\" (UID: \"d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f\") " pod="calico-system/calico-node-nbcp9"
Jan 29 11:53:51.871621 kubelet[2506]: I0129 11:53:51.869901 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f-var-lib-calico\") pod \"calico-node-nbcp9\" (UID: \"d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f\") " pod="calico-system/calico-node-nbcp9"
Jan 29 11:53:51.871621 kubelet[2506]: I0129 11:53:51.869987 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f-tigera-ca-bundle\") pod \"calico-node-nbcp9\" (UID: \"d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f\") " pod="calico-system/calico-node-nbcp9"
Jan 29 11:53:51.871621 kubelet[2506]: I0129 11:53:51.870048 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f-node-certs\") pod \"calico-node-nbcp9\" (UID: \"d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f\") " pod="calico-system/calico-node-nbcp9"
Jan 29 11:53:51.871621 kubelet[2506]: I0129 11:53:51.870134 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f-flexvol-driver-host\") pod \"calico-node-nbcp9\" (UID: \"d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f\") " pod="calico-system/calico-node-nbcp9"
Jan 29 11:53:51.871981 kubelet[2506]: I0129 11:53:51.870197 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f-cni-net-dir\") pod \"calico-node-nbcp9\" (UID: \"d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f\") " pod="calico-system/calico-node-nbcp9"
Jan 29 11:53:51.871981 kubelet[2506]: I0129 11:53:51.870244 2506 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddx2j\" (UniqueName: \"kubernetes.io/projected/d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f-kube-api-access-ddx2j\") pod \"calico-node-nbcp9\" (UID: \"d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f\") " pod="calico-system/calico-node-nbcp9"
Jan 29 11:53:51.879980 containerd[1454]: time="2025-01-29T11:53:51.879900207Z" level=error msg="StopPodSandbox for \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\" failed" error="failed to destroy network for sandbox \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:53:51.880485 kubelet[2506]: E0129 11:53:51.880365 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b"
Jan 29 11:53:51.880581 kubelet[2506]: E0129 11:53:51.880500 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b"}
Jan 29 11:53:51.880650 kubelet[2506]: E0129 11:53:51.880589 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b6dfda52-b36b-4860-a295-437d50d36570\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 29 11:53:51.880914 kubelet[2506]: E0129 11:53:51.880670 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b6dfda52-b36b-4860-a295-437d50d36570\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f474569cb-q4cqv" podUID="b6dfda52-b36b-4860-a295-437d50d36570"
Jan 29 11:53:51.890351 containerd[1454]: time="2025-01-29T11:53:51.890279890Z" level=error msg="StopPodSandbox for \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\" failed" error="failed to destroy network for sandbox \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:53:51.890691 kubelet[2506]: E0129 11:53:51.890610 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf"
Jan 29 11:53:51.890743 kubelet[2506]: E0129 11:53:51.890697 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf"}
Jan 29 11:53:51.890849 kubelet[2506]: E0129 11:53:51.890760 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bdf8891f-6b1d-4211-be98-e56b0b0de0ad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 29 11:53:51.890849 kubelet[2506]: E0129 11:53:51.890823 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bdf8891f-6b1d-4211-be98-e56b0b0de0ad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f474569cb-m47z4" podUID="bdf8891f-6b1d-4211-be98-e56b0b0de0ad"
Jan 29 11:53:52.332634 kubelet[2506]: E0129 11:53:52.332589 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:53:52.333840 containerd[1454]: time="2025-01-29T11:53:52.333264456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nbcp9,Uid:d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f,Namespace:calico-system,Attempt:0,}"
Jan 29 11:53:52.365946 containerd[1454]: time="2025-01-29T11:53:52.365191973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:53:52.365946 containerd[1454]: time="2025-01-29T11:53:52.365904489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:53:52.365946 containerd[1454]: time="2025-01-29T11:53:52.365920781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:53:52.366175 containerd[1454]: time="2025-01-29T11:53:52.366009519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:53:52.392959 systemd[1]: Started cri-containerd-a5e65a9fb629bb061cae60937b65699208ebe0392799842008c014274d07c19d.scope - libcontainer container a5e65a9fb629bb061cae60937b65699208ebe0392799842008c014274d07c19d.
Jan 29 11:53:52.426080 containerd[1454]: time="2025-01-29T11:53:52.426030517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nbcp9,Uid:d9f4e85c-a8b5-4c2d-bba9-7e34ff26a31f,Namespace:calico-system,Attempt:0,} returns sandbox id \"a5e65a9fb629bb061cae60937b65699208ebe0392799842008c014274d07c19d\""
Jan 29 11:53:52.426834 kubelet[2506]: E0129 11:53:52.426809 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:53:52.428847 containerd[1454]: time="2025-01-29T11:53:52.428819911Z" level=info msg="CreateContainer within sandbox \"a5e65a9fb629bb061cae60937b65699208ebe0392799842008c014274d07c19d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 29 11:53:52.447960 containerd[1454]: time="2025-01-29T11:53:52.447904259Z" level=info msg="CreateContainer within sandbox \"a5e65a9fb629bb061cae60937b65699208ebe0392799842008c014274d07c19d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"41ea2ef2e2e0077993638539524a83cdd3fedebf4b632dbd0bf2087b67830595\""
Jan 29 11:53:52.448800 containerd[1454]: time="2025-01-29T11:53:52.448737727Z" level=info msg="StartContainer for \"41ea2ef2e2e0077993638539524a83cdd3fedebf4b632dbd0bf2087b67830595\""
Jan 29 11:53:52.484121 systemd[1]: Started cri-containerd-41ea2ef2e2e0077993638539524a83cdd3fedebf4b632dbd0bf2087b67830595.scope - libcontainer container 41ea2ef2e2e0077993638539524a83cdd3fedebf4b632dbd0bf2087b67830595.
Jan 29 11:53:52.523191 containerd[1454]: time="2025-01-29T11:53:52.523124131Z" level=info msg="StartContainer for \"41ea2ef2e2e0077993638539524a83cdd3fedebf4b632dbd0bf2087b67830595\" returns successfully"
Jan 29 11:53:52.565242 systemd[1]: cri-containerd-41ea2ef2e2e0077993638539524a83cdd3fedebf4b632dbd0bf2087b67830595.scope: Deactivated successfully.
Jan 29 11:53:52.605448 containerd[1454]: time="2025-01-29T11:53:52.605286802Z" level=info msg="shim disconnected" id=41ea2ef2e2e0077993638539524a83cdd3fedebf4b632dbd0bf2087b67830595 namespace=k8s.io
Jan 29 11:53:52.605448 containerd[1454]: time="2025-01-29T11:53:52.605351896Z" level=warning msg="cleaning up after shim disconnected" id=41ea2ef2e2e0077993638539524a83cdd3fedebf4b632dbd0bf2087b67830595 namespace=k8s.io
Jan 29 11:53:52.605448 containerd[1454]: time="2025-01-29T11:53:52.605366643Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:53:52.822356 containerd[1454]: time="2025-01-29T11:53:52.822245764Z" level=info msg="StopPodSandbox for \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\""
Jan 29 11:53:52.852507 containerd[1454]: time="2025-01-29T11:53:52.852419478Z" level=error msg="StopPodSandbox for \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\" failed" error="failed to destroy network for sandbox \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:53:52.852819 kubelet[2506]: E0129 11:53:52.852713 2506 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6"
Jan 29 11:53:52.852962 kubelet[2506]: E0129 11:53:52.852828 2506 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6"}
Jan 29 11:53:52.852962 kubelet[2506]: E0129 11:53:52.852882 2506 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92902e69-b7b9-4835-bfaf-552ffb2affbd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 29 11:53:52.852962 kubelet[2506]: E0129 11:53:52.852920 2506 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92902e69-b7b9-4835-bfaf-552ffb2affbd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hfp24" podUID="92902e69-b7b9-4835-bfaf-552ffb2affbd"
Jan 29 11:53:53.393157 kubelet[2506]: E0129 11:53:53.393115 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:53:53.396224 containerd[1454]: time="2025-01-29T11:53:53.396182013Z" level=info msg="CreateContainer within sandbox \"a5e65a9fb629bb061cae60937b65699208ebe0392799842008c014274d07c19d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 29 11:53:53.562141 containerd[1454]: time="2025-01-29T11:53:53.562067137Z" level=info msg="CreateContainer within sandbox \"a5e65a9fb629bb061cae60937b65699208ebe0392799842008c014274d07c19d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b7079f3848171b851f80775d11c6141841db7f5f5ed6057fa4a173ac12ed75d9\""
Jan 29 11:53:53.563236 containerd[1454]: time="2025-01-29T11:53:53.563203822Z" level=info msg="StartContainer for \"b7079f3848171b851f80775d11c6141841db7f5f5ed6057fa4a173ac12ed75d9\""
Jan 29 11:53:53.597959 systemd[1]: Started cri-containerd-b7079f3848171b851f80775d11c6141841db7f5f5ed6057fa4a173ac12ed75d9.scope - libcontainer container b7079f3848171b851f80775d11c6141841db7f5f5ed6057fa4a173ac12ed75d9.
Jan 29 11:53:53.636192 containerd[1454]: time="2025-01-29T11:53:53.636140209Z" level=info msg="StartContainer for \"b7079f3848171b851f80775d11c6141841db7f5f5ed6057fa4a173ac12ed75d9\" returns successfully"
Jan 29 11:53:54.226050 systemd[1]: cri-containerd-b7079f3848171b851f80775d11c6141841db7f5f5ed6057fa4a173ac12ed75d9.scope: Deactivated successfully.
Jan 29 11:53:54.250660 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7079f3848171b851f80775d11c6141841db7f5f5ed6057fa4a173ac12ed75d9-rootfs.mount: Deactivated successfully.
Jan 29 11:53:54.262002 containerd[1454]: time="2025-01-29T11:53:54.261926217Z" level=info msg="shim disconnected" id=b7079f3848171b851f80775d11c6141841db7f5f5ed6057fa4a173ac12ed75d9 namespace=k8s.io
Jan 29 11:53:54.262002 containerd[1454]: time="2025-01-29T11:53:54.261983596Z" level=warning msg="cleaning up after shim disconnected" id=b7079f3848171b851f80775d11c6141841db7f5f5ed6057fa4a173ac12ed75d9 namespace=k8s.io
Jan 29 11:53:54.262002 containerd[1454]: time="2025-01-29T11:53:54.261993655Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:53:54.396806 kubelet[2506]: E0129 11:53:54.396723 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:53:54.406977 containerd[1454]: time="2025-01-29T11:53:54.406778220Z" level=info msg="CreateContainer within sandbox \"a5e65a9fb629bb061cae60937b65699208ebe0392799842008c014274d07c19d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 29 11:53:54.437096 containerd[1454]: time="2025-01-29T11:53:54.436284529Z" level=info msg="CreateContainer within sandbox \"a5e65a9fb629bb061cae60937b65699208ebe0392799842008c014274d07c19d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"60b41d89e54ef53b25c825f06db5fa5ee399afdfd59a2438f5a46723b4e3a773\""
Jan 29 11:53:54.437314 containerd[1454]: time="2025-01-29T11:53:54.437202897Z" level=info msg="StartContainer for \"60b41d89e54ef53b25c825f06db5fa5ee399afdfd59a2438f5a46723b4e3a773\""
Jan 29 11:53:54.475279 systemd[1]: Started cri-containerd-60b41d89e54ef53b25c825f06db5fa5ee399afdfd59a2438f5a46723b4e3a773.scope - libcontainer container 60b41d89e54ef53b25c825f06db5fa5ee399afdfd59a2438f5a46723b4e3a773.
Jan 29 11:53:54.517390 containerd[1454]: time="2025-01-29T11:53:54.517253363Z" level=info msg="StartContainer for \"60b41d89e54ef53b25c825f06db5fa5ee399afdfd59a2438f5a46723b4e3a773\" returns successfully"
Jan 29 11:53:55.401003 kubelet[2506]: E0129 11:53:55.400968 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:53:55.419459 kubelet[2506]: I0129 11:53:55.417994 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nbcp9" podStartSLOduration=4.417961613 podStartE2EDuration="4.417961613s" podCreationTimestamp="2025-01-29 11:53:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:53:55.416701045 +0000 UTC m=+77.690041304" watchObservedRunningTime="2025-01-29 11:53:55.417961613 +0000 UTC m=+77.691301842"
Jan 29 11:53:55.827150 containerd[1454]: time="2025-01-29T11:53:55.827007709Z" level=info msg="StopPodSandbox for \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\""
Jan 29 11:53:55.931368 containerd[1454]: 2025-01-29 11:53:55.886 [INFO][4752] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb"
Jan 29 11:53:55.931368 containerd[1454]: 2025-01-29 11:53:55.887 [INFO][4752] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" iface="eth0" netns="/var/run/netns/cni-d6d5c346-81b3-eff5-6cf3-0befbddbdb28"
Jan 29 11:53:55.931368 containerd[1454]: 2025-01-29 11:53:55.887 [INFO][4752] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" iface="eth0" netns="/var/run/netns/cni-d6d5c346-81b3-eff5-6cf3-0befbddbdb28"
Jan 29 11:53:55.931368 containerd[1454]: 2025-01-29 11:53:55.887 [INFO][4752] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" iface="eth0" netns="/var/run/netns/cni-d6d5c346-81b3-eff5-6cf3-0befbddbdb28"
Jan 29 11:53:55.931368 containerd[1454]: 2025-01-29 11:53:55.887 [INFO][4752] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb"
Jan 29 11:53:55.931368 containerd[1454]: 2025-01-29 11:53:55.887 [INFO][4752] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb"
Jan 29 11:53:55.931368 containerd[1454]: 2025-01-29 11:53:55.915 [INFO][4762] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" HandleID="k8s-pod-network.35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" Workload="localhost-k8s-csi--node--driver--m4sk6-eth0"
Jan 29 11:53:55.931368 containerd[1454]: 2025-01-29 11:53:55.915 [INFO][4762] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 11:53:55.931368 containerd[1454]: 2025-01-29 11:53:55.915 [INFO][4762] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 11:53:55.931368 containerd[1454]: 2025-01-29 11:53:55.922 [WARNING][4762] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" HandleID="k8s-pod-network.35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" Workload="localhost-k8s-csi--node--driver--m4sk6-eth0"
Jan 29 11:53:55.931368 containerd[1454]: 2025-01-29 11:53:55.922 [INFO][4762] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" HandleID="k8s-pod-network.35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" Workload="localhost-k8s-csi--node--driver--m4sk6-eth0"
Jan 29 11:53:55.931368 containerd[1454]: 2025-01-29 11:53:55.925 [INFO][4762] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 11:53:55.931368 containerd[1454]: 2025-01-29 11:53:55.928 [INFO][4752] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb"
Jan 29 11:53:55.931852 containerd[1454]: time="2025-01-29T11:53:55.931433611Z" level=info msg="TearDown network for sandbox \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\" successfully"
Jan 29 11:53:55.931852 containerd[1454]: time="2025-01-29T11:53:55.931472245Z" level=info msg="StopPodSandbox for \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\" returns successfully"
Jan 29 11:53:55.932535 containerd[1454]: time="2025-01-29T11:53:55.932495382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m4sk6,Uid:37021594-588c-4d5f-936f-12a90ea44463,Namespace:calico-system,Attempt:1,}"
Jan 29 11:53:55.934716 systemd[1]: run-netns-cni\x2dd6d5c346\x2d81b3\x2deff5\x2d6cf3\x2d0befbddbdb28.mount: Deactivated successfully.
Jan 29 11:53:55.967103 systemd[1]: Started sshd@20-10.0.0.52:22-10.0.0.1:35320.service - OpenSSH per-connection server daemon (10.0.0.1:35320).
Jan 29 11:53:56.029227 sshd[4783]: Accepted publickey for core from 10.0.0.1 port 35320 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0
Jan 29 11:53:56.033797 sshd[4783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:53:56.046838 systemd-logind[1439]: New session 21 of user core.
Jan 29 11:53:56.059375 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 29 11:53:56.087596 systemd-networkd[1384]: cali3fdfabe24d4: Link UP
Jan 29 11:53:56.088028 systemd-networkd[1384]: cali3fdfabe24d4: Gained carrier
Jan 29 11:53:56.106548 containerd[1454]: 2025-01-29 11:53:55.973 [INFO][4771] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 29 11:53:56.106548 containerd[1454]: 2025-01-29 11:53:55.988 [INFO][4771] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--m4sk6-eth0 csi-node-driver- calico-system 37021594-588c-4d5f-936f-12a90ea44463 1095 0 2025-01-29 11:52:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-m4sk6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3fdfabe24d4 [] []}} ContainerID="3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593" Namespace="calico-system" Pod="csi-node-driver-m4sk6" WorkloadEndpoint="localhost-k8s-csi--node--driver--m4sk6-"
Jan 29 11:53:56.106548 containerd[1454]: 2025-01-29 11:53:55.988 [INFO][4771] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593" Namespace="calico-system" Pod="csi-node-driver-m4sk6" WorkloadEndpoint="localhost-k8s-csi--node--driver--m4sk6-eth0"
Jan 29 11:53:56.106548 containerd[1454]: 2025-01-29 11:53:56.032 [INFO][4785] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593" HandleID="k8s-pod-network.3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593" Workload="localhost-k8s-csi--node--driver--m4sk6-eth0"
Jan 29 11:53:56.106548 containerd[1454]: 2025-01-29 11:53:56.044 [INFO][4785] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593" HandleID="k8s-pod-network.3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593" Workload="localhost-k8s-csi--node--driver--m4sk6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f77c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-m4sk6", "timestamp":"2025-01-29 11:53:56.03246833 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 11:53:56.106548 containerd[1454]: 2025-01-29 11:53:56.044 [INFO][4785] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 11:53:56.106548 containerd[1454]: 2025-01-29 11:53:56.044 [INFO][4785] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 11:53:56.106548 containerd[1454]: 2025-01-29 11:53:56.044 [INFO][4785] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 29 11:53:56.106548 containerd[1454]: 2025-01-29 11:53:56.046 [INFO][4785] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593" host="localhost"
Jan 29 11:53:56.106548 containerd[1454]: 2025-01-29 11:53:56.050 [INFO][4785] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Jan 29 11:53:56.106548 containerd[1454]: 2025-01-29 11:53:56.055 [INFO][4785] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 29 11:53:56.106548 containerd[1454]: 2025-01-29 11:53:56.057 [INFO][4785] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 29 11:53:56.106548 containerd[1454]: 2025-01-29 11:53:56.059 [INFO][4785] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 29 11:53:56.106548 containerd[1454]: 2025-01-29 11:53:56.059 [INFO][4785] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593" host="localhost"
Jan 29 11:53:56.106548 containerd[1454]: 2025-01-29 11:53:56.062 [INFO][4785] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593
Jan 29 11:53:56.106548 containerd[1454]: 2025-01-29 11:53:56.066 [INFO][4785] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593" host="localhost"
Jan 29 11:53:56.106548 containerd[1454]: 2025-01-29 11:53:56.071 [INFO][4785] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593" host="localhost"
Jan 29 11:53:56.106548 containerd[1454]: 2025-01-29 11:53:56.071 [INFO][4785] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593" host="localhost"
Jan 29 11:53:56.106548 containerd[1454]: 2025-01-29 11:53:56.071 [INFO][4785] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 11:53:56.106548 containerd[1454]: 2025-01-29 11:53:56.071 [INFO][4785] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593" HandleID="k8s-pod-network.3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593" Workload="localhost-k8s-csi--node--driver--m4sk6-eth0"
Jan 29 11:53:56.108490 containerd[1454]: 2025-01-29 11:53:56.074 [INFO][4771] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593" Namespace="calico-system" Pod="csi-node-driver-m4sk6" WorkloadEndpoint="localhost-k8s-csi--node--driver--m4sk6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--m4sk6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"37021594-588c-4d5f-936f-12a90ea44463", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 52, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-m4sk6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3fdfabe24d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 11:53:56.108490 containerd[1454]: 2025-01-29 11:53:56.074 [INFO][4771] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593" Namespace="calico-system" Pod="csi-node-driver-m4sk6" WorkloadEndpoint="localhost-k8s-csi--node--driver--m4sk6-eth0"
Jan 29 11:53:56.108490 containerd[1454]: 2025-01-29 11:53:56.074 [INFO][4771] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3fdfabe24d4 ContainerID="3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593" Namespace="calico-system" Pod="csi-node-driver-m4sk6" WorkloadEndpoint="localhost-k8s-csi--node--driver--m4sk6-eth0"
Jan 29 11:53:56.108490 containerd[1454]: 2025-01-29 11:53:56.090 [INFO][4771] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593" Namespace="calico-system" Pod="csi-node-driver-m4sk6" WorkloadEndpoint="localhost-k8s-csi--node--driver--m4sk6-eth0"
Jan 29 11:53:56.108490 containerd[1454]: 2025-01-29 11:53:56.091 [INFO][4771] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593" Namespace="calico-system" Pod="csi-node-driver-m4sk6" WorkloadEndpoint="localhost-k8s-csi--node--driver--m4sk6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--m4sk6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"37021594-588c-4d5f-936f-12a90ea44463", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 52, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593", Pod:"csi-node-driver-m4sk6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3fdfabe24d4", MAC:"36:fd:7a:d6:ef:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 11:53:56.108490 containerd[1454]: 2025-01-29 11:53:56.101 [INFO][4771] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593" Namespace="calico-system" Pod="csi-node-driver-m4sk6" WorkloadEndpoint="localhost-k8s-csi--node--driver--m4sk6-eth0"
Jan 29 11:53:56.145746 containerd[1454]: time="2025-01-29T11:53:56.145003495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:53:56.145746 containerd[1454]: time="2025-01-29T11:53:56.145063550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:53:56.145746 containerd[1454]: time="2025-01-29T11:53:56.145073499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:53:56.145746 containerd[1454]: time="2025-01-29T11:53:56.145150044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:53:56.173107 systemd[1]: Started cri-containerd-3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593.scope - libcontainer container 3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593.
Jan 29 11:53:56.187831 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:53:56.201728 containerd[1454]: time="2025-01-29T11:53:56.201662684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m4sk6,Uid:37021594-588c-4d5f-936f-12a90ea44463,Namespace:calico-system,Attempt:1,} returns sandbox id \"3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593\"" Jan 29 11:53:56.204103 containerd[1454]: time="2025-01-29T11:53:56.204058991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 11:53:56.219116 sshd[4783]: pam_unix(sshd:session): session closed for user core Jan 29 11:53:56.224165 systemd[1]: sshd@20-10.0.0.52:22-10.0.0.1:35320.service: Deactivated successfully. Jan 29 11:53:56.226529 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 11:53:56.227293 systemd-logind[1439]: Session 21 logged out. Waiting for processes to exit. Jan 29 11:53:56.228440 systemd-logind[1439]: Removed session 21. Jan 29 11:53:56.418173 kubelet[2506]: E0129 11:53:56.418139 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:53:56.822479 kubelet[2506]: E0129 11:53:56.822266 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:53:56.822927 containerd[1454]: time="2025-01-29T11:53:56.822629103Z" level=info msg="StopPodSandbox for \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\"" Jan 29 11:53:56.955740 containerd[1454]: 2025-01-29 11:53:56.885 [INFO][4900] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" Jan 29 11:53:56.955740 containerd[1454]: 2025-01-29 11:53:56.885 [INFO][4900] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" iface="eth0" netns="/var/run/netns/cni-ddceea1d-1656-eeea-70b3-0ea72e656401" Jan 29 11:53:56.955740 containerd[1454]: 2025-01-29 11:53:56.885 [INFO][4900] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" iface="eth0" netns="/var/run/netns/cni-ddceea1d-1656-eeea-70b3-0ea72e656401" Jan 29 11:53:56.955740 containerd[1454]: 2025-01-29 11:53:56.887 [INFO][4900] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" iface="eth0" netns="/var/run/netns/cni-ddceea1d-1656-eeea-70b3-0ea72e656401" Jan 29 11:53:56.955740 containerd[1454]: 2025-01-29 11:53:56.887 [INFO][4900] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" Jan 29 11:53:56.955740 containerd[1454]: 2025-01-29 11:53:56.887 [INFO][4900] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" Jan 29 11:53:56.955740 containerd[1454]: 2025-01-29 11:53:56.925 [INFO][4955] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" HandleID="k8s-pod-network.972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" Workload="localhost-k8s-coredns--6f6b679f8f--wck8n-eth0" Jan 29 11:53:56.955740 containerd[1454]: 2025-01-29 11:53:56.926 [INFO][4955] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:53:56.955740 containerd[1454]: 2025-01-29 11:53:56.926 [INFO][4955] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:53:56.955740 containerd[1454]: 2025-01-29 11:53:56.938 [WARNING][4955] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" HandleID="k8s-pod-network.972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" Workload="localhost-k8s-coredns--6f6b679f8f--wck8n-eth0" Jan 29 11:53:56.955740 containerd[1454]: 2025-01-29 11:53:56.938 [INFO][4955] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" HandleID="k8s-pod-network.972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" Workload="localhost-k8s-coredns--6f6b679f8f--wck8n-eth0" Jan 29 11:53:56.955740 containerd[1454]: 2025-01-29 11:53:56.941 [INFO][4955] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:53:56.955740 containerd[1454]: 2025-01-29 11:53:56.949 [INFO][4900] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" Jan 29 11:53:56.975048 containerd[1454]: time="2025-01-29T11:53:56.974843667Z" level=info msg="TearDown network for sandbox \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\" successfully" Jan 29 11:53:56.975048 containerd[1454]: time="2025-01-29T11:53:56.974903551Z" level=info msg="StopPodSandbox for \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\" returns successfully" Jan 29 11:53:56.975380 kubelet[2506]: E0129 11:53:56.975330 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:53:56.976223 containerd[1454]: time="2025-01-29T11:53:56.976166984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wck8n,Uid:fd1af382-4da8-46d6-b100-8da54f486a77,Namespace:kube-system,Attempt:1,}" Jan 29 11:53:56.979173 systemd[1]: run-netns-cni\x2dddceea1d\x2d1656\x2deeea\x2d70b3\x2d0ea72e656401.mount: Deactivated successfully. 
Jan 29 11:53:57.177830 kernel: bpftool[5070]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 11:53:57.222487 systemd-networkd[1384]: calie4c59bd09cc: Link UP Jan 29 11:53:57.223718 systemd-networkd[1384]: calie4c59bd09cc: Gained carrier Jan 29 11:53:57.239289 containerd[1454]: 2025-01-29 11:53:57.079 [INFO][5014] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:53:57.239289 containerd[1454]: 2025-01-29 11:53:57.099 [INFO][5014] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--wck8n-eth0 coredns-6f6b679f8f- kube-system fd1af382-4da8-46d6-b100-8da54f486a77 1110 0 2025-01-29 11:52:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-wck8n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie4c59bd09cc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8" Namespace="kube-system" Pod="coredns-6f6b679f8f-wck8n" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wck8n-" Jan 29 11:53:57.239289 containerd[1454]: 2025-01-29 11:53:57.100 [INFO][5014] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8" Namespace="kube-system" Pod="coredns-6f6b679f8f-wck8n" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wck8n-eth0" Jan 29 11:53:57.239289 containerd[1454]: 2025-01-29 11:53:57.162 [INFO][5051] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8" HandleID="k8s-pod-network.51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8" Workload="localhost-k8s-coredns--6f6b679f8f--wck8n-eth0" Jan 29 11:53:57.239289 containerd[1454]: 2025-01-29 11:53:57.172 [INFO][5051] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8" HandleID="k8s-pod-network.51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8" Workload="localhost-k8s-coredns--6f6b679f8f--wck8n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005c9f10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-wck8n", "timestamp":"2025-01-29 11:53:57.162180884 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:53:57.239289 containerd[1454]: 2025-01-29 11:53:57.172 [INFO][5051] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:53:57.239289 containerd[1454]: 2025-01-29 11:53:57.172 [INFO][5051] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:53:57.239289 containerd[1454]: 2025-01-29 11:53:57.172 [INFO][5051] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:53:57.239289 containerd[1454]: 2025-01-29 11:53:57.179 [INFO][5051] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8" host="localhost" Jan 29 11:53:57.239289 containerd[1454]: 2025-01-29 11:53:57.188 [INFO][5051] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:53:57.239289 containerd[1454]: 2025-01-29 11:53:57.193 [INFO][5051] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:53:57.239289 containerd[1454]: 2025-01-29 11:53:57.195 [INFO][5051] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:53:57.239289 containerd[1454]: 2025-01-29 11:53:57.198 [INFO][5051] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:53:57.239289 containerd[1454]: 2025-01-29 11:53:57.198 [INFO][5051] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8" host="localhost" Jan 29 11:53:57.239289 containerd[1454]: 2025-01-29 11:53:57.200 [INFO][5051] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8 Jan 29 11:53:57.239289 containerd[1454]: 2025-01-29 11:53:57.207 [INFO][5051] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8" host="localhost" Jan 29 11:53:57.239289 containerd[1454]: 2025-01-29 11:53:57.216 [INFO][5051] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8" host="localhost" Jan 29 11:53:57.239289 containerd[1454]: 2025-01-29 11:53:57.216 [INFO][5051] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8" host="localhost" Jan 29 11:53:57.239289 containerd[1454]: 2025-01-29 11:53:57.216 [INFO][5051] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:53:57.239289 containerd[1454]: 2025-01-29 11:53:57.216 [INFO][5051] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8" HandleID="k8s-pod-network.51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8" Workload="localhost-k8s-coredns--6f6b679f8f--wck8n-eth0" Jan 29 11:53:57.240712 containerd[1454]: 2025-01-29 11:53:57.220 [INFO][5014] cni-plugin/k8s.go 386: Populated endpoint ContainerID="51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8" Namespace="kube-system" Pod="coredns-6f6b679f8f-wck8n" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wck8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--wck8n-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"fd1af382-4da8-46d6-b100-8da54f486a77", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 52, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-wck8n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4c59bd09cc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:53:57.240712 containerd[1454]: 2025-01-29 11:53:57.220 [INFO][5014] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8" Namespace="kube-system" Pod="coredns-6f6b679f8f-wck8n" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wck8n-eth0" Jan 29 11:53:57.240712 containerd[1454]: 2025-01-29 11:53:57.220 [INFO][5014] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4c59bd09cc ContainerID="51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8" Namespace="kube-system" Pod="coredns-6f6b679f8f-wck8n" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wck8n-eth0" Jan 29 11:53:57.240712 containerd[1454]: 2025-01-29 11:53:57.222 [INFO][5014] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8" Namespace="kube-system" Pod="coredns-6f6b679f8f-wck8n" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wck8n-eth0" Jan 29 11:53:57.240712 containerd[1454]: 2025-01-29 11:53:57.222 
[INFO][5014] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8" Namespace="kube-system" Pod="coredns-6f6b679f8f-wck8n" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wck8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--wck8n-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"fd1af382-4da8-46d6-b100-8da54f486a77", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 52, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8", Pod:"coredns-6f6b679f8f-wck8n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4c59bd09cc", MAC:"8a:01:38:48:55:7f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:53:57.240712 containerd[1454]: 2025-01-29 11:53:57.232 [INFO][5014] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8" Namespace="kube-system" Pod="coredns-6f6b679f8f-wck8n" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wck8n-eth0" Jan 29 11:53:57.335206 containerd[1454]: time="2025-01-29T11:53:57.334222899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:53:57.335206 containerd[1454]: time="2025-01-29T11:53:57.334982694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:53:57.335206 containerd[1454]: time="2025-01-29T11:53:57.334997883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:53:57.335206 containerd[1454]: time="2025-01-29T11:53:57.335108924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:53:57.365065 systemd[1]: Started cri-containerd-51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8.scope - libcontainer container 51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8. 
Jan 29 11:53:57.380987 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:53:57.405985 systemd-networkd[1384]: cali3fdfabe24d4: Gained IPv6LL Jan 29 11:53:57.414363 containerd[1454]: time="2025-01-29T11:53:57.414311311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wck8n,Uid:fd1af382-4da8-46d6-b100-8da54f486a77,Namespace:kube-system,Attempt:1,} returns sandbox id \"51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8\"" Jan 29 11:53:57.415254 kubelet[2506]: E0129 11:53:57.415230 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:53:57.419503 containerd[1454]: time="2025-01-29T11:53:57.419437108Z" level=info msg="CreateContainer within sandbox \"51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:53:57.493200 systemd-networkd[1384]: vxlan.calico: Link UP Jan 29 11:53:57.493210 systemd-networkd[1384]: vxlan.calico: Gained carrier Jan 29 11:53:57.657752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3656404498.mount: Deactivated successfully. Jan 29 11:53:57.673319 containerd[1454]: time="2025-01-29T11:53:57.673265032Z" level=info msg="CreateContainer within sandbox \"51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7044099add2e590eb533bd68718d1065c2af33a94e7f2217c88697eca95da2a1\"" Jan 29 11:53:57.675122 containerd[1454]: time="2025-01-29T11:53:57.673955284Z" level=info msg="StartContainer for \"7044099add2e590eb533bd68718d1065c2af33a94e7f2217c88697eca95da2a1\"" Jan 29 11:53:57.712153 systemd[1]: Started cri-containerd-7044099add2e590eb533bd68718d1065c2af33a94e7f2217c88697eca95da2a1.scope - libcontainer container 7044099add2e590eb533bd68718d1065c2af33a94e7f2217c88697eca95da2a1. Jan 29 11:53:57.823466 containerd[1454]: time="2025-01-29T11:53:57.823314377Z" level=info msg="StopPodSandbox for \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\"" Jan 29 11:53:57.823601 kubelet[2506]: E0129 11:53:57.823396 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:53:58.011202 containerd[1454]: time="2025-01-29T11:53:58.011119960Z" level=info msg="StartContainer for \"7044099add2e590eb533bd68718d1065c2af33a94e7f2217c88697eca95da2a1\" returns successfully" Jan 29 11:53:58.262271 containerd[1454]: 2025-01-29 11:53:58.071 [INFO][5224] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" Jan 29 11:53:58.262271 containerd[1454]: 2025-01-29 11:53:58.079 [INFO][5224] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" iface="eth0" netns="/var/run/netns/cni-56433cf5-a879-7eb4-d9c8-b62f1096da20" Jan 29 11:53:58.262271 containerd[1454]: 2025-01-29 11:53:58.079 [INFO][5224] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" iface="eth0" netns="/var/run/netns/cni-56433cf5-a879-7eb4-d9c8-b62f1096da20" Jan 29 11:53:58.262271 containerd[1454]: 2025-01-29 11:53:58.079 [INFO][5224] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" iface="eth0" netns="/var/run/netns/cni-56433cf5-a879-7eb4-d9c8-b62f1096da20" Jan 29 11:53:58.262271 containerd[1454]: 2025-01-29 11:53:58.079 [INFO][5224] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" Jan 29 11:53:58.262271 containerd[1454]: 2025-01-29 11:53:58.111 [INFO][5224] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" Jan 29 11:53:58.262271 containerd[1454]: 2025-01-29 11:53:58.137 [INFO][5253] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" HandleID="k8s-pod-network.9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" Workload="localhost-k8s-calico--kube--controllers--65d8d589cb--k69s4-eth0" Jan 29 11:53:58.262271 containerd[1454]: 2025-01-29 11:53:58.138 [INFO][5253] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:53:58.262271 containerd[1454]: 2025-01-29 11:53:58.138 [INFO][5253] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:53:58.262271 containerd[1454]: 2025-01-29 11:53:58.253 [WARNING][5253] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" HandleID="k8s-pod-network.9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" Workload="localhost-k8s-calico--kube--controllers--65d8d589cb--k69s4-eth0" Jan 29 11:53:58.262271 containerd[1454]: 2025-01-29 11:53:58.253 [INFO][5253] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" HandleID="k8s-pod-network.9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" Workload="localhost-k8s-calico--kube--controllers--65d8d589cb--k69s4-eth0" Jan 29 11:53:58.262271 containerd[1454]: 2025-01-29 11:53:58.255 [INFO][5253] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:53:58.262271 containerd[1454]: 2025-01-29 11:53:58.258 [INFO][5224] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" Jan 29 11:53:58.262931 containerd[1454]: time="2025-01-29T11:53:58.262456223Z" level=info msg="TearDown network for sandbox \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\" successfully" Jan 29 11:53:58.262931 containerd[1454]: time="2025-01-29T11:53:58.262485780Z" level=info msg="StopPodSandbox for \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\" returns successfully" Jan 29 11:53:58.263278 containerd[1454]: time="2025-01-29T11:53:58.263246947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65d8d589cb-k69s4,Uid:3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a,Namespace:calico-system,Attempt:1,}" Jan 29 11:53:58.340522 systemd[1]: run-netns-cni\x2d56433cf5\x2da879\x2d7eb4\x2dd9c8\x2db62f1096da20.mount: Deactivated successfully. 
Jan 29 11:53:58.364961 systemd-networkd[1384]: calie4c59bd09cc: Gained IPv6LL Jan 29 11:53:58.428618 kubelet[2506]: E0129 11:53:58.428576 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:53:58.449554 kubelet[2506]: I0129 11:53:58.449470 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-wck8n" podStartSLOduration=74.449445991 podStartE2EDuration="1m14.449445991s" podCreationTimestamp="2025-01-29 11:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:53:58.448700363 +0000 UTC m=+80.722040602" watchObservedRunningTime="2025-01-29 11:53:58.449445991 +0000 UTC m=+80.722786220" Jan 29 11:53:58.621198 systemd-networkd[1384]: vxlan.calico: Gained IPv6LL Jan 29 11:53:58.719440 systemd-networkd[1384]: cali197fc212388: Link UP Jan 29 11:53:58.720974 systemd-networkd[1384]: cali197fc212388: Gained carrier Jan 29 11:53:58.822819 kubelet[2506]: E0129 11:53:58.822758 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:53:58.851533 containerd[1454]: 2025-01-29 11:53:58.528 [INFO][5268] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--65d8d589cb--k69s4-eth0 calico-kube-controllers-65d8d589cb- calico-system 3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a 1122 0 2025-01-29 11:52:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:65d8d589cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-65d8d589cb-k69s4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali197fc212388 [] []}} ContainerID="45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475" Namespace="calico-system" Pod="calico-kube-controllers-65d8d589cb-k69s4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65d8d589cb--k69s4-" Jan 29 11:53:58.851533 containerd[1454]: 2025-01-29 11:53:58.528 [INFO][5268] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475" Namespace="calico-system" Pod="calico-kube-controllers-65d8d589cb-k69s4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65d8d589cb--k69s4-eth0" Jan 29 11:53:58.851533 containerd[1454]: 2025-01-29 11:53:58.572 [INFO][5281] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475" HandleID="k8s-pod-network.45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475" Workload="localhost-k8s-calico--kube--controllers--65d8d589cb--k69s4-eth0" Jan 29 11:53:58.851533 containerd[1454]: 2025-01-29 11:53:58.631 [INFO][5281] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475" HandleID="k8s-pod-network.45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475" Workload="localhost-k8s-calico--kube--controllers--65d8d589cb--k69s4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0xc0002dc610), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-65d8d589cb-k69s4", "timestamp":"2025-01-29 11:53:58.572712016 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:53:58.851533 containerd[1454]: 2025-01-29 11:53:58.631 [INFO][5281] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:53:58.851533 containerd[1454]: 2025-01-29 11:53:58.631 [INFO][5281] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:53:58.851533 containerd[1454]: 2025-01-29 11:53:58.631 [INFO][5281] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:53:58.851533 containerd[1454]: 2025-01-29 11:53:58.634 [INFO][5281] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475" host="localhost" Jan 29 11:53:58.851533 containerd[1454]: 2025-01-29 11:53:58.638 [INFO][5281] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:53:58.851533 containerd[1454]: 2025-01-29 11:53:58.644 [INFO][5281] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:53:58.851533 containerd[1454]: 2025-01-29 11:53:58.646 [INFO][5281] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:53:58.851533 containerd[1454]: 2025-01-29 11:53:58.650 [INFO][5281] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:53:58.851533 containerd[1454]: 2025-01-29 11:53:58.651 [INFO][5281] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475" host="localhost" Jan 29 11:53:58.851533 containerd[1454]: 2025-01-29 11:53:58.653 [INFO][5281] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475 Jan 29 11:53:58.851533 containerd[1454]: 2025-01-29 11:53:58.690 [INFO][5281] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475" host="localhost" Jan 29 11:53:58.851533 containerd[1454]: 2025-01-29 11:53:58.710 [INFO][5281] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475" host="localhost" Jan 29 11:53:58.851533 containerd[1454]: 2025-01-29 11:53:58.710 [INFO][5281] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475" host="localhost" Jan 29 11:53:58.851533 containerd[1454]: 2025-01-29 11:53:58.710 [INFO][5281] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:53:58.851533 containerd[1454]: 2025-01-29 11:53:58.711 [INFO][5281] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475" HandleID="k8s-pod-network.45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475" Workload="localhost-k8s-calico--kube--controllers--65d8d589cb--k69s4-eth0" Jan 29 11:53:58.852430 containerd[1454]: 2025-01-29 11:53:58.716 [INFO][5268] cni-plugin/k8s.go 386: Populated endpoint ContainerID="45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475" Namespace="calico-system" Pod="calico-kube-controllers-65d8d589cb-k69s4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65d8d589cb--k69s4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--65d8d589cb--k69s4-eth0", GenerateName:"calico-kube-controllers-65d8d589cb-", Namespace:"calico-system", SelfLink:"", UID:"3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 52, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65d8d589cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-65d8d589cb-k69s4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali197fc212388", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:53:58.852430 containerd[1454]: 2025-01-29 11:53:58.716 [INFO][5268] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475" Namespace="calico-system" Pod="calico-kube-controllers-65d8d589cb-k69s4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65d8d589cb--k69s4-eth0" Jan 29 11:53:58.852430 containerd[1454]: 2025-01-29 11:53:58.716 [INFO][5268] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali197fc212388 ContainerID="45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475" Namespace="calico-system" Pod="calico-kube-controllers-65d8d589cb-k69s4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65d8d589cb--k69s4-eth0" Jan 29 11:53:58.852430 containerd[1454]: 2025-01-29 11:53:58.719 [INFO][5268] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475" Namespace="calico-system" Pod="calico-kube-controllers-65d8d589cb-k69s4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65d8d589cb--k69s4-eth0" Jan 29 11:53:58.852430 containerd[1454]: 2025-01-29 11:53:58.720 [INFO][5268] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475" Namespace="calico-system" Pod="calico-kube-controllers-65d8d589cb-k69s4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65d8d589cb--k69s4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--65d8d589cb--k69s4-eth0", GenerateName:"calico-kube-controllers-65d8d589cb-", Namespace:"calico-system", SelfLink:"", UID:"3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 52, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65d8d589cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475", Pod:"calico-kube-controllers-65d8d589cb-k69s4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali197fc212388", MAC:"7e:ce:43:2c:b8:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:53:58.852430 containerd[1454]: 2025-01-29 11:53:58.847 [INFO][5268] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475" Namespace="calico-system" Pod="calico-kube-controllers-65d8d589cb-k69s4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65d8d589cb--k69s4-eth0" Jan 29 11:53:58.911266 containerd[1454]: time="2025-01-29T11:53:58.911139430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:53:58.911266 containerd[1454]: time="2025-01-29T11:53:58.911227097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:53:58.911266 containerd[1454]: time="2025-01-29T11:53:58.911242245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:53:58.911680 containerd[1454]: time="2025-01-29T11:53:58.911353056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:53:58.946998 systemd[1]: Started cri-containerd-45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475.scope - libcontainer container 45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475. 
Jan 29 11:53:58.963412 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:53:58.998924 containerd[1454]: time="2025-01-29T11:53:58.998868844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65d8d589cb-k69s4,Uid:3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a,Namespace:calico-system,Attempt:1,} returns sandbox id \"45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475\"" Jan 29 11:53:59.296012 containerd[1454]: time="2025-01-29T11:53:59.295861088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:53:59.300947 containerd[1454]: time="2025-01-29T11:53:59.300896888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 11:53:59.302559 containerd[1454]: time="2025-01-29T11:53:59.302384646Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:53:59.305150 containerd[1454]: time="2025-01-29T11:53:59.305078905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:53:59.305869 containerd[1454]: time="2025-01-29T11:53:59.305840362Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 3.101734081s" Jan 29 11:53:59.305920 containerd[1454]: time="2025-01-29T11:53:59.305872674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 11:53:59.307296 containerd[1454]: time="2025-01-29T11:53:59.307255902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 11:53:59.308193 containerd[1454]: time="2025-01-29T11:53:59.308144510Z" level=info msg="CreateContainer within sandbox \"3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 11:53:59.335021 containerd[1454]: time="2025-01-29T11:53:59.334932755Z" level=info msg="CreateContainer within sandbox \"3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"45e5dd05f1fd1eb2da66c38b5191abb0bc083dbfd6b4330b6cdbfc82c4e79f0a\"" Jan 29 11:53:59.335688 containerd[1454]: time="2025-01-29T11:53:59.335642383Z" level=info msg="StartContainer for \"45e5dd05f1fd1eb2da66c38b5191abb0bc083dbfd6b4330b6cdbfc82c4e79f0a\"" Jan 29 11:53:59.373124 systemd[1]: Started cri-containerd-45e5dd05f1fd1eb2da66c38b5191abb0bc083dbfd6b4330b6cdbfc82c4e79f0a.scope - libcontainer container 45e5dd05f1fd1eb2da66c38b5191abb0bc083dbfd6b4330b6cdbfc82c4e79f0a. 
Jan 29 11:53:59.438492 containerd[1454]: time="2025-01-29T11:53:59.438433023Z" level=info msg="StartContainer for \"45e5dd05f1fd1eb2da66c38b5191abb0bc083dbfd6b4330b6cdbfc82c4e79f0a\" returns successfully" Jan 29 11:53:59.445313 kubelet[2506]: E0129 11:53:59.445261 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:00.449955 kubelet[2506]: E0129 11:54:00.447553 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:00.605100 systemd-networkd[1384]: cali197fc212388: Gained IPv6LL Jan 29 11:54:01.241468 systemd[1]: Started sshd@21-10.0.0.52:22-10.0.0.1:47992.service - OpenSSH per-connection server daemon (10.0.0.1:47992). Jan 29 11:54:01.291824 sshd[5391]: Accepted publickey for core from 10.0.0.1 port 47992 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:54:01.294079 sshd[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:54:01.301006 systemd-logind[1439]: New session 22 of user core. Jan 29 11:54:01.310082 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 11:54:01.519240 sshd[5391]: pam_unix(sshd:session): session closed for user core Jan 29 11:54:01.524510 systemd[1]: sshd@21-10.0.0.52:22-10.0.0.1:47992.service: Deactivated successfully. Jan 29 11:54:01.527227 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 11:54:01.528000 systemd-logind[1439]: Session 22 logged out. Waiting for processes to exit. Jan 29 11:54:01.529381 systemd-logind[1439]: Removed session 22. Jan 29 11:54:01.799578 containerd[1454]: time="2025-01-29T11:54:01.799412926Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:01.800580 containerd[1454]: time="2025-01-29T11:54:01.800530117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 29 11:54:01.801989 containerd[1454]: time="2025-01-29T11:54:01.801924586Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:01.804089 containerd[1454]: time="2025-01-29T11:54:01.804041917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:01.804713 containerd[1454]: time="2025-01-29T11:54:01.804657746Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.497370465s" Jan 29 11:54:01.804713 containerd[1454]: time="2025-01-29T11:54:01.804699987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 29 11:54:01.805728 containerd[1454]: time="2025-01-29T11:54:01.805696238Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 11:54:01.812665 containerd[1454]: time="2025-01-29T11:54:01.812632310Z" level=info msg="CreateContainer within sandbox \"45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 11:54:01.833473 containerd[1454]: time="2025-01-29T11:54:01.833400239Z" level=info msg="CreateContainer within sandbox \"45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0b3d9ae666700d37ac742176855cabbec4f8bc65841a016405fd96b9b578fe6f\"" Jan 29 11:54:01.834064 containerd[1454]: time="2025-01-29T11:54:01.834027250Z" level=info msg="StartContainer for \"0b3d9ae666700d37ac742176855cabbec4f8bc65841a016405fd96b9b578fe6f\"" Jan 29 11:54:01.875974 systemd[1]: Started cri-containerd-0b3d9ae666700d37ac742176855cabbec4f8bc65841a016405fd96b9b578fe6f.scope - libcontainer container 0b3d9ae666700d37ac742176855cabbec4f8bc65841a016405fd96b9b578fe6f. Jan 29 11:54:01.983938 containerd[1454]: time="2025-01-29T11:54:01.983846797Z" level=info msg="StartContainer for \"0b3d9ae666700d37ac742176855cabbec4f8bc65841a016405fd96b9b578fe6f\" returns successfully" Jan 29 11:54:02.563947 kubelet[2506]: I0129 11:54:02.563659 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-65d8d589cb-k69s4" podStartSLOduration=69.759024709 podStartE2EDuration="1m12.563637834s" podCreationTimestamp="2025-01-29 11:52:50 +0000 UTC" firstStartedPulling="2025-01-29 11:53:59.000946243 +0000 UTC m=+81.274286472" lastFinishedPulling="2025-01-29 11:54:01.805559368 +0000 UTC m=+84.078899597" observedRunningTime="2025-01-29 11:54:02.550975755 +0000 UTC m=+84.824315984" watchObservedRunningTime="2025-01-29 11:54:02.563637834 +0000 UTC m=+84.836978063" Jan 29 11:54:03.612220 containerd[1454]: time="2025-01-29T11:54:03.612161507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:03.613029 containerd[1454]: time="2025-01-29T11:54:03.612970833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 11:54:03.614213 containerd[1454]: time="2025-01-29T11:54:03.614169398Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:03.616485 containerd[1454]: time="2025-01-29T11:54:03.616432733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:03.617485 containerd[1454]: time="2025-01-29T11:54:03.617433042Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.811706987s" Jan 29 11:54:03.617485 containerd[1454]: time="2025-01-29T11:54:03.617472176Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 11:54:03.620218 containerd[1454]: time="2025-01-29T11:54:03.620189193Z" level=info msg="CreateContainer within sandbox \"3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 11:54:03.638167 containerd[1454]: time="2025-01-29T11:54:03.638100618Z" level=info msg="CreateContainer within sandbox \"3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e69d69f2edcd9a9d58ad0dd4a44dd91902c4f8f45bf9218ae1bc9c32e92d8b99\"" Jan 29 11:54:03.638689 containerd[1454]: time="2025-01-29T11:54:03.638635312Z" level=info msg="StartContainer for \"e69d69f2edcd9a9d58ad0dd4a44dd91902c4f8f45bf9218ae1bc9c32e92d8b99\"" Jan 29 11:54:03.683080 systemd[1]: Started cri-containerd-e69d69f2edcd9a9d58ad0dd4a44dd91902c4f8f45bf9218ae1bc9c32e92d8b99.scope - libcontainer container e69d69f2edcd9a9d58ad0dd4a44dd91902c4f8f45bf9218ae1bc9c32e92d8b99. Jan 29 11:54:03.716589 containerd[1454]: time="2025-01-29T11:54:03.716514031Z" level=info msg="StartContainer for \"e69d69f2edcd9a9d58ad0dd4a44dd91902c4f8f45bf9218ae1bc9c32e92d8b99\" returns successfully" Jan 29 11:54:03.823179 containerd[1454]: time="2025-01-29T11:54:03.822971681Z" level=info msg="StopPodSandbox for \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\"" Jan 29 11:54:03.938480 containerd[1454]: 2025-01-29 11:54:03.896 [INFO][5529] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Jan 29 11:54:03.938480 containerd[1454]: 2025-01-29 11:54:03.897 [INFO][5529] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" iface="eth0" netns="/var/run/netns/cni-88d5cfa2-a65b-8597-bc45-9756a69d2392" Jan 29 11:54:03.938480 containerd[1454]: 2025-01-29 11:54:03.897 [INFO][5529] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" iface="eth0" netns="/var/run/netns/cni-88d5cfa2-a65b-8597-bc45-9756a69d2392" Jan 29 11:54:03.938480 containerd[1454]: 2025-01-29 11:54:03.897 [INFO][5529] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" iface="eth0" netns="/var/run/netns/cni-88d5cfa2-a65b-8597-bc45-9756a69d2392" Jan 29 11:54:03.938480 containerd[1454]: 2025-01-29 11:54:03.897 [INFO][5529] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Jan 29 11:54:03.938480 containerd[1454]: 2025-01-29 11:54:03.897 [INFO][5529] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Jan 29 11:54:03.938480 containerd[1454]: 2025-01-29 11:54:03.924 [INFO][5539] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" HandleID="k8s-pod-network.5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Workload="localhost-k8s-coredns--6f6b679f8f--hfp24-eth0" Jan 29 11:54:03.938480 containerd[1454]: 2025-01-29 11:54:03.924 [INFO][5539] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:54:03.938480 containerd[1454]: 2025-01-29 11:54:03.924 [INFO][5539] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:54:03.938480 containerd[1454]: 2025-01-29 11:54:03.930 [WARNING][5539] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" HandleID="k8s-pod-network.5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Workload="localhost-k8s-coredns--6f6b679f8f--hfp24-eth0" Jan 29 11:54:03.938480 containerd[1454]: 2025-01-29 11:54:03.930 [INFO][5539] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" HandleID="k8s-pod-network.5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Workload="localhost-k8s-coredns--6f6b679f8f--hfp24-eth0" Jan 29 11:54:03.938480 containerd[1454]: 2025-01-29 11:54:03.932 [INFO][5539] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:54:03.938480 containerd[1454]: 2025-01-29 11:54:03.935 [INFO][5529] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Jan 29 11:54:03.939256 containerd[1454]: time="2025-01-29T11:54:03.938914353Z" level=info msg="TearDown network for sandbox \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\" successfully" Jan 29 11:54:03.939256 containerd[1454]: time="2025-01-29T11:54:03.938952275Z" level=info msg="StopPodSandbox for \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\" returns successfully" Jan 29 11:54:03.939370 kubelet[2506]: E0129 11:54:03.939347 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:03.940209 containerd[1454]: time="2025-01-29T11:54:03.940174033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfp24,Uid:92902e69-b7b9-4835-bfaf-552ffb2affbd,Namespace:kube-system,Attempt:1,}" Jan 29 11:54:03.942980 systemd[1]: run-netns-cni\x2d88d5cfa2\x2da65b\x2d8597\x2dbc45\x2d9756a69d2392.mount: Deactivated successfully. 
Jan 29 11:54:04.067882 systemd-networkd[1384]: calie57df460d32: Link UP Jan 29 11:54:04.068146 systemd-networkd[1384]: calie57df460d32: Gained carrier Jan 29 11:54:04.081924 containerd[1454]: 2025-01-29 11:54:03.991 [INFO][5548] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--hfp24-eth0 coredns-6f6b679f8f- kube-system 92902e69-b7b9-4835-bfaf-552ffb2affbd 1184 0 2025-01-29 11:52:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-hfp24 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie57df460d32 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1" Namespace="kube-system" Pod="coredns-6f6b679f8f-hfp24" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hfp24-" Jan 29 11:54:04.081924 containerd[1454]: 2025-01-29 11:54:03.991 [INFO][5548] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1" Namespace="kube-system" Pod="coredns-6f6b679f8f-hfp24" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hfp24-eth0" Jan 29 11:54:04.081924 containerd[1454]: 2025-01-29 11:54:04.024 [INFO][5561] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1" HandleID="k8s-pod-network.7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1" Workload="localhost-k8s-coredns--6f6b679f8f--hfp24-eth0" Jan 29 11:54:04.081924 containerd[1454]: 2025-01-29 11:54:04.033 [INFO][5561] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1" HandleID="k8s-pod-network.7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1" Workload="localhost-k8s-coredns--6f6b679f8f--hfp24-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050e00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-hfp24", "timestamp":"2025-01-29 11:54:04.024051004 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:54:04.081924 containerd[1454]: 2025-01-29 11:54:04.033 [INFO][5561] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:54:04.081924 containerd[1454]: 2025-01-29 11:54:04.033 [INFO][5561] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:54:04.081924 containerd[1454]: 2025-01-29 11:54:04.033 [INFO][5561] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:54:04.081924 containerd[1454]: 2025-01-29 11:54:04.035 [INFO][5561] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1" host="localhost" Jan 29 11:54:04.081924 containerd[1454]: 2025-01-29 11:54:04.039 [INFO][5561] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:54:04.081924 containerd[1454]: 2025-01-29 11:54:04.044 [INFO][5561] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:54:04.081924 containerd[1454]: 2025-01-29 11:54:04.046 [INFO][5561] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:54:04.081924 containerd[1454]: 2025-01-29 11:54:04.049 [INFO][5561] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:54:04.081924 containerd[1454]: 2025-01-29 11:54:04.049 [INFO][5561] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1" host="localhost" Jan 29 11:54:04.081924 containerd[1454]: 2025-01-29 11:54:04.050 [INFO][5561] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1 Jan 29 11:54:04.081924 containerd[1454]: 2025-01-29 11:54:04.056 [INFO][5561] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1" host="localhost" Jan 29 11:54:04.081924 containerd[1454]: 2025-01-29 11:54:04.061 [INFO][5561] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1" host="localhost" Jan 29 11:54:04.081924 containerd[1454]: 2025-01-29 11:54:04.061 [INFO][5561] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1" host="localhost" Jan 29 11:54:04.081924 containerd[1454]: 2025-01-29 11:54:04.061 [INFO][5561] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
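
The assignment above confirms the host's affinity for the 192.168.88.128/26 block and then claims the next free address, landing on 192.168.88.132. A self-contained sketch of first-free selection within such a block (an illustration, not Calico's actual ipam.go logic), assuming .128 through .131 were handed out to earlier pods:

```go
package main

import (
	"fmt"
	"net/netip"
)

// firstFree scans a block from its network address upward and returns the
// first address not yet in use.
func firstFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	used := map[netip.Addr]bool{}
	// Assumption: .128 through .131 were claimed earlier in this boot.
	for _, s := range []string{"192.168.88.128", "192.168.88.129", "192.168.88.130", "192.168.88.131"} {
		used[netip.MustParseAddr(s)] = true
	}
	if a, ok := firstFree(block, used); ok {
		fmt.Println(a) // 192.168.88.132, matching the log
	}
}
```
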
Jan 29 11:54:04.081924 containerd[1454]: 2025-01-29 11:54:04.061 [INFO][5561] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1" HandleID="k8s-pod-network.7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1" Workload="localhost-k8s-coredns--6f6b679f8f--hfp24-eth0" Jan 29 11:54:04.083009 containerd[1454]: 2025-01-29 11:54:04.065 [INFO][5548] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1" Namespace="kube-system" Pod="coredns-6f6b679f8f-hfp24" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hfp24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hfp24-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"92902e69-b7b9-4835-bfaf-552ffb2affbd", ResourceVersion:"1184", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 52, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-hfp24", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie57df460d32", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:54:04.083009 containerd[1454]: 2025-01-29 11:54:04.065 [INFO][5548] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1" Namespace="kube-system" Pod="coredns-6f6b679f8f-hfp24" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hfp24-eth0" Jan 29 11:54:04.083009 containerd[1454]: 2025-01-29 11:54:04.065 [INFO][5548] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie57df460d32 ContainerID="7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1" Namespace="kube-system" Pod="coredns-6f6b679f8f-hfp24" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hfp24-eth0" Jan 29 11:54:04.083009 containerd[1454]: 2025-01-29 11:54:04.068 [INFO][5548] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1" Namespace="kube-system" Pod="coredns-6f6b679f8f-hfp24" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hfp24-eth0" Jan 29 11:54:04.083009 containerd[1454]: 2025-01-29 11:54:04.069 
[INFO][5548] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1" Namespace="kube-system" Pod="coredns-6f6b679f8f-hfp24" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hfp24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hfp24-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"92902e69-b7b9-4835-bfaf-552ffb2affbd", ResourceVersion:"1184", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 52, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1", Pod:"coredns-6f6b679f8f-hfp24", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie57df460d32", MAC:"fa:11:64:6a:39:2a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:54:04.083009 containerd[1454]: 2025-01-29 11:54:04.077 [INFO][5548] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1" Namespace="kube-system" Pod="coredns-6f6b679f8f-hfp24" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hfp24-eth0" Jan 29 11:54:04.110997 containerd[1454]: time="2025-01-29T11:54:04.110841570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:54:04.110997 containerd[1454]: time="2025-01-29T11:54:04.110923305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:54:04.110997 containerd[1454]: time="2025-01-29T11:54:04.110940908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:04.111336 containerd[1454]: time="2025-01-29T11:54:04.111065434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:04.138980 systemd[1]: Started cri-containerd-7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1.scope - libcontainer container 7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1. 
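
In the endpoint dumps above, ports are printed as Go hex literals: Port:0x35 is 53 (the two DNS ports) and Port:0x23c1 is 9153 (the CoreDNS metrics port). A trimmed-down struct, assumed here for illustration only (not the projectcalico.org/v3 type), reproducing the same rendering:

```go
package main

import "fmt"

// Port mirrors the shape of the WorkloadEndpointPort fields seen in the
// log dump above, trimmed to the interesting fields.
type Port struct {
	Name     string
	Protocol string
	Port     uint16
}

func main() {
	ports := []Port{{"dns", "UDP", 0x35}, {"dns-tcp", "TCP", 0x35}, {"metrics", "TCP", 0x23c1}}
	for _, p := range ports {
		// %#x reproduces the log's hex form; %d shows the familiar value.
		fmt.Printf("%-8s %s %#x = %d\n", p.Name, p.Protocol, p.Port, p.Port)
	}
}
```
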
Jan 29 11:54:04.153941 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:54:04.183586 containerd[1454]: time="2025-01-29T11:54:04.183534257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hfp24,Uid:92902e69-b7b9-4835-bfaf-552ffb2affbd,Namespace:kube-system,Attempt:1,} returns sandbox id \"7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1\"" Jan 29 11:54:04.184939 kubelet[2506]: E0129 11:54:04.184903 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:04.187822 containerd[1454]: time="2025-01-29T11:54:04.187732874Z" level=info msg="CreateContainer within sandbox \"7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:54:04.203271 containerd[1454]: time="2025-01-29T11:54:04.203100122Z" level=info msg="CreateContainer within sandbox \"7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"340cdb859dda6289fff2fb0e9105eaf98c5c8460262927b42ec2f8d50b42424b\"" Jan 29 11:54:04.204277 containerd[1454]: time="2025-01-29T11:54:04.204209546Z" level=info msg="StartContainer for \"340cdb859dda6289fff2fb0e9105eaf98c5c8460262927b42ec2f8d50b42424b\"" Jan 29 11:54:04.235961 systemd[1]: Started cri-containerd-340cdb859dda6289fff2fb0e9105eaf98c5c8460262927b42ec2f8d50b42424b.scope - libcontainer container 340cdb859dda6289fff2fb0e9105eaf98c5c8460262927b42ec2f8d50b42424b. Jan 29 11:54:04.266044 kubelet[2506]: I0129 11:54:04.265599 2506 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 11:54:04.266044 kubelet[2506]: I0129 11:54:04.265674 2506 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 11:54:04.266198 containerd[1454]: time="2025-01-29T11:54:04.265924063Z" level=info msg="StartContainer for \"340cdb859dda6289fff2fb0e9105eaf98c5c8460262927b42ec2f8d50b42424b\" returns successfully" Jan 29 11:54:04.466165 kubelet[2506]: E0129 11:54:04.466025 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:04.477449 kubelet[2506]: I0129 11:54:04.476270 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-m4sk6" podStartSLOduration=67.061354294 podStartE2EDuration="1m14.476247941s" podCreationTimestamp="2025-01-29 11:52:50 +0000 UTC" firstStartedPulling="2025-01-29 11:53:56.203381443 +0000 UTC m=+78.476721672" lastFinishedPulling="2025-01-29 11:54:03.61827509 +0000 UTC m=+85.891615319" observedRunningTime="2025-01-29 11:54:04.475126564 +0000 UTC m=+86.748466803" watchObservedRunningTime="2025-01-29 11:54:04.476247941 +0000 UTC m=+86.749588170" Jan 29 11:54:05.468277 kubelet[2506]: E0129 11:54:05.468218 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:05.533039 systemd-networkd[1384]: calie57df460d32: Gained IPv6LL Jan 29 11:54:06.470136 
kubelet[2506]: E0129 11:54:06.470092 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:06.533212 systemd[1]: Started sshd@22-10.0.0.52:22-10.0.0.1:48004.service - OpenSSH per-connection server daemon (10.0.0.1:48004). Jan 29 11:54:06.580952 sshd[5670]: Accepted publickey for core from 10.0.0.1 port 48004 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:54:06.583243 sshd[5670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:54:06.587514 systemd-logind[1439]: New session 23 of user core. Jan 29 11:54:06.597954 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 11:54:06.739937 sshd[5670]: pam_unix(sshd:session): session closed for user core Jan 29 11:54:06.744494 systemd[1]: sshd@22-10.0.0.52:22-10.0.0.1:48004.service: Deactivated successfully. Jan 29 11:54:06.746586 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 11:54:06.747242 systemd-logind[1439]: Session 23 logged out. Waiting for processes to exit. Jan 29 11:54:06.748215 systemd-logind[1439]: Removed session 23. Jan 29 11:54:06.822119 kubelet[2506]: E0129 11:54:06.821997 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:06.822348 containerd[1454]: time="2025-01-29T11:54:06.822241222Z" level=info msg="StopPodSandbox for \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\"" Jan 29 11:54:06.822833 containerd[1454]: time="2025-01-29T11:54:06.822486797Z" level=info msg="StopPodSandbox for \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\"" Jan 29 11:54:06.879171 kubelet[2506]: I0129 11:54:06.879084 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-hfp24" podStartSLOduration=82.879066032 podStartE2EDuration="1m22.879066032s" podCreationTimestamp="2025-01-29 11:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:54:04.487953251 +0000 UTC m=+86.761293500" watchObservedRunningTime="2025-01-29 11:54:06.879066032 +0000 UTC m=+89.152406261" Jan 29 11:54:06.928973 containerd[1454]: 2025-01-29 11:54:06.886 [INFO][5721] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Jan 29 11:54:06.928973 containerd[1454]: 2025-01-29 11:54:06.886 [INFO][5721] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" iface="eth0" netns="/var/run/netns/cni-2a6ecc26-4874-c064-4a0e-b411f3b10cc6" Jan 29 11:54:06.928973 containerd[1454]: 2025-01-29 11:54:06.887 [INFO][5721] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" iface="eth0" netns="/var/run/netns/cni-2a6ecc26-4874-c064-4a0e-b411f3b10cc6" Jan 29 11:54:06.928973 containerd[1454]: 2025-01-29 11:54:06.887 [INFO][5721] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" iface="eth0" netns="/var/run/netns/cni-2a6ecc26-4874-c064-4a0e-b411f3b10cc6" Jan 29 11:54:06.928973 containerd[1454]: 2025-01-29 11:54:06.887 [INFO][5721] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Jan 29 11:54:06.928973 containerd[1454]: 2025-01-29 11:54:06.887 [INFO][5721] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Jan 29 11:54:06.928973 containerd[1454]: 2025-01-29 11:54:06.913 [INFO][5736] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" HandleID="k8s-pod-network.34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Workload="localhost-k8s-calico--apiserver--5f474569cb--m47z4-eth0" Jan 29 11:54:06.928973 containerd[1454]: 2025-01-29 11:54:06.913 [INFO][5736] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:54:06.928973 containerd[1454]: 2025-01-29 11:54:06.913 [INFO][5736] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:54:06.928973 containerd[1454]: 2025-01-29 11:54:06.921 [WARNING][5736] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" HandleID="k8s-pod-network.34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Workload="localhost-k8s-calico--apiserver--5f474569cb--m47z4-eth0" Jan 29 11:54:06.928973 containerd[1454]: 2025-01-29 11:54:06.921 [INFO][5736] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" HandleID="k8s-pod-network.34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Workload="localhost-k8s-calico--apiserver--5f474569cb--m47z4-eth0" Jan 29 11:54:06.928973 containerd[1454]: 2025-01-29 11:54:06.923 [INFO][5736] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:54:06.928973 containerd[1454]: 2025-01-29 11:54:06.926 [INFO][5721] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Jan 29 11:54:06.930604 containerd[1454]: time="2025-01-29T11:54:06.929200286Z" level=info msg="TearDown network for sandbox \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\" successfully" Jan 29 11:54:06.930604 containerd[1454]: time="2025-01-29T11:54:06.929228289Z" level=info msg="StopPodSandbox for \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\" returns successfully" Jan 29 11:54:06.930604 containerd[1454]: time="2025-01-29T11:54:06.930215912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f474569cb-m47z4,Uid:bdf8891f-6b1d-4211-be98-e56b0b0de0ad,Namespace:calico-apiserver,Attempt:1,}" Jan 29 11:54:06.932661 systemd[1]: run-netns-cni\x2d2a6ecc26\x2d4874\x2dc064\x2d4a0e\x2db411f3b10cc6.mount: Deactivated successfully. Jan 29 11:54:06.939948 containerd[1454]: 2025-01-29 11:54:06.878 [INFO][5716] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" Jan 29 11:54:06.939948 containerd[1454]: 2025-01-29 11:54:06.879 [INFO][5716] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" iface="eth0" netns="/var/run/netns/cni-9f966bbd-6327-0f27-ed4d-02460662a4ea" Jan 29 11:54:06.939948 containerd[1454]: 2025-01-29 11:54:06.879 [INFO][5716] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" iface="eth0" netns="/var/run/netns/cni-9f966bbd-6327-0f27-ed4d-02460662a4ea" Jan 29 11:54:06.939948 containerd[1454]: 2025-01-29 11:54:06.880 [INFO][5716] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" iface="eth0" netns="/var/run/netns/cni-9f966bbd-6327-0f27-ed4d-02460662a4ea" Jan 29 11:54:06.939948 containerd[1454]: 2025-01-29 11:54:06.880 [INFO][5716] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" Jan 29 11:54:06.939948 containerd[1454]: 2025-01-29 11:54:06.880 [INFO][5716] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" Jan 29 11:54:06.939948 containerd[1454]: 2025-01-29 11:54:06.920 [INFO][5731] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" HandleID="k8s-pod-network.133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" Workload="localhost-k8s-calico--apiserver--5f474569cb--q4cqv-eth0" Jan 29 11:54:06.939948 containerd[1454]: 2025-01-29 11:54:06.920 [INFO][5731] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:54:06.939948 containerd[1454]: 2025-01-29 11:54:06.923 [INFO][5731] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:54:06.939948 containerd[1454]: 2025-01-29 11:54:06.928 [WARNING][5731] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" HandleID="k8s-pod-network.133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" Workload="localhost-k8s-calico--apiserver--5f474569cb--q4cqv-eth0" Jan 29 11:54:06.939948 containerd[1454]: 2025-01-29 11:54:06.928 [INFO][5731] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" HandleID="k8s-pod-network.133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" Workload="localhost-k8s-calico--apiserver--5f474569cb--q4cqv-eth0" Jan 29 11:54:06.939948 containerd[1454]: 2025-01-29 11:54:06.931 [INFO][5731] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:54:06.939948 containerd[1454]: 2025-01-29 11:54:06.937 [INFO][5716] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" Jan 29 11:54:06.940273 containerd[1454]: time="2025-01-29T11:54:06.940242946Z" level=info msg="TearDown network for sandbox \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\" successfully" Jan 29 11:54:06.940273 containerd[1454]: time="2025-01-29T11:54:06.940269347Z" level=info msg="StopPodSandbox for \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\" returns successfully" Jan 29 11:54:06.941105 containerd[1454]: time="2025-01-29T11:54:06.941070446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f474569cb-q4cqv,Uid:b6dfda52-b36b-4860-a295-437d50d36570,Namespace:calico-apiserver,Attempt:1,}" Jan 29 11:54:06.944839 systemd[1]: run-netns-cni\x2d9f966bbd\x2d6327\x2d0f27\x2ded4d\x2d02460662a4ea.mount: Deactivated successfully. Jan 29 11:54:07.273216 systemd-networkd[1384]: cali4f3ab3f7321: Link UP Jan 29 11:54:07.273712 systemd-networkd[1384]: cali4f3ab3f7321: Gained carrier Jan 29 11:54:07.310854 containerd[1454]: 2025-01-29 11:54:07.137 [INFO][5747] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f474569cb--m47z4-eth0 calico-apiserver-5f474569cb- calico-apiserver bdf8891f-6b1d-4211-be98-e56b0b0de0ad 1223 0 2025-01-29 11:52:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f474569cb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f474569cb-m47z4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4f3ab3f7321 [] []}} ContainerID="e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d" Namespace="calico-apiserver" Pod="calico-apiserver-5f474569cb-m47z4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f474569cb--m47z4-" Jan 29 11:54:07.310854 containerd[1454]: 2025-01-29 11:54:07.137 [INFO][5747] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d" Namespace="calico-apiserver" Pod="calico-apiserver-5f474569cb-m47z4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f474569cb--m47z4-eth0" Jan 29 11:54:07.310854 containerd[1454]: 2025-01-29 11:54:07.179 [INFO][5760] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d" HandleID="k8s-pod-network.e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d" Workload="localhost-k8s-calico--apiserver--5f474569cb--m47z4-eth0" Jan 29 11:54:07.310854 containerd[1454]: 2025-01-29 11:54:07.186 [INFO][5760] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d" HandleID="k8s-pod-network.e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d" Workload="localhost-k8s-calico--apiserver--5f474569cb--m47z4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051770), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f474569cb-m47z4", "timestamp":"2025-01-29 11:54:07.179450868 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:54:07.310854 containerd[1454]: 2025-01-29 11:54:07.186 [INFO][5760] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:54:07.310854 containerd[1454]: 2025-01-29 11:54:07.186 [INFO][5760] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:54:07.310854 containerd[1454]: 2025-01-29 11:54:07.186 [INFO][5760] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:54:07.310854 containerd[1454]: 2025-01-29 11:54:07.188 [INFO][5760] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d" host="localhost" Jan 29 11:54:07.310854 containerd[1454]: 2025-01-29 11:54:07.194 [INFO][5760] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:54:07.310854 containerd[1454]: 2025-01-29 11:54:07.198 [INFO][5760] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:54:07.310854 containerd[1454]: 2025-01-29 11:54:07.200 [INFO][5760] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:54:07.310854 containerd[1454]: 2025-01-29 11:54:07.202 [INFO][5760] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:54:07.310854 containerd[1454]: 2025-01-29 11:54:07.202 [INFO][5760] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d" host="localhost" Jan 29 11:54:07.310854 containerd[1454]: 2025-01-29 11:54:07.204 [INFO][5760] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d Jan 29 11:54:07.310854 containerd[1454]: 2025-01-29 11:54:07.219 [INFO][5760] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d" host="localhost" Jan 29 11:54:07.310854 containerd[1454]: 2025-01-29 11:54:07.268 [INFO][5760] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d" host="localhost" Jan 29 11:54:07.310854 containerd[1454]: 2025-01-29 11:54:07.268 [INFO][5760] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d" host="localhost" Jan 29 11:54:07.310854 containerd[1454]: 2025-01-29 11:54:07.268 [INFO][5760] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:54:07.310854 containerd[1454]: 2025-01-29 11:54:07.268 [INFO][5760] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d" HandleID="k8s-pod-network.e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d" Workload="localhost-k8s-calico--apiserver--5f474569cb--m47z4-eth0" Jan 29 11:54:07.311559 containerd[1454]: 2025-01-29 11:54:07.271 [INFO][5747] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d" Namespace="calico-apiserver" Pod="calico-apiserver-5f474569cb-m47z4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f474569cb--m47z4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f474569cb--m47z4-eth0", GenerateName:"calico-apiserver-5f474569cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"bdf8891f-6b1d-4211-be98-e56b0b0de0ad", ResourceVersion:"1223", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f474569cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f474569cb-m47z4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f3ab3f7321", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:54:07.311559 containerd[1454]: 2025-01-29 11:54:07.271 [INFO][5747] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d" Namespace="calico-apiserver" Pod="calico-apiserver-5f474569cb-m47z4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f474569cb--m47z4-eth0" Jan 29 11:54:07.311559 containerd[1454]: 2025-01-29 11:54:07.271 [INFO][5747] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f3ab3f7321 ContainerID="e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d" Namespace="calico-apiserver" Pod="calico-apiserver-5f474569cb-m47z4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f474569cb--m47z4-eth0" Jan 29 11:54:07.311559 containerd[1454]: 2025-01-29 11:54:07.274 [INFO][5747] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d" Namespace="calico-apiserver" Pod="calico-apiserver-5f474569cb-m47z4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f474569cb--m47z4-eth0" Jan 29 11:54:07.311559 containerd[1454]: 2025-01-29 11:54:07.274 [INFO][5747] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d" Namespace="calico-apiserver" Pod="calico-apiserver-5f474569cb-m47z4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f474569cb--m47z4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f474569cb--m47z4-eth0", GenerateName:"calico-apiserver-5f474569cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"bdf8891f-6b1d-4211-be98-e56b0b0de0ad", ResourceVersion:"1223", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f474569cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d", Pod:"calico-apiserver-5f474569cb-m47z4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f3ab3f7321", MAC:"ca:5d:a1:87:8d:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:54:07.311559 containerd[1454]: 2025-01-29 11:54:07.307 [INFO][5747] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d" Namespace="calico-apiserver" Pod="calico-apiserver-5f474569cb-m47z4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f474569cb--m47z4-eth0" Jan 29 11:54:07.372854 containerd[1454]: time="2025-01-29T11:54:07.372602220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:54:07.372854 containerd[1454]: time="2025-01-29T11:54:07.372691509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:54:07.372854 containerd[1454]: time="2025-01-29T11:54:07.372707470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:07.372854 containerd[1454]: time="2025-01-29T11:54:07.372835011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:07.397073 systemd[1]: Started cri-containerd-e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d.scope - libcontainer container e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d. 
Jan 29 11:54:07.415950 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:54:07.430307 systemd-networkd[1384]: cali6af79a63272: Link UP Jan 29 11:54:07.431558 systemd-networkd[1384]: cali6af79a63272: Gained carrier Jan 29 11:54:07.447512 containerd[1454]: time="2025-01-29T11:54:07.447453750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f474569cb-m47z4,Uid:bdf8891f-6b1d-4211-be98-e56b0b0de0ad,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d\"" Jan 29 11:54:07.449311 containerd[1454]: time="2025-01-29T11:54:07.449273621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:54:07.502381 containerd[1454]: 2025-01-29 11:54:07.183 [INFO][5765] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f474569cb--q4cqv-eth0 calico-apiserver-5f474569cb- calico-apiserver b6dfda52-b36b-4860-a295-437d50d36570 1222 0 2025-01-29 11:52:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f474569cb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f474569cb-q4cqv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6af79a63272 [] []}} ContainerID="d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff" Namespace="calico-apiserver" Pod="calico-apiserver-5f474569cb-q4cqv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f474569cb--q4cqv-" Jan 29 11:54:07.502381 containerd[1454]: 2025-01-29 11:54:07.184 [INFO][5765] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff" Namespace="calico-apiserver" Pod="calico-apiserver-5f474569cb-q4cqv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f474569cb--q4cqv-eth0" Jan 29 11:54:07.502381 containerd[1454]: 2025-01-29 11:54:07.213 [INFO][5782] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff" HandleID="k8s-pod-network.d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff" Workload="localhost-k8s-calico--apiserver--5f474569cb--q4cqv-eth0" Jan 29 11:54:07.502381 containerd[1454]: 2025-01-29 11:54:07.388 [INFO][5782] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff" HandleID="k8s-pod-network.d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff" Workload="localhost-k8s-calico--apiserver--5f474569cb--q4cqv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309930), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f474569cb-q4cqv", "timestamp":"2025-01-29 11:54:07.213273812 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:54:07.502381 containerd[1454]: 2025-01-29 11:54:07.388 [INFO][5782] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 29 11:54:07.502381 containerd[1454]: 2025-01-29 11:54:07.388 [INFO][5782] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:54:07.502381 containerd[1454]: 2025-01-29 11:54:07.388 [INFO][5782] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:54:07.502381 containerd[1454]: 2025-01-29 11:54:07.391 [INFO][5782] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff" host="localhost" Jan 29 11:54:07.502381 containerd[1454]: 2025-01-29 11:54:07.396 [INFO][5782] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:54:07.502381 containerd[1454]: 2025-01-29 11:54:07.402 [INFO][5782] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:54:07.502381 containerd[1454]: 2025-01-29 11:54:07.405 [INFO][5782] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:54:07.502381 containerd[1454]: 2025-01-29 11:54:07.408 [INFO][5782] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:54:07.502381 containerd[1454]: 2025-01-29 11:54:07.408 [INFO][5782] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff" host="localhost" Jan 29 11:54:07.502381 containerd[1454]: 2025-01-29 11:54:07.410 [INFO][5782] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff Jan 29 11:54:07.502381 containerd[1454]: 2025-01-29 11:54:07.414 [INFO][5782] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff" host="localhost" Jan 29 11:54:07.502381 containerd[1454]: 2025-01-29 11:54:07.423 [INFO][5782] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff" host="localhost" Jan 29 11:54:07.502381 containerd[1454]: 2025-01-29 11:54:07.424 [INFO][5782] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff" host="localhost" Jan 29 11:54:07.502381 containerd[1454]: 2025-01-29 11:54:07.424 [INFO][5782] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
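
Each assignment in this log brackets its work with "About to acquire host-wide IPAM lock" / "Acquired" / "Released", which is why the two apiserver sandboxes being set up concurrently claim distinct addresses (.133 and .134) instead of racing for the same one. A minimal sketch of that serialization, assuming a plain mutex stands in for Calico's host-wide lock:

```go
package main

import (
	"fmt"
	"sync"
)

type ipam struct {
	mu   sync.Mutex // stands in for the host-wide IPAM lock
	next int        // next free host offset in 192.168.88.128/26
}

func (i *ipam) assign(pod string) string {
	i.mu.Lock() // "Acquired host-wide IPAM lock."
	defer i.mu.Unlock()
	addr := fmt.Sprintf("192.168.88.%d/26", 128+i.next)
	i.next++
	return addr // lock released on return: "Released host-wide IPAM lock."
}

func main() {
	am := &ipam{next: 5} // assume .128 through .132 are already taken, as in this log
	var wg sync.WaitGroup
	for _, pod := range []string{"calico-apiserver-5f474569cb-m47z4", "calico-apiserver-5f474569cb-q4cqv"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			fmt.Println(p, "->", am.assign(p))
		}(pod)
	}
	wg.Wait() // the two pods get .133 and .134 in some order, never the same address
}
```
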
Jan 29 11:54:07.502381 containerd[1454]: 2025-01-29 11:54:07.424 [INFO][5782] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff" HandleID="k8s-pod-network.d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff" Workload="localhost-k8s-calico--apiserver--5f474569cb--q4cqv-eth0" Jan 29 11:54:07.503137 containerd[1454]: 2025-01-29 11:54:07.427 [INFO][5765] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff" Namespace="calico-apiserver" Pod="calico-apiserver-5f474569cb-q4cqv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f474569cb--q4cqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f474569cb--q4cqv-eth0", GenerateName:"calico-apiserver-5f474569cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6dfda52-b36b-4860-a295-437d50d36570", ResourceVersion:"1222", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f474569cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f474569cb-q4cqv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6af79a63272", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:54:07.503137 containerd[1454]: 2025-01-29 11:54:07.427 [INFO][5765] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff" Namespace="calico-apiserver" Pod="calico-apiserver-5f474569cb-q4cqv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f474569cb--q4cqv-eth0" Jan 29 11:54:07.503137 containerd[1454]: 2025-01-29 11:54:07.427 [INFO][5765] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6af79a63272 ContainerID="d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff" Namespace="calico-apiserver" Pod="calico-apiserver-5f474569cb-q4cqv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f474569cb--q4cqv-eth0" Jan 29 11:54:07.503137 containerd[1454]: 2025-01-29 11:54:07.434 [INFO][5765] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff" Namespace="calico-apiserver" Pod="calico-apiserver-5f474569cb-q4cqv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f474569cb--q4cqv-eth0" Jan 29 11:54:07.503137 containerd[1454]: 2025-01-29 11:54:07.435 [INFO][5765] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff" Namespace="calico-apiserver" Pod="calico-apiserver-5f474569cb-q4cqv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f474569cb--q4cqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f474569cb--q4cqv-eth0", GenerateName:"calico-apiserver-5f474569cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6dfda52-b36b-4860-a295-437d50d36570", ResourceVersion:"1222", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f474569cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff", Pod:"calico-apiserver-5f474569cb-q4cqv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6af79a63272", MAC:"1e:74:d3:1e:ad:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:54:07.503137 containerd[1454]: 2025-01-29 11:54:07.498 [INFO][5765] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff" Namespace="calico-apiserver" Pod="calico-apiserver-5f474569cb-q4cqv" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f474569cb--q4cqv-eth0" Jan 29 11:54:07.528887 containerd[1454]: time="2025-01-29T11:54:07.528669736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:54:07.528887 containerd[1454]: time="2025-01-29T11:54:07.528740751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:54:07.528887 containerd[1454]: time="2025-01-29T11:54:07.528755178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:07.531719 containerd[1454]: time="2025-01-29T11:54:07.528895454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:54:07.554290 systemd[1]: Started cri-containerd-d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff.scope - libcontainer container d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff. 
Jan 29 11:54:07.571384 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:54:07.602808 containerd[1454]: time="2025-01-29T11:54:07.602736539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f474569cb-q4cqv,Uid:b6dfda52-b36b-4860-a295-437d50d36570,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff\"" Jan 29 11:54:09.181135 systemd-networkd[1384]: cali4f3ab3f7321: Gained IPv6LL Jan 29 11:54:09.308972 systemd-networkd[1384]: cali6af79a63272: Gained IPv6LL Jan 29 11:54:10.797271 containerd[1454]: time="2025-01-29T11:54:10.797186790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:10.798079 containerd[1454]: time="2025-01-29T11:54:10.798005030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 29 11:54:10.799614 containerd[1454]: time="2025-01-29T11:54:10.799565577Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:10.802703 containerd[1454]: time="2025-01-29T11:54:10.802670619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:10.803540 containerd[1454]: time="2025-01-29T11:54:10.803483630Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.354175535s" Jan 29 11:54:10.803540 containerd[1454]: time="2025-01-29T11:54:10.803539495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 11:54:10.804624 containerd[1454]: time="2025-01-29T11:54:10.804604573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:54:10.806065 containerd[1454]: time="2025-01-29T11:54:10.806039191Z" level=info msg="CreateContainer within sandbox \"e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:54:10.824402 containerd[1454]: time="2025-01-29T11:54:10.824201380Z" level=info msg="CreateContainer within sandbox \"e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"83ce843151bf93d9277006cc240a87a59746116c5101b8fb35b7b2479f9f7b09\"" Jan 29 11:54:10.825253 containerd[1454]: time="2025-01-29T11:54:10.825155808Z" level=info msg="StartContainer for \"83ce843151bf93d9277006cc240a87a59746116c5101b8fb35b7b2479f9f7b09\"" Jan 29 11:54:10.901032 systemd[1]: Started cri-containerd-83ce843151bf93d9277006cc240a87a59746116c5101b8fb35b7b2479f9f7b09.scope - libcontainer container 83ce843151bf93d9277006cc240a87a59746116c5101b8fb35b7b2479f9f7b09. 
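
The first apiserver image pull above reads 42,001,404 bytes in 3.354175535s; the second pull of the same tag (a few records below) reads only 77 bytes in 863ms, most likely because every layer is already in the content store and only the manifest is re-resolved. A quick check of the effective transfer rate, using values copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied verbatim from the PullImage log lines.
	bytesRead := 42001404.0
	dur, _ := time.ParseDuration("3.354175535s")
	fmt.Printf("first pull: %.1f MiB/s\n", bytesRead/dur.Seconds()/(1<<20))

	cached, _ := time.ParseDuration("863.686326ms")
	fmt.Printf("cached re-pull read 77 bytes in %v (manifest check only, layers cached)\n", cached)
}
```
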
Jan 29 11:54:11.044223 containerd[1454]: time="2025-01-29T11:54:11.044063462Z" level=info msg="StartContainer for \"83ce843151bf93d9277006cc240a87a59746116c5101b8fb35b7b2479f9f7b09\" returns successfully" Jan 29 11:54:11.573163 kubelet[2506]: I0129 11:54:11.571995 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f474569cb-m47z4" podStartSLOduration=77.216311455 podStartE2EDuration="1m20.571974979s" podCreationTimestamp="2025-01-29 11:52:51 +0000 UTC" firstStartedPulling="2025-01-29 11:54:07.448765347 +0000 UTC m=+89.722105576" lastFinishedPulling="2025-01-29 11:54:10.804428871 +0000 UTC m=+93.077769100" observedRunningTime="2025-01-29 11:54:11.569891823 +0000 UTC m=+93.843232052" watchObservedRunningTime="2025-01-29 11:54:11.571974979 +0000 UTC m=+93.845315198" Jan 29 11:54:11.636484 containerd[1454]: time="2025-01-29T11:54:11.636380528Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:54:11.665424 containerd[1454]: time="2025-01-29T11:54:11.665335472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 11:54:11.668951 containerd[1454]: time="2025-01-29T11:54:11.668387502Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 863.686326ms" Jan 29 11:54:11.668951 containerd[1454]: time="2025-01-29T11:54:11.668439280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 11:54:11.673596 containerd[1454]: time="2025-01-29T11:54:11.673556433Z" level=info msg="CreateContainer within sandbox \"d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:54:11.741268 containerd[1454]: time="2025-01-29T11:54:11.741206046Z" level=info msg="CreateContainer within sandbox \"d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5371e0018598756a55c658c282d3049b0e84398d76cc4e6ba47da952cdbbe597\"" Jan 29 11:54:11.742335 containerd[1454]: time="2025-01-29T11:54:11.742189659Z" level=info msg="StartContainer for \"5371e0018598756a55c658c282d3049b0e84398d76cc4e6ba47da952cdbbe597\"" Jan 29 11:54:11.761669 systemd[1]: Started sshd@23-10.0.0.52:22-10.0.0.1:42060.service - OpenSSH per-connection server daemon (10.0.0.1:42060). Jan 29 11:54:11.775472 systemd[1]: Started cri-containerd-5371e0018598756a55c658c282d3049b0e84398d76cc4e6ba47da952cdbbe597.scope - libcontainer container 5371e0018598756a55c658c282d3049b0e84398d76cc4e6ba47da952cdbbe597. Jan 29 11:54:11.830012 sshd[5963]: Accepted publickey for core from 10.0.0.1 port 42060 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:54:11.832612 sshd[5963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:54:11.841924 systemd-logind[1439]: New session 24 of user core. Jan 29 11:54:11.845177 systemd[1]: Started session-24.scope - Session 24 of User core. 
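
The startup record above for calico-apiserver-5f474569cb-m47z4 decomposes exactly: the E2E duration (watchObservedRunningTime minus podCreationTimestamp) is 80.571974979s, the image-pull window (lastFinishedPulling minus firstStartedPulling) is 3.355663524s, and podStartSLOduration is their difference, 77.216311455s, i.e. startup time with image pulling excluded. The arithmetic, checked against the logged timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func ts(s string) time.Time {
	// The log prints Go's default time format.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := ts("2025-01-29 11:52:51 +0000 UTC")
	firstPull := ts("2025-01-29 11:54:07.448765347 +0000 UTC")
	lastPull := ts("2025-01-29 11:54:10.804428871 +0000 UTC")
	running := ts("2025-01-29 11:54:11.571974979 +0000 UTC")

	e2e := running.Sub(created)
	pull := lastPull.Sub(firstPull)
	fmt.Println("E2E:", e2e)      // 1m20.571974979s, as logged
	fmt.Println("SLO:", e2e-pull) // 1m17.216311455s = 77.216311455s, as logged
}
```
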
Jan 29 11:54:11.889051 containerd[1454]: time="2025-01-29T11:54:11.888973620Z" level=info msg="StartContainer for \"5371e0018598756a55c658c282d3049b0e84398d76cc4e6ba47da952cdbbe597\" returns successfully" Jan 29 11:54:12.003688 sshd[5963]: pam_unix(sshd:session): session closed for user core Jan 29 11:54:12.008884 systemd[1]: sshd@23-10.0.0.52:22-10.0.0.1:42060.service: Deactivated successfully. Jan 29 11:54:12.012616 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 11:54:12.013624 systemd-logind[1439]: Session 24 logged out. Waiting for processes to exit. Jan 29 11:54:12.015583 systemd-logind[1439]: Removed session 24. Jan 29 11:54:12.503562 kubelet[2506]: I0129 11:54:12.503443 2506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f474569cb-q4cqv" podStartSLOduration=77.435776947 podStartE2EDuration="1m21.503414451s" podCreationTimestamp="2025-01-29 11:52:51 +0000 UTC" firstStartedPulling="2025-01-29 11:54:07.604094954 +0000 UTC m=+89.877435183" lastFinishedPulling="2025-01-29 11:54:11.671732468 +0000 UTC m=+93.945072687" observedRunningTime="2025-01-29 11:54:12.502376785 +0000 UTC m=+94.775717014" watchObservedRunningTime="2025-01-29 11:54:12.503414451 +0000 UTC m=+94.776754690" Jan 29 11:54:17.015890 systemd[1]: Started sshd@24-10.0.0.52:22-10.0.0.1:42062.service - OpenSSH per-connection server daemon (10.0.0.1:42062). Jan 29 11:54:17.087714 sshd[6017]: Accepted publickey for core from 10.0.0.1 port 42062 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:54:17.089586 sshd[6017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:54:17.093761 systemd-logind[1439]: New session 25 of user core. Jan 29 11:54:17.100922 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 29 11:54:17.235920 sshd[6017]: pam_unix(sshd:session): session closed for user core Jan 29 11:54:17.247349 systemd[1]: sshd@24-10.0.0.52:22-10.0.0.1:42062.service: Deactivated successfully. Jan 29 11:54:17.249601 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 11:54:17.251973 systemd-logind[1439]: Session 25 logged out. Waiting for processes to exit. Jan 29 11:54:17.261222 systemd[1]: Started sshd@25-10.0.0.52:22-10.0.0.1:42078.service - OpenSSH per-connection server daemon (10.0.0.1:42078). Jan 29 11:54:17.262484 systemd-logind[1439]: Removed session 25. Jan 29 11:54:17.295520 sshd[6031]: Accepted publickey for core from 10.0.0.1 port 42078 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:54:17.297634 sshd[6031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:54:17.302347 systemd-logind[1439]: New session 26 of user core. Jan 29 11:54:17.308939 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 29 11:54:17.633315 sshd[6031]: pam_unix(sshd:session): session closed for user core Jan 29 11:54:17.646342 systemd[1]: sshd@25-10.0.0.52:22-10.0.0.1:42078.service: Deactivated successfully. Jan 29 11:54:17.648752 systemd[1]: session-26.scope: Deactivated successfully. Jan 29 11:54:17.650470 systemd-logind[1439]: Session 26 logged out. Waiting for processes to exit. Jan 29 11:54:17.658353 systemd[1]: Started sshd@26-10.0.0.52:22-10.0.0.1:42086.service - OpenSSH per-connection server daemon (10.0.0.1:42086). Jan 29 11:54:17.659472 systemd-logind[1439]: Removed session 26. 
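
Sessions 24 through 29 in this stretch each follow the same systemd-logind lifecycle: "New session N of user core", then "Session N logged out", then "Removed session N". A small sketch that pairs those lines to measure session length; the regex and timestamp format are inferred from the lines above, and the embedded sample is copied from this log:

```go
package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
	"time"
)

var (
	reNew     = regexp.MustCompile(`^(\S+ \d+ [\d:.]+) .*New session (\d+) of user`)
	reRemoved = regexp.MustCompile(`^(\S+ \d+ [\d:.]+) .*Removed session (\d+)\.`)
)

func parse(stamp string) time.Time {
	// Journal-style "Jan 29 11:54:11.841924" timestamps; the year is absent,
	// but durations within one day are still exact.
	t, _ := time.Parse("Jan 2 15:04:05.999999", stamp)
	return t
}

func main() {
	log := `Jan 29 11:54:11.841924 systemd-logind[1439]: New session 24 of user core.
Jan 29 11:54:12.015583 systemd-logind[1439]: Removed session 24.`
	opened := map[string]time.Time{}
	sc := bufio.NewScanner(strings.NewReader(log))
	for sc.Scan() {
		if m := reNew.FindStringSubmatch(sc.Text()); m != nil {
			opened[m[2]] = parse(m[1])
		} else if m := reRemoved.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("session %s lasted %v\n", m[2], parse(m[1]).Sub(opened[m[2]]))
		}
	}
}
```
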
Jan 29 11:54:17.691904 sshd[6045]: Accepted publickey for core from 10.0.0.1 port 42086 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:54:17.693586 sshd[6045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:54:17.698299 systemd-logind[1439]: New session 27 of user core. Jan 29 11:54:17.702993 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 29 11:54:19.450957 sshd[6045]: pam_unix(sshd:session): session closed for user core Jan 29 11:54:19.470187 systemd[1]: Started sshd@27-10.0.0.52:22-10.0.0.1:42094.service - OpenSSH per-connection server daemon (10.0.0.1:42094). Jan 29 11:54:19.470949 systemd[1]: sshd@26-10.0.0.52:22-10.0.0.1:42086.service: Deactivated successfully. Jan 29 11:54:19.479210 systemd[1]: session-27.scope: Deactivated successfully. Jan 29 11:54:19.480474 systemd-logind[1439]: Session 27 logged out. Waiting for processes to exit. Jan 29 11:54:19.482310 systemd-logind[1439]: Removed session 27. Jan 29 11:54:19.512126 sshd[6087]: Accepted publickey for core from 10.0.0.1 port 42094 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:54:19.513833 sshd[6087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:54:19.518103 systemd-logind[1439]: New session 28 of user core. Jan 29 11:54:19.532083 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 29 11:54:19.796456 sshd[6087]: pam_unix(sshd:session): session closed for user core Jan 29 11:54:19.809298 systemd[1]: sshd@27-10.0.0.52:22-10.0.0.1:42094.service: Deactivated successfully. Jan 29 11:54:19.812635 systemd[1]: session-28.scope: Deactivated successfully. Jan 29 11:54:19.815933 systemd-logind[1439]: Session 28 logged out. Waiting for processes to exit. Jan 29 11:54:19.826227 systemd[1]: Started sshd@28-10.0.0.52:22-10.0.0.1:42104.service - OpenSSH per-connection server daemon (10.0.0.1:42104). Jan 29 11:54:19.827406 systemd-logind[1439]: Removed session 28. Jan 29 11:54:19.857939 sshd[6103]: Accepted publickey for core from 10.0.0.1 port 42104 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:54:19.860269 sshd[6103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:54:19.865975 systemd-logind[1439]: New session 29 of user core. Jan 29 11:54:19.870935 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 29 11:54:19.994309 sshd[6103]: pam_unix(sshd:session): session closed for user core Jan 29 11:54:19.999891 systemd[1]: sshd@28-10.0.0.52:22-10.0.0.1:42104.service: Deactivated successfully. Jan 29 11:54:20.002429 systemd[1]: session-29.scope: Deactivated successfully. Jan 29 11:54:20.003371 systemd-logind[1439]: Session 29 logged out. Waiting for processes to exit. Jan 29 11:54:20.004567 systemd-logind[1439]: Removed session 29. Jan 29 11:54:22.415531 kubelet[2506]: E0129 11:54:22.415459 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:25.006219 systemd[1]: Started sshd@29-10.0.0.52:22-10.0.0.1:39792.service - OpenSSH per-connection server daemon (10.0.0.1:39792). 
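[Annotation] The kubelet "Nameserver limits exceeded" error above reflects the classic resolv.conf limit of three nameservers: when a pod would receive more, kubelet keeps the first three and logs the applied line, exactly as shown. A hedged sketch of that clamping; the constant and function names are invented for illustration (kubelet's real logic lives in its dns.go):

package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the classic glibc resolv.conf limit that kubelet enforces.
const maxNameservers = 3

// clampNameservers keeps at most the first three entries and reports truncation.
func clampNameservers(ns []string) ([]string, bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	applied, truncated := clampNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"})
	if truncated {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(applied, " "))
	}
}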
Jan 29 11:54:25.041770 sshd[6162]: Accepted publickey for core from 10.0.0.1 port 39792 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:54:25.043708 sshd[6162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:54:25.048568 systemd-logind[1439]: New session 30 of user core. Jan 29 11:54:25.056938 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 29 11:54:25.165729 sshd[6162]: pam_unix(sshd:session): session closed for user core Jan 29 11:54:25.170629 systemd[1]: sshd@29-10.0.0.52:22-10.0.0.1:39792.service: Deactivated successfully. Jan 29 11:54:25.173392 systemd[1]: session-30.scope: Deactivated successfully. Jan 29 11:54:25.174168 systemd-logind[1439]: Session 30 logged out. Waiting for processes to exit. Jan 29 11:54:25.175220 systemd-logind[1439]: Removed session 30. Jan 29 11:54:25.822518 kubelet[2506]: E0129 11:54:25.822479 2506 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:54:30.182250 systemd[1]: Started sshd@30-10.0.0.52:22-10.0.0.1:39804.service - OpenSSH per-connection server daemon (10.0.0.1:39804). Jan 29 11:54:30.218899 sshd[6181]: Accepted publickey for core from 10.0.0.1 port 39804 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:54:30.220952 sshd[6181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:54:30.226663 systemd-logind[1439]: New session 31 of user core. Jan 29 11:54:30.236009 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 29 11:54:30.355018 sshd[6181]: pam_unix(sshd:session): session closed for user core Jan 29 11:54:30.359040 systemd[1]: sshd@30-10.0.0.52:22-10.0.0.1:39804.service: Deactivated successfully. Jan 29 11:54:30.361837 systemd[1]: session-31.scope: Deactivated successfully. Jan 29 11:54:30.364532 systemd-logind[1439]: Session 31 logged out. Waiting for processes to exit. Jan 29 11:54:30.365969 systemd-logind[1439]: Removed session 31. Jan 29 11:54:35.376751 systemd[1]: Started sshd@31-10.0.0.52:22-10.0.0.1:48786.service - OpenSSH per-connection server daemon (10.0.0.1:48786). Jan 29 11:54:35.415424 sshd[6195]: Accepted publickey for core from 10.0.0.1 port 48786 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:54:35.417770 sshd[6195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:54:35.423865 systemd-logind[1439]: New session 32 of user core. Jan 29 11:54:35.434066 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 29 11:54:35.549191 sshd[6195]: pam_unix(sshd:session): session closed for user core Jan 29 11:54:35.553478 systemd[1]: sshd@31-10.0.0.52:22-10.0.0.1:48786.service: Deactivated successfully. Jan 29 11:54:35.556308 systemd[1]: session-32.scope: Deactivated successfully. Jan 29 11:54:35.558939 systemd-logind[1439]: Session 32 logged out. Waiting for processes to exit. Jan 29 11:54:35.560666 systemd-logind[1439]: Removed session 32. Jan 29 11:54:37.814579 containerd[1454]: time="2025-01-29T11:54:37.814523802Z" level=info msg="StopPodSandbox for \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\"" Jan 29 11:54:37.904993 containerd[1454]: 2025-01-29 11:54:37.859 [WARNING][6224] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f474569cb--m47z4-eth0", GenerateName:"calico-apiserver-5f474569cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"bdf8891f-6b1d-4211-be98-e56b0b0de0ad", ResourceVersion:"1255", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f474569cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d", Pod:"calico-apiserver-5f474569cb-m47z4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f3ab3f7321", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:54:37.904993 containerd[1454]: 2025-01-29 11:54:37.860 [INFO][6224] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Jan 29 11:54:37.904993 containerd[1454]: 2025-01-29 11:54:37.860 [INFO][6224] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" iface="eth0" netns="" Jan 29 11:54:37.904993 containerd[1454]: 2025-01-29 11:54:37.860 [INFO][6224] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Jan 29 11:54:37.904993 containerd[1454]: 2025-01-29 11:54:37.860 [INFO][6224] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Jan 29 11:54:37.904993 containerd[1454]: 2025-01-29 11:54:37.892 [INFO][6234] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" HandleID="k8s-pod-network.34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Workload="localhost-k8s-calico--apiserver--5f474569cb--m47z4-eth0" Jan 29 11:54:37.904993 containerd[1454]: 2025-01-29 11:54:37.892 [INFO][6234] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:54:37.904993 containerd[1454]: 2025-01-29 11:54:37.892 [INFO][6234] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:54:37.904993 containerd[1454]: 2025-01-29 11:54:37.898 [WARNING][6234] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" HandleID="k8s-pod-network.34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Workload="localhost-k8s-calico--apiserver--5f474569cb--m47z4-eth0" Jan 29 11:54:37.904993 containerd[1454]: 2025-01-29 11:54:37.898 [INFO][6234] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" HandleID="k8s-pod-network.34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Workload="localhost-k8s-calico--apiserver--5f474569cb--m47z4-eth0" Jan 29 11:54:37.904993 containerd[1454]: 2025-01-29 11:54:37.899 [INFO][6234] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:54:37.904993 containerd[1454]: 2025-01-29 11:54:37.902 [INFO][6224] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Jan 29 11:54:37.905848 containerd[1454]: time="2025-01-29T11:54:37.905032805Z" level=info msg="TearDown network for sandbox \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\" successfully" Jan 29 11:54:37.905848 containerd[1454]: time="2025-01-29T11:54:37.905073993Z" level=info msg="StopPodSandbox for \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\" returns successfully" Jan 29 11:54:37.905848 containerd[1454]: time="2025-01-29T11:54:37.905723087Z" level=info msg="RemovePodSandbox for \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\"" Jan 29 11:54:37.907971 containerd[1454]: time="2025-01-29T11:54:37.907932145Z" level=info msg="Forcibly stopping sandbox \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\"" Jan 29 11:54:37.989618 containerd[1454]: 2025-01-29 11:54:37.950 [WARNING][6257] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f474569cb--m47z4-eth0", GenerateName:"calico-apiserver-5f474569cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"bdf8891f-6b1d-4211-be98-e56b0b0de0ad", ResourceVersion:"1255", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 52, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f474569cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e19cbadac29b9f4238ec65b5683ee84fc3e618f57f47a09596d8121efd33136d", Pod:"calico-apiserver-5f474569cb-m47z4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f3ab3f7321", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:54:37.989618 containerd[1454]: 2025-01-29 11:54:37.950 [INFO][6257] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Jan 29 11:54:37.989618 containerd[1454]: 2025-01-29 11:54:37.950 [INFO][6257] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" iface="eth0" netns="" Jan 29 11:54:37.989618 containerd[1454]: 2025-01-29 11:54:37.950 [INFO][6257] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Jan 29 11:54:37.989618 containerd[1454]: 2025-01-29 11:54:37.950 [INFO][6257] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Jan 29 11:54:37.989618 containerd[1454]: 2025-01-29 11:54:37.977 [INFO][6270] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" HandleID="k8s-pod-network.34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Workload="localhost-k8s-calico--apiserver--5f474569cb--m47z4-eth0" Jan 29 11:54:37.989618 containerd[1454]: 2025-01-29 11:54:37.977 [INFO][6270] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:54:37.989618 containerd[1454]: 2025-01-29 11:54:37.977 [INFO][6270] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:54:37.989618 containerd[1454]: 2025-01-29 11:54:37.983 [WARNING][6270] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" HandleID="k8s-pod-network.34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Workload="localhost-k8s-calico--apiserver--5f474569cb--m47z4-eth0" Jan 29 11:54:37.989618 containerd[1454]: 2025-01-29 11:54:37.983 [INFO][6270] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" HandleID="k8s-pod-network.34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Workload="localhost-k8s-calico--apiserver--5f474569cb--m47z4-eth0" Jan 29 11:54:37.989618 containerd[1454]: 2025-01-29 11:54:37.984 [INFO][6270] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:54:37.989618 containerd[1454]: 2025-01-29 11:54:37.987 [INFO][6257] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf" Jan 29 11:54:37.990121 containerd[1454]: time="2025-01-29T11:54:37.989670522Z" level=info msg="TearDown network for sandbox \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\" successfully" Jan 29 11:54:38.004035 containerd[1454]: time="2025-01-29T11:54:38.003983283Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:54:38.004125 containerd[1454]: time="2025-01-29T11:54:38.004055860Z" level=info msg="RemovePodSandbox \"34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf\" returns successfully" Jan 29 11:54:38.004742 containerd[1454]: time="2025-01-29T11:54:38.004705807Z" level=info msg="StopPodSandbox for \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\"" Jan 29 11:54:38.075039 containerd[1454]: 2025-01-29 11:54:38.042 [WARNING][6293] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--m4sk6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"37021594-588c-4d5f-936f-12a90ea44463", ResourceVersion:"1197", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 52, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593", Pod:"csi-node-driver-m4sk6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3fdfabe24d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:54:38.075039 containerd[1454]: 2025-01-29 11:54:38.042 [INFO][6293] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" Jan 29 11:54:38.075039 containerd[1454]: 2025-01-29 11:54:38.042 [INFO][6293] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" iface="eth0" netns="" Jan 29 11:54:38.075039 containerd[1454]: 2025-01-29 11:54:38.042 [INFO][6293] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" Jan 29 11:54:38.075039 containerd[1454]: 2025-01-29 11:54:38.042 [INFO][6293] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" Jan 29 11:54:38.075039 containerd[1454]: 2025-01-29 11:54:38.062 [INFO][6300] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" HandleID="k8s-pod-network.35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" Workload="localhost-k8s-csi--node--driver--m4sk6-eth0" Jan 29 11:54:38.075039 containerd[1454]: 2025-01-29 11:54:38.062 [INFO][6300] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:54:38.075039 containerd[1454]: 2025-01-29 11:54:38.062 [INFO][6300] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:54:38.075039 containerd[1454]: 2025-01-29 11:54:38.067 [WARNING][6300] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" HandleID="k8s-pod-network.35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" Workload="localhost-k8s-csi--node--driver--m4sk6-eth0" Jan 29 11:54:38.075039 containerd[1454]: 2025-01-29 11:54:38.067 [INFO][6300] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" HandleID="k8s-pod-network.35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" Workload="localhost-k8s-csi--node--driver--m4sk6-eth0" Jan 29 11:54:38.075039 containerd[1454]: 2025-01-29 11:54:38.069 [INFO][6300] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:54:38.075039 containerd[1454]: 2025-01-29 11:54:38.072 [INFO][6293] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" Jan 29 11:54:38.075039 containerd[1454]: time="2025-01-29T11:54:38.074976997Z" level=info msg="TearDown network for sandbox \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\" successfully" Jan 29 11:54:38.075039 containerd[1454]: time="2025-01-29T11:54:38.075011471Z" level=info msg="StopPodSandbox for \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\" returns successfully" Jan 29 11:54:38.075739 containerd[1454]: time="2025-01-29T11:54:38.075696734Z" level=info msg="RemovePodSandbox for \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\"" Jan 29 11:54:38.075739 containerd[1454]: time="2025-01-29T11:54:38.075728263Z" level=info msg="Forcibly stopping sandbox \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\"" Jan 29 11:54:38.162824 containerd[1454]: 2025-01-29 11:54:38.117 [WARNING][6324] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--m4sk6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"37021594-588c-4d5f-936f-12a90ea44463", ResourceVersion:"1197", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 52, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3139b3deeed0c21bb0b8e302b67758581ce9777d6335fd653de8035f4dc00593", Pod:"csi-node-driver-m4sk6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3fdfabe24d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:54:38.162824 containerd[1454]: 2025-01-29 11:54:38.118 [INFO][6324] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" Jan 29 11:54:38.162824 containerd[1454]: 2025-01-29 11:54:38.118 [INFO][6324] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" iface="eth0" netns="" Jan 29 11:54:38.162824 containerd[1454]: 2025-01-29 11:54:38.118 [INFO][6324] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" Jan 29 11:54:38.162824 containerd[1454]: 2025-01-29 11:54:38.118 [INFO][6324] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" Jan 29 11:54:38.162824 containerd[1454]: 2025-01-29 11:54:38.144 [INFO][6332] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" HandleID="k8s-pod-network.35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" Workload="localhost-k8s-csi--node--driver--m4sk6-eth0" Jan 29 11:54:38.162824 containerd[1454]: 2025-01-29 11:54:38.144 [INFO][6332] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:54:38.162824 containerd[1454]: 2025-01-29 11:54:38.144 [INFO][6332] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:54:38.162824 containerd[1454]: 2025-01-29 11:54:38.151 [WARNING][6332] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" HandleID="k8s-pod-network.35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" Workload="localhost-k8s-csi--node--driver--m4sk6-eth0" Jan 29 11:54:38.162824 containerd[1454]: 2025-01-29 11:54:38.151 [INFO][6332] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" HandleID="k8s-pod-network.35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" Workload="localhost-k8s-csi--node--driver--m4sk6-eth0" Jan 29 11:54:38.162824 containerd[1454]: 2025-01-29 11:54:38.153 [INFO][6332] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:54:38.162824 containerd[1454]: 2025-01-29 11:54:38.156 [INFO][6324] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb" Jan 29 11:54:38.162824 containerd[1454]: time="2025-01-29T11:54:38.159997499Z" level=info msg="TearDown network for sandbox \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\" successfully" Jan 29 11:54:38.164314 containerd[1454]: time="2025-01-29T11:54:38.164256504Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:54:38.164374 containerd[1454]: time="2025-01-29T11:54:38.164359047Z" level=info msg="RemovePodSandbox \"35e81f085cac1232d1c1f7c17e8d236d62f7c6b3aa99163b4e3f75dd4f139ddb\" returns successfully" Jan 29 11:54:38.165002 containerd[1454]: time="2025-01-29T11:54:38.164952687Z" level=info msg="StopPodSandbox for \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\"" Jan 29 11:54:38.242640 containerd[1454]: 2025-01-29 11:54:38.203 [WARNING][6355] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--65d8d589cb--k69s4-eth0", GenerateName:"calico-kube-controllers-65d8d589cb-", Namespace:"calico-system", SelfLink:"", UID:"3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a", ResourceVersion:"1166", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 52, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65d8d589cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475", Pod:"calico-kube-controllers-65d8d589cb-k69s4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali197fc212388", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:54:38.242640 containerd[1454]: 2025-01-29 11:54:38.203 [INFO][6355] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" Jan 29 11:54:38.242640 containerd[1454]: 2025-01-29 11:54:38.203 [INFO][6355] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" iface="eth0" netns="" Jan 29 11:54:38.242640 containerd[1454]: 2025-01-29 11:54:38.204 [INFO][6355] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" Jan 29 11:54:38.242640 containerd[1454]: 2025-01-29 11:54:38.204 [INFO][6355] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" Jan 29 11:54:38.242640 containerd[1454]: 2025-01-29 11:54:38.227 [INFO][6363] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" HandleID="k8s-pod-network.9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" Workload="localhost-k8s-calico--kube--controllers--65d8d589cb--k69s4-eth0" Jan 29 11:54:38.242640 containerd[1454]: 2025-01-29 11:54:38.228 [INFO][6363] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:54:38.242640 containerd[1454]: 2025-01-29 11:54:38.228 [INFO][6363] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:54:38.242640 containerd[1454]: 2025-01-29 11:54:38.234 [WARNING][6363] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" HandleID="k8s-pod-network.9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" Workload="localhost-k8s-calico--kube--controllers--65d8d589cb--k69s4-eth0" Jan 29 11:54:38.242640 containerd[1454]: 2025-01-29 11:54:38.234 [INFO][6363] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" HandleID="k8s-pod-network.9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" Workload="localhost-k8s-calico--kube--controllers--65d8d589cb--k69s4-eth0" Jan 29 11:54:38.242640 containerd[1454]: 2025-01-29 11:54:38.236 [INFO][6363] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:54:38.242640 containerd[1454]: 2025-01-29 11:54:38.239 [INFO][6355] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" Jan 29 11:54:38.243222 containerd[1454]: time="2025-01-29T11:54:38.242697154Z" level=info msg="TearDown network for sandbox \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\" successfully" Jan 29 11:54:38.243222 containerd[1454]: time="2025-01-29T11:54:38.242736509Z" level=info msg="StopPodSandbox for \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\" returns successfully" Jan 29 11:54:38.243525 containerd[1454]: time="2025-01-29T11:54:38.243473068Z" level=info msg="RemovePodSandbox for \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\"" Jan 29 11:54:38.243593 containerd[1454]: time="2025-01-29T11:54:38.243529866Z" level=info msg="Forcibly stopping sandbox \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\"" Jan 29 11:54:38.335347 containerd[1454]: 2025-01-29 11:54:38.282 [WARNING][6385] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--65d8d589cb--k69s4-eth0", GenerateName:"calico-kube-controllers-65d8d589cb-", Namespace:"calico-system", SelfLink:"", UID:"3a86a3c6-ce04-4ad6-b5ff-50ddd63f199a", ResourceVersion:"1166", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 52, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65d8d589cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"45e2bd968a1e2cfa03edaf46a240557181f8e690d5d49f55fe9cf3e1a0ef7475", Pod:"calico-kube-controllers-65d8d589cb-k69s4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali197fc212388", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:54:38.335347 containerd[1454]: 2025-01-29 11:54:38.283 [INFO][6385] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" Jan 29 11:54:38.335347 containerd[1454]: 2025-01-29 11:54:38.283 [INFO][6385] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" iface="eth0" netns="" Jan 29 11:54:38.335347 containerd[1454]: 2025-01-29 11:54:38.283 [INFO][6385] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" Jan 29 11:54:38.335347 containerd[1454]: 2025-01-29 11:54:38.283 [INFO][6385] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" Jan 29 11:54:38.335347 containerd[1454]: 2025-01-29 11:54:38.312 [INFO][6392] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" HandleID="k8s-pod-network.9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" Workload="localhost-k8s-calico--kube--controllers--65d8d589cb--k69s4-eth0" Jan 29 11:54:38.335347 containerd[1454]: 2025-01-29 11:54:38.312 [INFO][6392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:54:38.335347 containerd[1454]: 2025-01-29 11:54:38.312 [INFO][6392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:54:38.335347 containerd[1454]: 2025-01-29 11:54:38.328 [WARNING][6392] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" HandleID="k8s-pod-network.9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" Workload="localhost-k8s-calico--kube--controllers--65d8d589cb--k69s4-eth0" Jan 29 11:54:38.335347 containerd[1454]: 2025-01-29 11:54:38.328 [INFO][6392] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" HandleID="k8s-pod-network.9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" Workload="localhost-k8s-calico--kube--controllers--65d8d589cb--k69s4-eth0" Jan 29 11:54:38.335347 containerd[1454]: 2025-01-29 11:54:38.329 [INFO][6392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:54:38.335347 containerd[1454]: 2025-01-29 11:54:38.332 [INFO][6385] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62" Jan 29 11:54:38.335347 containerd[1454]: time="2025-01-29T11:54:38.335302435Z" level=info msg="TearDown network for sandbox \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\" successfully" Jan 29 11:54:38.378955 containerd[1454]: time="2025-01-29T11:54:38.378878077Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:54:38.378955 containerd[1454]: time="2025-01-29T11:54:38.378964420Z" level=info msg="RemovePodSandbox \"9997a084acfef29d135d353d39e4e230f49d2bd7629c27298267948350be9b62\" returns successfully" Jan 29 11:54:38.379483 containerd[1454]: time="2025-01-29T11:54:38.379451910Z" level=info msg="StopPodSandbox for \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\"" Jan 29 11:54:38.458194 containerd[1454]: 2025-01-29 11:54:38.419 [WARNING][6415] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hfp24-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"92902e69-b7b9-4835-bfaf-552ffb2affbd", ResourceVersion:"1199", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 52, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1", Pod:"coredns-6f6b679f8f-hfp24", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie57df460d32", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:54:38.458194 containerd[1454]: 2025-01-29 11:54:38.419 [INFO][6415] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Jan 29 11:54:38.458194 containerd[1454]: 2025-01-29 11:54:38.419 [INFO][6415] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" iface="eth0" netns="" Jan 29 11:54:38.458194 containerd[1454]: 2025-01-29 11:54:38.419 [INFO][6415] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Jan 29 11:54:38.458194 containerd[1454]: 2025-01-29 11:54:38.419 [INFO][6415] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Jan 29 11:54:38.458194 containerd[1454]: 2025-01-29 11:54:38.446 [INFO][6423] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" HandleID="k8s-pod-network.5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Workload="localhost-k8s-coredns--6f6b679f8f--hfp24-eth0" Jan 29 11:54:38.458194 containerd[1454]: 2025-01-29 11:54:38.446 [INFO][6423] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:54:38.458194 containerd[1454]: 2025-01-29 11:54:38.446 [INFO][6423] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:54:38.458194 containerd[1454]: 2025-01-29 11:54:38.452 [WARNING][6423] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" HandleID="k8s-pod-network.5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Workload="localhost-k8s-coredns--6f6b679f8f--hfp24-eth0" Jan 29 11:54:38.458194 containerd[1454]: 2025-01-29 11:54:38.452 [INFO][6423] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" HandleID="k8s-pod-network.5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Workload="localhost-k8s-coredns--6f6b679f8f--hfp24-eth0" Jan 29 11:54:38.458194 containerd[1454]: 2025-01-29 11:54:38.453 [INFO][6423] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:54:38.458194 containerd[1454]: 2025-01-29 11:54:38.455 [INFO][6415] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Jan 29 11:54:38.458753 containerd[1454]: time="2025-01-29T11:54:38.458237000Z" level=info msg="TearDown network for sandbox \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\" successfully" Jan 29 11:54:38.458753 containerd[1454]: time="2025-01-29T11:54:38.458265974Z" level=info msg="StopPodSandbox for \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\" returns successfully" Jan 29 11:54:38.458874 containerd[1454]: time="2025-01-29T11:54:38.458845037Z" level=info msg="RemovePodSandbox for \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\"" Jan 29 11:54:38.458911 containerd[1454]: time="2025-01-29T11:54:38.458877177Z" level=info msg="Forcibly stopping sandbox \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\"" Jan 29 11:54:38.532909 containerd[1454]: 2025-01-29 11:54:38.497 [WARNING][6446] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hfp24-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"92902e69-b7b9-4835-bfaf-552ffb2affbd", ResourceVersion:"1199", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 52, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7362cd104e044d986563e66b8e584cd50d1741f5d8f36fddf2f2a747290694a1", Pod:"coredns-6f6b679f8f-hfp24", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie57df460d32", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:54:38.532909 containerd[1454]: 2025-01-29 11:54:38.497 [INFO][6446] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Jan 29 11:54:38.532909 containerd[1454]: 2025-01-29 11:54:38.497 [INFO][6446] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" iface="eth0" netns="" Jan 29 11:54:38.532909 containerd[1454]: 2025-01-29 11:54:38.497 [INFO][6446] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Jan 29 11:54:38.532909 containerd[1454]: 2025-01-29 11:54:38.497 [INFO][6446] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Jan 29 11:54:38.532909 containerd[1454]: 2025-01-29 11:54:38.519 [INFO][6453] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" HandleID="k8s-pod-network.5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Workload="localhost-k8s-coredns--6f6b679f8f--hfp24-eth0" Jan 29 11:54:38.532909 containerd[1454]: 2025-01-29 11:54:38.520 [INFO][6453] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:54:38.532909 containerd[1454]: 2025-01-29 11:54:38.520 [INFO][6453] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:54:38.532909 containerd[1454]: 2025-01-29 11:54:38.525 [WARNING][6453] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" HandleID="k8s-pod-network.5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Workload="localhost-k8s-coredns--6f6b679f8f--hfp24-eth0" Jan 29 11:54:38.532909 containerd[1454]: 2025-01-29 11:54:38.525 [INFO][6453] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" HandleID="k8s-pod-network.5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Workload="localhost-k8s-coredns--6f6b679f8f--hfp24-eth0" Jan 29 11:54:38.532909 containerd[1454]: 2025-01-29 11:54:38.527 [INFO][6453] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:54:38.532909 containerd[1454]: 2025-01-29 11:54:38.530 [INFO][6446] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6" Jan 29 11:54:38.534126 containerd[1454]: time="2025-01-29T11:54:38.532962032Z" level=info msg="TearDown network for sandbox \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\" successfully" Jan 29 11:54:38.594518 containerd[1454]: time="2025-01-29T11:54:38.594363049Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:54:38.594518 containerd[1454]: time="2025-01-29T11:54:38.594455904Z" level=info msg="RemovePodSandbox \"5e3356804e12ebd900dac984b941ee459543d84a4983ac344ebf283e207bf8f6\" returns successfully" Jan 29 11:54:38.595298 containerd[1454]: time="2025-01-29T11:54:38.595253018Z" level=info msg="StopPodSandbox for \"cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692\"" Jan 29 11:54:38.595450 containerd[1454]: time="2025-01-29T11:54:38.595419482Z" level=info msg="TearDown network for sandbox \"cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692\" successfully" Jan 29 11:54:38.595518 containerd[1454]: time="2025-01-29T11:54:38.595492430Z" level=info msg="StopPodSandbox for \"cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692\" returns successfully" Jan 29 11:54:38.595930 containerd[1454]: time="2025-01-29T11:54:38.595907662Z" level=info msg="RemovePodSandbox for \"cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692\"" Jan 29 11:54:38.596051 containerd[1454]: time="2025-01-29T11:54:38.595931468Z" level=info msg="Forcibly stopping sandbox \"cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692\"" Jan 29 11:54:38.596051 containerd[1454]: time="2025-01-29T11:54:38.595981662Z" level=info msg="TearDown network for sandbox \"cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692\" successfully" Jan 29 11:54:38.675779 containerd[1454]: time="2025-01-29T11:54:38.675709311Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
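[Annotation] Each sandbox teardown above follows the same IPAM pattern: release the allocation by its CNI handleID; if nothing is recorded under that handle (the WARNING at ipam_plugin.go 429, expected here because these sandboxes are already gone), fall back to releasing by workload ID. A toy sketch of that fallback, with an invented in-memory store in place of Calico's datastore:

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found")

// store is a toy allocation map: handle or workload ID -> IP.
type store map[string]string

func (s store) release(id string) error {
	if _, ok := s[id]; !ok {
		return errNotFound
	}
	delete(s, id)
	return nil
}

// releaseForEndpoint mirrors the two-step release visible in the log:
// try the CNI handle first, then fall back to the workload ID.
func releaseForEndpoint(s store, handleID, workloadID string) {
	if err := s.release(handleID); errors.Is(err, errNotFound) {
		fmt.Println("Asked to release address but it doesn't exist. Ignoring")
		fmt.Println("Releasing address using workloadID")
		_ = s.release(workloadID)
	}
}

func main() {
	s := store{} // empty: the sandbox's allocation was already released
	releaseForEndpoint(s,
		"k8s-pod-network.34c2f92ec3a557e1c27291dc8b0c89728c767ac56ecc22cbce5c85fc26832cdf",
		"localhost-k8s-calico--apiserver--5f474569cb--m47z4-eth0")
}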
Jan 29 11:54:38.675779 containerd[1454]: time="2025-01-29T11:54:38.675817966Z" level=info msg="RemovePodSandbox \"cfe957384c2d89aeae62bf6619217622f54062d67ceb3b45fc54df9f2ad8c692\" returns successfully" Jan 29 11:54:38.676418 containerd[1454]: time="2025-01-29T11:54:38.676380537Z" level=info msg="StopPodSandbox for \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\"" Jan 29 11:54:38.758378 containerd[1454]: 2025-01-29 11:54:38.715 [WARNING][6476] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--wck8n-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"fd1af382-4da8-46d6-b100-8da54f486a77", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 52, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8", Pod:"coredns-6f6b679f8f-wck8n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4c59bd09cc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:54:38.758378 containerd[1454]: 2025-01-29 11:54:38.715 [INFO][6476] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" Jan 29 11:54:38.758378 containerd[1454]: 2025-01-29 11:54:38.715 [INFO][6476] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" iface="eth0" netns="" Jan 29 11:54:38.758378 containerd[1454]: 2025-01-29 11:54:38.715 [INFO][6476] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" Jan 29 11:54:38.758378 containerd[1454]: 2025-01-29 11:54:38.715 [INFO][6476] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" Jan 29 11:54:38.758378 containerd[1454]: 2025-01-29 11:54:38.746 [INFO][6484] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" HandleID="k8s-pod-network.972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" Workload="localhost-k8s-coredns--6f6b679f8f--wck8n-eth0" Jan 29 11:54:38.758378 containerd[1454]: 2025-01-29 11:54:38.746 [INFO][6484] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:54:38.758378 containerd[1454]: 2025-01-29 11:54:38.746 [INFO][6484] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:54:38.758378 containerd[1454]: 2025-01-29 11:54:38.751 [WARNING][6484] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" HandleID="k8s-pod-network.972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" Workload="localhost-k8s-coredns--6f6b679f8f--wck8n-eth0" Jan 29 11:54:38.758378 containerd[1454]: 2025-01-29 11:54:38.751 [INFO][6484] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" HandleID="k8s-pod-network.972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" Workload="localhost-k8s-coredns--6f6b679f8f--wck8n-eth0" Jan 29 11:54:38.758378 containerd[1454]: 2025-01-29 11:54:38.752 [INFO][6484] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:54:38.758378 containerd[1454]: 2025-01-29 11:54:38.755 [INFO][6476] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" Jan 29 11:54:38.758846 containerd[1454]: time="2025-01-29T11:54:38.758447609Z" level=info msg="TearDown network for sandbox \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\" successfully" Jan 29 11:54:38.758846 containerd[1454]: time="2025-01-29T11:54:38.758489378Z" level=info msg="StopPodSandbox for \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\" returns successfully" Jan 29 11:54:38.759132 containerd[1454]: time="2025-01-29T11:54:38.759099659Z" level=info msg="RemovePodSandbox for \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\"" Jan 29 11:54:38.759163 containerd[1454]: time="2025-01-29T11:54:38.759135717Z" level=info msg="Forcibly stopping sandbox \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\"" Jan 29 11:54:38.850311 containerd[1454]: 2025-01-29 11:54:38.797 [WARNING][6506] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--wck8n-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"fd1af382-4da8-46d6-b100-8da54f486a77", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 52, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"51ab68b5850b0e6a057c705e61953eb6b09527ebacde6e6266bce2dfe29734c8", Pod:"coredns-6f6b679f8f-wck8n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4c59bd09cc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:54:38.850311 containerd[1454]: 2025-01-29 11:54:38.797 [INFO][6506] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" Jan 29 11:54:38.850311 containerd[1454]: 2025-01-29 11:54:38.797 [INFO][6506] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" iface="eth0" netns="" Jan 29 11:54:38.850311 containerd[1454]: 2025-01-29 11:54:38.797 [INFO][6506] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" Jan 29 11:54:38.850311 containerd[1454]: 2025-01-29 11:54:38.797 [INFO][6506] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" Jan 29 11:54:38.850311 containerd[1454]: 2025-01-29 11:54:38.821 [INFO][6514] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" HandleID="k8s-pod-network.972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" Workload="localhost-k8s-coredns--6f6b679f8f--wck8n-eth0" Jan 29 11:54:38.850311 containerd[1454]: 2025-01-29 11:54:38.822 [INFO][6514] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:54:38.850311 containerd[1454]: 2025-01-29 11:54:38.822 [INFO][6514] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:54:38.850311 containerd[1454]: 2025-01-29 11:54:38.842 [WARNING][6514] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" HandleID="k8s-pod-network.972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" Workload="localhost-k8s-coredns--6f6b679f8f--wck8n-eth0"
Jan 29 11:54:38.850311 containerd[1454]: 2025-01-29 11:54:38.843 [INFO][6514] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" HandleID="k8s-pod-network.972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498" Workload="localhost-k8s-coredns--6f6b679f8f--wck8n-eth0"
Jan 29 11:54:38.850311 containerd[1454]: 2025-01-29 11:54:38.844 [INFO][6514] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 11:54:38.850311 containerd[1454]: 2025-01-29 11:54:38.847 [INFO][6506] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498"
Jan 29 11:54:38.850311 containerd[1454]: time="2025-01-29T11:54:38.850267729Z" level=info msg="TearDown network for sandbox \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\" successfully"
Jan 29 11:54:38.878004 containerd[1454]: time="2025-01-29T11:54:38.877946272Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:54:38.878100 containerd[1454]: time="2025-01-29T11:54:38.878025331Z" level=info msg="RemovePodSandbox \"972c74fa69f6901be7fcca9827aa688979b5e2a271db05e797c3759b3a884498\" returns successfully"
Jan 29 11:54:38.878584 containerd[1454]: time="2025-01-29T11:54:38.878555842Z" level=info msg="StopPodSandbox for \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\""
Jan 29 11:54:38.974330 containerd[1454]: 2025-01-29 11:54:38.914 [WARNING][6537] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f474569cb--q4cqv-eth0", GenerateName:"calico-apiserver-5f474569cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6dfda52-b36b-4860-a295-437d50d36570", ResourceVersion:"1272", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 52, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f474569cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff", Pod:"calico-apiserver-5f474569cb-q4cqv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6af79a63272", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 11:54:38.974330 containerd[1454]: 2025-01-29 11:54:38.914 [INFO][6537] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b"
Jan 29 11:54:38.974330 containerd[1454]: 2025-01-29 11:54:38.914 [INFO][6537] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" iface="eth0" netns=""
Jan 29 11:54:38.974330 containerd[1454]: 2025-01-29 11:54:38.914 [INFO][6537] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b"
Jan 29 11:54:38.974330 containerd[1454]: 2025-01-29 11:54:38.914 [INFO][6537] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b"
Jan 29 11:54:38.974330 containerd[1454]: 2025-01-29 11:54:38.937 [INFO][6545] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" HandleID="k8s-pod-network.133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" Workload="localhost-k8s-calico--apiserver--5f474569cb--q4cqv-eth0"
Jan 29 11:54:38.974330 containerd[1454]: 2025-01-29 11:54:38.937 [INFO][6545] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 11:54:38.974330 containerd[1454]: 2025-01-29 11:54:38.937 [INFO][6545] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 11:54:38.974330 containerd[1454]: 2025-01-29 11:54:38.967 [WARNING][6545] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" HandleID="k8s-pod-network.133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" Workload="localhost-k8s-calico--apiserver--5f474569cb--q4cqv-eth0"
Jan 29 11:54:38.974330 containerd[1454]: 2025-01-29 11:54:38.967 [INFO][6545] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" HandleID="k8s-pod-network.133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" Workload="localhost-k8s-calico--apiserver--5f474569cb--q4cqv-eth0"
Jan 29 11:54:38.974330 containerd[1454]: 2025-01-29 11:54:38.969 [INFO][6545] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 11:54:38.974330 containerd[1454]: 2025-01-29 11:54:38.971 [INFO][6537] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b"
Jan 29 11:54:38.975334 containerd[1454]: time="2025-01-29T11:54:38.974381868Z" level=info msg="TearDown network for sandbox \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\" successfully"
Jan 29 11:54:38.975334 containerd[1454]: time="2025-01-29T11:54:38.974409220Z" level=info msg="StopPodSandbox for \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\" returns successfully"
Jan 29 11:54:38.975334 containerd[1454]: time="2025-01-29T11:54:38.974999754Z" level=info msg="RemovePodSandbox for \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\""
Jan 29 11:54:38.975334 containerd[1454]: time="2025-01-29T11:54:38.975041343Z" level=info msg="Forcibly stopping sandbox \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\""
Jan 29 11:54:39.053397 containerd[1454]: 2025-01-29 11:54:39.016 [WARNING][6569] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f474569cb--q4cqv-eth0", GenerateName:"calico-apiserver-5f474569cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6dfda52-b36b-4860-a295-437d50d36570", ResourceVersion:"1272", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 52, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f474569cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d7d9523c35a94766d8fdd6ef11fe6d6210400f29aae2e0f6fbe6c07dc4591fff", Pod:"calico-apiserver-5f474569cb-q4cqv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6af79a63272", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 11:54:39.053397 containerd[1454]: 2025-01-29 11:54:39.016 [INFO][6569] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b"
Jan 29 11:54:39.053397 containerd[1454]: 2025-01-29 11:54:39.016 [INFO][6569] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" iface="eth0" netns=""
Jan 29 11:54:39.053397 containerd[1454]: 2025-01-29 11:54:39.016 [INFO][6569] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b"
Jan 29 11:54:39.053397 containerd[1454]: 2025-01-29 11:54:39.016 [INFO][6569] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b"
Jan 29 11:54:39.053397 containerd[1454]: 2025-01-29 11:54:39.040 [INFO][6576] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" HandleID="k8s-pod-network.133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" Workload="localhost-k8s-calico--apiserver--5f474569cb--q4cqv-eth0"
Jan 29 11:54:39.053397 containerd[1454]: 2025-01-29 11:54:39.040 [INFO][6576] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 11:54:39.053397 containerd[1454]: 2025-01-29 11:54:39.041 [INFO][6576] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 11:54:39.053397 containerd[1454]: 2025-01-29 11:54:39.046 [WARNING][6576] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" HandleID="k8s-pod-network.133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" Workload="localhost-k8s-calico--apiserver--5f474569cb--q4cqv-eth0"
Jan 29 11:54:39.053397 containerd[1454]: 2025-01-29 11:54:39.046 [INFO][6576] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" HandleID="k8s-pod-network.133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b" Workload="localhost-k8s-calico--apiserver--5f474569cb--q4cqv-eth0"
Jan 29 11:54:39.053397 containerd[1454]: 2025-01-29 11:54:39.047 [INFO][6576] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 11:54:39.053397 containerd[1454]: 2025-01-29 11:54:39.050 [INFO][6569] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b"
Jan 29 11:54:39.053911 containerd[1454]: time="2025-01-29T11:54:39.053436588Z" level=info msg="TearDown network for sandbox \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\" successfully"
Jan 29 11:54:39.138449 containerd[1454]: time="2025-01-29T11:54:39.138273610Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:54:39.138449 containerd[1454]: time="2025-01-29T11:54:39.138358179Z" level=info msg="RemovePodSandbox \"133cf1d900606a9bc7f2b28f6ad7653508648458049b34526ae18c788831037b\" returns successfully"
Jan 29 11:54:40.561696 systemd[1]: Started sshd@32-10.0.0.52:22-10.0.0.1:48792.service - OpenSSH per-connection server daemon (10.0.0.1:48792).
Jan 29 11:54:40.599037 sshd[6585]: Accepted publickey for core from 10.0.0.1 port 48792 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0
Jan 29 11:54:40.600951 sshd[6585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:54:40.605200 systemd-logind[1439]: New session 33 of user core.
Jan 29 11:54:40.614923 systemd[1]: Started session-33.scope - Session 33 of User core.
Jan 29 11:54:40.758452 sshd[6585]: pam_unix(sshd:session): session closed for user core
Jan 29 11:54:40.763703 systemd[1]: sshd@32-10.0.0.52:22-10.0.0.1:48792.service: Deactivated successfully.
Jan 29 11:54:40.766021 systemd[1]: session-33.scope: Deactivated successfully.
Jan 29 11:54:40.766980 systemd-logind[1439]: Session 33 logged out. Waiting for processes to exit.
Jan 29 11:54:40.768120 systemd-logind[1439]: Removed session 33.