Jan 29 11:56:43.978722 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 29 11:56:43.978747 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 11:56:43.978758 kernel: BIOS-provided physical RAM map:
Jan 29 11:56:43.978764 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 29 11:56:43.978770 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 29 11:56:43.978776 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 29 11:56:43.978784 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 29 11:56:43.978790 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 29 11:56:43.978796 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 29 11:56:43.978804 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 29 11:56:43.978811 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 29 11:56:43.978817 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 29 11:56:43.978823 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 29 11:56:43.978829 kernel: NX (Execute Disable) protection: active
Jan 29 11:56:43.978837 kernel: APIC: Static calls initialized
Jan 29 11:56:43.978846 kernel: SMBIOS 2.8 present.
Jan 29 11:56:43.978853 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 29 11:56:43.978859 kernel: Hypervisor detected: KVM
Jan 29 11:56:43.978866 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 29 11:56:43.978873 kernel: kvm-clock: using sched offset of 2821307864 cycles
Jan 29 11:56:43.978880 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 29 11:56:43.978887 kernel: tsc: Detected 2794.750 MHz processor
Jan 29 11:56:43.978894 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 29 11:56:43.978901 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 29 11:56:43.978910 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 29 11:56:43.978917 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 29 11:56:43.978924 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 29 11:56:43.978931 kernel: Using GB pages for direct mapping
Jan 29 11:56:43.978937 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:56:43.978944 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 29 11:56:43.978951 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:56:43.978958 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:56:43.978965 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:56:43.978974 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 29 11:56:43.978981 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:56:43.978988 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:56:43.978995 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:56:43.979001 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:56:43.979008 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 29 11:56:43.979015 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 29 11:56:43.979026 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 29 11:56:43.979035 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 29 11:56:43.979042 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 29 11:56:43.979049 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 29 11:56:43.979059 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 29 11:56:43.979069 kernel: No NUMA configuration found
Jan 29 11:56:43.979078 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 29 11:56:43.979095 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 29 11:56:43.979105 kernel: Zone ranges:
Jan 29 11:56:43.979117 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 29 11:56:43.979129 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 29 11:56:43.979136 kernel: Normal empty
Jan 29 11:56:43.979146 kernel: Movable zone start for each node
Jan 29 11:56:43.979154 kernel: Early memory node ranges
Jan 29 11:56:43.979161 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 29 11:56:43.979168 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 29 11:56:43.979175 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 29 11:56:43.979184 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 29 11:56:43.979191 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 29 11:56:43.979198 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 29 11:56:43.979205 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 29 11:56:43.979212 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 29 11:56:43.979227 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 29 11:56:43.979234 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 29 11:56:43.979241 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 29 11:56:43.979248 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 29 11:56:43.979257 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 29 11:56:43.979265 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 29 11:56:43.979272 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 29 11:56:43.979279 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 29 11:56:43.979286 kernel: TSC deadline timer available
Jan 29 11:56:43.979293 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 29 11:56:43.979300 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 29 11:56:43.979307 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 29 11:56:43.979314 kernel: kvm-guest: setup PV sched yield
Jan 29 11:56:43.979323 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 29 11:56:43.979330 kernel: Booting paravirtualized kernel on KVM
Jan 29 11:56:43.979338 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 29 11:56:43.979345 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 29 11:56:43.979352 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 29 11:56:43.979359 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 29 11:56:43.979366 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 29 11:56:43.979373 kernel: kvm-guest: PV spinlocks enabled
Jan 29 11:56:43.979380 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 29 11:56:43.979391 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 11:56:43.979399 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:56:43.979406 kernel: random: crng init done
Jan 29 11:56:43.979413 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:56:43.979420 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:56:43.979427 kernel: Fallback order for Node 0: 0
Jan 29 11:56:43.979435 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 29 11:56:43.979442 kernel: Policy zone: DMA32
Jan 29 11:56:43.979449 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:56:43.979459 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved)
Jan 29 11:56:43.979466 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 11:56:43.979473 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 29 11:56:43.979480 kernel: ftrace: allocated 149 pages with 4 groups
Jan 29 11:56:43.979487 kernel: Dynamic Preempt: voluntary
Jan 29 11:56:43.979494 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:56:43.979502 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:56:43.979509 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 11:56:43.979519 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:56:43.979526 kernel: Rude variant of Tasks RCU enabled.
Jan 29 11:56:43.979533 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:56:43.979540 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:56:43.979548 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 11:56:43.979555 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 29 11:56:43.979562 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:56:43.979569 kernel: Console: colour VGA+ 80x25
Jan 29 11:56:43.979576 kernel: printk: console [ttyS0] enabled
Jan 29 11:56:43.979583 kernel: ACPI: Core revision 20230628
Jan 29 11:56:43.979593 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 29 11:56:43.979600 kernel: APIC: Switch to symmetric I/O mode setup
Jan 29 11:56:43.979607 kernel: x2apic enabled
Jan 29 11:56:43.979614 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 29 11:56:43.979621 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 29 11:56:43.979629 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 29 11:56:43.979636 kernel: kvm-guest: setup PV IPIs
Jan 29 11:56:43.979652 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 29 11:56:43.979660 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 29 11:56:43.979667 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jan 29 11:56:43.979675 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 29 11:56:43.979684 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 29 11:56:43.979692 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 29 11:56:43.979710 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 29 11:56:43.979718 kernel: Spectre V2 : Mitigation: Retpolines
Jan 29 11:56:43.979727 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 29 11:56:43.979738 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 29 11:56:43.979748 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 29 11:56:43.979755 kernel: RETBleed: Mitigation: untrained return thunk
Jan 29 11:56:43.979763 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 29 11:56:43.979771 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 29 11:56:43.979778 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 29 11:56:43.979786 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 29 11:56:43.979794 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 29 11:56:43.979803 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 29 11:56:43.979811 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 29 11:56:43.979818 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 29 11:56:43.979826 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 29 11:56:43.979833 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 29 11:56:43.979841 kernel: Freeing SMP alternatives memory: 32K
Jan 29 11:56:43.979848 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:56:43.979856 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:56:43.979863 kernel: landlock: Up and running.
Jan 29 11:56:43.979873 kernel: SELinux: Initializing.
Jan 29 11:56:43.979880 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:56:43.979888 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:56:43.979895 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 29 11:56:43.979903 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:56:43.979911 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:56:43.979918 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:56:43.979926 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 29 11:56:43.979933 kernel: ... version: 0
Jan 29 11:56:43.979943 kernel: ... bit width: 48
Jan 29 11:56:43.979950 kernel: ... generic registers: 6
Jan 29 11:56:43.979958 kernel: ... value mask: 0000ffffffffffff
Jan 29 11:56:43.979965 kernel: ... max period: 00007fffffffffff
Jan 29 11:56:43.979972 kernel: ... fixed-purpose events: 0
Jan 29 11:56:43.979980 kernel: ... event mask: 000000000000003f
Jan 29 11:56:43.979987 kernel: signal: max sigframe size: 1776
Jan 29 11:56:43.979995 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:56:43.980002 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:56:43.980012 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:56:43.980019 kernel: smpboot: x86: Booting SMP configuration:
Jan 29 11:56:43.980027 kernel: .... node #0, CPUs: #1 #2 #3
Jan 29 11:56:43.980034 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 11:56:43.980042 kernel: smpboot: Max logical packages: 1
Jan 29 11:56:43.980049 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jan 29 11:56:43.980057 kernel: devtmpfs: initialized
Jan 29 11:56:43.980064 kernel: x86/mm: Memory block size: 128MB
Jan 29 11:56:43.980072 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:56:43.980081 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 11:56:43.980089 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:56:43.980096 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:56:43.980103 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:56:43.980111 kernel: audit: type=2000 audit(1738151803.772:1): state=initialized audit_enabled=0 res=1
Jan 29 11:56:43.980118 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:56:43.980126 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 29 11:56:43.980133 kernel: cpuidle: using governor menu
Jan 29 11:56:43.980141 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:56:43.980150 kernel: dca service started, version 1.12.1
Jan 29 11:56:43.980158 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 29 11:56:43.980165 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 29 11:56:43.980173 kernel: PCI: Using configuration type 1 for base access
Jan 29 11:56:43.980180 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 29 11:56:43.980188 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:56:43.980195 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:56:43.980203 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:56:43.980210 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:56:43.980227 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:56:43.980235 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:56:43.980242 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:56:43.980249 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:56:43.980257 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:56:43.980264 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 29 11:56:43.980272 kernel: ACPI: Interpreter enabled
Jan 29 11:56:43.980279 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 29 11:56:43.980286 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 29 11:56:43.980296 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 29 11:56:43.980304 kernel: PCI: Using E820 reservations for host bridge windows
Jan 29 11:56:43.980311 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 29 11:56:43.980318 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:56:43.980510 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:56:43.980638 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 29 11:56:43.980811 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 29 11:56:43.980821 kernel: PCI host bridge to bus 0000:00
Jan 29 11:56:43.980957 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 29 11:56:43.981110 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 29 11:56:43.981251 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 29 11:56:43.981364 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 29 11:56:43.981471 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 29 11:56:43.981579 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 29 11:56:43.981692 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:56:43.981843 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 29 11:56:43.981981 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 29 11:56:43.982102 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 29 11:56:43.982230 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 29 11:56:43.982391 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 29 11:56:43.982539 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 29 11:56:43.982683 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 11:56:43.982836 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 29 11:56:43.982956 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 29 11:56:43.983075 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 29 11:56:43.983206 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 29 11:56:43.983383 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 29 11:56:43.983514 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 29 11:56:43.983639 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 29 11:56:43.983797 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 29 11:56:43.983951 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 29 11:56:43.984079 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 29 11:56:43.984199 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 29 11:56:43.984329 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 29 11:56:43.984471 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 29 11:56:43.984598 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 29 11:56:43.984758 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 29 11:56:43.984894 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 29 11:56:43.985013 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 29 11:56:43.985142 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 29 11:56:43.985271 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 29 11:56:43.985285 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 29 11:56:43.985293 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 29 11:56:43.985301 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 29 11:56:43.985309 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 29 11:56:43.985316 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 29 11:56:43.985324 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 29 11:56:43.985332 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 29 11:56:43.985339 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 29 11:56:43.985346 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 29 11:56:43.985356 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 29 11:56:43.985364 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 29 11:56:43.985371 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 29 11:56:43.985379 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 29 11:56:43.985386 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 29 11:56:43.985394 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 29 11:56:43.985401 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 29 11:56:43.985409 kernel: iommu: Default domain type: Translated
Jan 29 11:56:43.985416 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 29 11:56:43.985426 kernel: PCI: Using ACPI for IRQ routing
Jan 29 11:56:43.985434 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 29 11:56:43.985441 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 29 11:56:43.985448 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 29 11:56:43.985569 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 29 11:56:43.985688 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 29 11:56:43.985845 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 29 11:56:43.985858 kernel: vgaarb: loaded
Jan 29 11:56:43.985872 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 29 11:56:43.985882 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 29 11:56:43.985891 kernel: clocksource: Switched to clocksource kvm-clock
Jan 29 11:56:43.985900 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:56:43.985910 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:56:43.985920 kernel: pnp: PnP ACPI init
Jan 29 11:56:43.986063 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 29 11:56:43.986074 kernel: pnp: PnP ACPI: found 6 devices
Jan 29 11:56:43.986082 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 29 11:56:43.986092 kernel: NET: Registered PF_INET protocol family
Jan 29 11:56:43.986100 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:56:43.986108 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:56:43.986115 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:56:43.986123 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:56:43.986131 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:56:43.986139 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:56:43.986146 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:56:43.986156 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:56:43.986164 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:56:43.986172 kernel: NET: Registered PF_XDP protocol family
Jan 29 11:56:43.986293 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 29 11:56:43.986405 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 29 11:56:43.986516 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 29 11:56:43.986625 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 29 11:56:43.986826 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 29 11:56:43.986936 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 29 11:56:43.986951 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:56:43.986958 kernel: Initialise system trusted keyrings
Jan 29 11:56:43.986966 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:56:43.986974 kernel: Key type asymmetric registered
Jan 29 11:56:43.986981 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:56:43.986989 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 29 11:56:43.986996 kernel: io scheduler mq-deadline registered
Jan 29 11:56:43.987004 kernel: io scheduler kyber registered
Jan 29 11:56:43.987011 kernel: io scheduler bfq registered
Jan 29 11:56:43.987021 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 29 11:56:43.987030 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 29 11:56:43.987037 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 29 11:56:43.987045 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 29 11:56:43.987052 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:56:43.987060 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 29 11:56:43.987068 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 29 11:56:43.987076 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 29 11:56:43.987083 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 29 11:56:43.987093 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 29 11:56:43.987216 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 29 11:56:43.987339 kernel: rtc_cmos 00:04: registered as rtc0
Jan 29 11:56:43.987451 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T11:56:43 UTC (1738151803)
Jan 29 11:56:43.987561 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 29 11:56:43.987571 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 29 11:56:43.987579 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:56:43.987586 kernel: Segment Routing with IPv6
Jan 29 11:56:43.987597 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:56:43.987605 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:56:43.987612 kernel: Key type dns_resolver registered
Jan 29 11:56:43.987620 kernel: IPI shorthand broadcast: enabled
Jan 29 11:56:43.987627 kernel: sched_clock: Marking stable (638002364, 134031138)->(837320515, -65287013)
Jan 29 11:56:43.987635 kernel: registered taskstats version 1
Jan 29 11:56:43.987642 kernel: Loading compiled-in X.509 certificates
Jan 29 11:56:43.987650 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 29 11:56:43.987658 kernel: Key type .fscrypt registered
Jan 29 11:56:43.987667 kernel: Key type fscrypt-provisioning registered
Jan 29 11:56:43.987675 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:56:43.987683 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:56:43.987690 kernel: ima: No architecture policies found
Jan 29 11:56:43.987709 kernel: clk: Disabling unused clocks
Jan 29 11:56:43.987717 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 29 11:56:43.987725 kernel: Write protecting the kernel read-only data: 36864k
Jan 29 11:56:43.987732 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 29 11:56:43.987743 kernel: Run /init as init process
Jan 29 11:56:43.987750 kernel: with arguments:
Jan 29 11:56:43.987757 kernel: /init
Jan 29 11:56:43.987765 kernel: with environment:
Jan 29 11:56:43.987772 kernel: HOME=/
Jan 29 11:56:43.987779 kernel: TERM=linux
Jan 29 11:56:43.987787 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:56:43.987797 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:56:43.987807 systemd[1]: Detected virtualization kvm.
Jan 29 11:56:43.987817 systemd[1]: Detected architecture x86-64.
Jan 29 11:56:43.987825 systemd[1]: Running in initrd.
Jan 29 11:56:43.987833 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:56:43.987840 systemd[1]: Hostname set to <localhost>.
Jan 29 11:56:43.987849 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:56:43.987857 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:56:43.987865 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:56:43.987873 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:56:43.987884 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:56:43.987904 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:56:43.987914 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:56:43.987923 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:56:43.987935 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:56:43.987943 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:56:43.987952 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:56:43.987960 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:56:43.987968 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:56:43.987977 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:56:43.987985 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:56:43.987993 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:56:43.988002 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:56:43.988012 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:56:43.988020 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:56:43.988029 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:56:43.988037 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:56:43.988045 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:56:43.988054 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:56:43.988062 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:56:43.988070 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:56:43.988081 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:56:43.988089 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:56:43.988097 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:56:43.988106 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:56:43.988114 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:56:43.988122 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:56:43.988131 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:56:43.988139 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:56:43.988147 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:56:43.988158 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:56:43.988167 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:56:43.988178 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:56:43.988204 systemd-journald[191]: Collecting audit messages is disabled.
Jan 29 11:56:43.988229 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:56:43.988241 systemd-journald[191]: Journal started
Jan 29 11:56:43.988259 systemd-journald[191]: Runtime Journal (/run/log/journal/90d8bf4df26a43789e95c163e64e10c9) is 6.0M, max 48.4M, 42.3M free.
Jan 29 11:56:43.981184 systemd-modules-load[194]: Inserted module 'overlay'
Jan 29 11:56:44.025646 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:56:44.025663 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:56:44.025680 kernel: Bridge firewalling registered
Jan 29 11:56:44.007964 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jan 29 11:56:44.026842 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:56:44.029362 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:56:44.051944 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:56:44.055560 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:56:44.058204 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:56:44.069374 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:56:44.070791 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:56:44.073124 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:56:44.086857 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:56:44.090306 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:56:44.097884 dracut-cmdline[227]: dracut-dracut-053
Jan 29 11:56:44.100600 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 11:56:44.129812 systemd-resolved[231]: Positive Trust Anchors:
Jan 29 11:56:44.129830 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:56:44.129863 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:56:44.148576 systemd-resolved[231]: Defaulting to hostname 'linux'.
Jan 29 11:56:44.150819 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:56:44.152159 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:56:44.187755 kernel: SCSI subsystem initialized
Jan 29 11:56:44.197739 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:56:44.209757 kernel: iscsi: registered transport (tcp)
Jan 29 11:56:44.233747 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:56:44.233819 kernel: QLogic iSCSI HBA Driver
Jan 29 11:56:44.280772 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:56:44.290839 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:56:44.316735 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:56:44.316788 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:56:44.318315 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:56:44.361731 kernel: raid6: avx2x4 gen() 27314 MB/s
Jan 29 11:56:44.378736 kernel: raid6: avx2x2 gen() 25239 MB/s
Jan 29 11:56:44.395847 kernel: raid6: avx2x1 gen() 22974 MB/s
Jan 29 11:56:44.395869 kernel: raid6: using algorithm avx2x4 gen() 27314 MB/s
Jan 29 11:56:44.413935 kernel: raid6: .... xor() 7929 MB/s, rmw enabled
Jan 29 11:56:44.413957 kernel: raid6: using avx2x2 recovery algorithm
Jan 29 11:56:44.434734 kernel: xor: automatically using best checksumming function avx
Jan 29 11:56:44.600735 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:56:44.614520 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:56:44.627849 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:56:44.641537 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Jan 29 11:56:44.646636 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:56:44.672838 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:56:44.686730 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Jan 29 11:56:44.719682 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:56:44.738841 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:56:44.800311 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:56:44.809883 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:56:44.827176 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:56:44.830396 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:56:44.833230 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:56:44.835759 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:56:44.841730 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 29 11:56:44.858316 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 11:56:44.858374 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 11:56:44.858741 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:56:44.858769 kernel: GPT:9289727 != 19775487
Jan 29 11:56:44.858789 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:56:44.858799 kernel: GPT:9289727 != 19775487
Jan 29 11:56:44.858809 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:56:44.858819 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:56:44.845834 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:56:44.856176 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:56:44.865160 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:56:44.865282 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:56:44.874522 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 11:56:44.874538 kernel: libata version 3.00 loaded.
Jan 29 11:56:44.874549 kernel: AES CTR mode by8 optimization enabled
Jan 29 11:56:44.878005 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (469)
Jan 29 11:56:44.878027 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (460)
Jan 29 11:56:44.874622 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:56:44.879866 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:56:44.882400 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:56:44.885234 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:56:44.895335 kernel: ahci 0000:00:1f.2: version 3.0
Jan 29 11:56:44.911773 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 29 11:56:44.911788 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 29 11:56:44.911937 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 29 11:56:44.912078 kernel: scsi host0: ahci
Jan 29 11:56:44.912241 kernel: scsi host1: ahci
Jan 29 11:56:44.912393 kernel: scsi host2: ahci
Jan 29 11:56:44.912562 kernel: scsi host3: ahci
Jan 29 11:56:44.912737 kernel: scsi host4: ahci
Jan 29 11:56:44.912882 kernel: scsi host5: ahci
Jan 29 11:56:44.913027 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 29 11:56:44.913038 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 29 11:56:44.913048 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 29 11:56:44.913067 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 29 11:56:44.913077 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 29 11:56:44.913088 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 29 11:56:44.903994 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:56:44.920595 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:56:44.925546 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 11:56:44.957107 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 11:56:44.959716 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:56:44.971499 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 11:56:44.978527 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 11:56:44.994833 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:56:44.996628 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:56:45.004288 disk-uuid[565]: Primary Header is updated.
Jan 29 11:56:45.004288 disk-uuid[565]: Secondary Entries is updated.
Jan 29 11:56:45.004288 disk-uuid[565]: Secondary Header is updated.
Jan 29 11:56:45.007735 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:56:45.011725 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:56:45.016988 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:56:45.218510 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 29 11:56:45.218570 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 29 11:56:45.218591 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 29 11:56:45.218601 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 29 11:56:45.219747 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 29 11:56:45.220739 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 29 11:56:45.222036 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 29 11:56:45.222053 kernel: ata3.00: applying bridge limits
Jan 29 11:56:45.222723 kernel: ata3.00: configured for UDMA/100
Jan 29 11:56:45.224737 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 29 11:56:45.267273 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 29 11:56:45.279317 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 11:56:45.279330 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 29 11:56:46.016670 disk-uuid[569]: The operation has completed successfully.
Jan 29 11:56:46.018155 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:56:46.042971 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:56:46.043093 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:56:46.070837 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:56:46.076494 sh[590]: Success
Jan 29 11:56:46.089728 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 29 11:56:46.123763 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:56:46.139479 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:56:46.143599 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:56:46.156544 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 29 11:56:46.156585 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:56:46.156596 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:56:46.159190 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:56:46.159209 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:56:46.164887 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:56:46.165640 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:56:46.170872 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:56:46.173748 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:56:46.186254 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 11:56:46.186284 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:56:46.186301 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:56:46.190859 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:56:46.200630 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:56:46.203026 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 11:56:46.272277 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:56:46.279850 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:56:46.296086 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:56:46.306132 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:56:46.386627 systemd-networkd[771]: lo: Link UP
Jan 29 11:56:46.386637 systemd-networkd[771]: lo: Gained carrier
Jan 29 11:56:46.388230 systemd-networkd[771]: Enumeration completed
Jan 29 11:56:46.388607 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:56:46.388611 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:56:46.389513 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:56:46.389966 systemd-networkd[771]: eth0: Link UP
Jan 29 11:56:46.389970 systemd-networkd[771]: eth0: Gained carrier
Jan 29 11:56:46.389976 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:56:46.394215 systemd[1]: Reached target network.target - Network.
Jan 29 11:56:46.410748 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:56:46.417214 ignition[758]: Ignition 2.19.0
Jan 29 11:56:46.417234 ignition[758]: Stage: fetch-offline
Jan 29 11:56:46.417277 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:56:46.417288 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:56:46.417397 ignition[758]: parsed url from cmdline: ""
Jan 29 11:56:46.417401 ignition[758]: no config URL provided
Jan 29 11:56:46.417407 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:56:46.417417 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:56:46.417445 ignition[758]: op(1): [started] loading QEMU firmware config module
Jan 29 11:56:46.417451 ignition[758]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 11:56:46.425740 ignition[758]: op(1): [finished] loading QEMU firmware config module
Jan 29 11:56:46.464817 ignition[758]: parsing config with SHA512: 21ee448ee06e31e35adc36769e3c4425eb761dea9cb8ec846e97f5fd2dfe00113b1f6078229fc5da3b03f9d777cfbf422b9c832320986debaf8f2170c8b79a49
Jan 29 11:56:46.535697 unknown[758]: fetched base config from "system"
Jan 29 11:56:46.536054 unknown[758]: fetched user config from "qemu"
Jan 29 11:56:46.536507 ignition[758]: fetch-offline: fetch-offline passed
Jan 29 11:56:46.536601 ignition[758]: Ignition finished successfully
Jan 29 11:56:46.539397 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:56:46.540815 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 11:56:46.547850 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:56:46.572466 ignition[782]: Ignition 2.19.0
Jan 29 11:56:46.572477 ignition[782]: Stage: kargs
Jan 29 11:56:46.572652 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:56:46.572664 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:56:46.573474 ignition[782]: kargs: kargs passed
Jan 29 11:56:46.573518 ignition[782]: Ignition finished successfully
Jan 29 11:56:46.576894 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:56:46.585955 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:56:46.610878 ignition[792]: Ignition 2.19.0
Jan 29 11:56:46.610895 ignition[792]: Stage: disks
Jan 29 11:56:46.611138 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:56:46.611153 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:56:46.612259 ignition[792]: disks: disks passed
Jan 29 11:56:46.612313 ignition[792]: Ignition finished successfully
Jan 29 11:56:46.614655 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:56:46.616332 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:56:46.618245 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:56:46.620255 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:56:46.622514 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:56:46.623690 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:56:46.639828 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:56:46.655564 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 11:56:46.714264 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:56:46.721858 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:56:46.816735 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 29 11:56:46.817350 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:56:46.818982 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:56:46.834774 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:56:46.836485 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:56:46.837716 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 11:56:46.837753 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:56:46.849829 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (810)
Jan 29 11:56:46.849853 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 11:56:46.849868 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:56:46.849883 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:56:46.849897 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:56:46.837773 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:56:46.843677 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:56:46.850922 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:56:46.863915 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:56:46.907936 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:56:46.912245 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:56:46.916207 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:56:46.919827 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:56:47.007787 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:56:47.017794 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:56:47.018565 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:56:47.025721 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 11:56:47.045182 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:56:47.057634 ignition[923]: INFO : Ignition 2.19.0
Jan 29 11:56:47.057634 ignition[923]: INFO : Stage: mount
Jan 29 11:56:47.059240 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:56:47.059240 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:56:47.062135 ignition[923]: INFO : mount: mount passed
Jan 29 11:56:47.062898 ignition[923]: INFO : Ignition finished successfully
Jan 29 11:56:47.065943 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:56:47.077878 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:56:47.154632 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:56:47.170842 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:56:47.178483 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (936)
Jan 29 11:56:47.178507 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 11:56:47.178518 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 11:56:47.179974 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:56:47.182721 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:56:47.184004 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:56:47.208015 ignition[953]: INFO : Ignition 2.19.0 Jan 29 11:56:47.208015 ignition[953]: INFO : Stage: files Jan 29 11:56:47.209674 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:56:47.209674 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:56:47.209674 ignition[953]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:56:47.213402 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:56:47.213402 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:56:47.216933 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:56:47.218394 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:56:47.218394 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:56:47.217580 unknown[953]: wrote ssh authorized keys file for user: core Jan 29 11:56:47.222361 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 11:56:47.222361 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 11:56:47.222361 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:56:47.222361 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 11:56:47.260238 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 11:56:47.527089 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:56:47.527089 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:56:47.531041 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:56:47.531041 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:56:47.531041 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:56:47.531041 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:56:47.531041 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:56:47.531041 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:56:47.531041 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:56:47.531041 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:56:47.531041 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:56:47.531041 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:56:47.531041 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:56:47.531041 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:56:47.531041 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 11:56:47.918476 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 11:56:48.258895 systemd-networkd[771]: eth0: Gained IPv6LL Jan 29 11:56:48.643802 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:56:48.646698 ignition[953]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 29 11:56:48.646698 ignition[953]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 11:56:48.646698 ignition[953]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 11:56:48.646698 ignition[953]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 29 11:56:48.646698 ignition[953]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 29 11:56:48.646698 ignition[953]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:56:48.646698 ignition[953]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:56:48.646698 ignition[953]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 29 11:56:48.646698 ignition[953]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jan 29 11:56:48.646698 ignition[953]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:56:48.646698 ignition[953]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:56:48.646698 ignition[953]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jan 29 11:56:48.646698 ignition[953]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Jan 29 11:56:48.677434 ignition[953]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:56:48.682531 ignition[953]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:56:48.684217 ignition[953]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Jan 29 11:56:48.684217 ignition[953]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 11:56:48.684217 ignition[953]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:56:48.684217 ignition[953]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:56:48.684217 ignition[953]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:56:48.684217 ignition[953]: INFO : files: files passed Jan 29 11:56:48.684217 ignition[953]: INFO : Ignition finished successfully Jan 29 11:56:48.685650 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 11:56:48.695919 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:56:48.699344 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:56:48.701426 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:56:48.701578 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:56:48.709655 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory Jan 29 11:56:48.712899 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:56:48.714849 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:56:48.717911 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:56:48.716309 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:56:48.718137 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:56:48.727857 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:56:48.755429 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:56:48.755567 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:56:48.756934 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:56:48.759226 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:56:48.762460 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:56:48.763638 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:56:48.786600 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:56:48.798854 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:56:48.810170 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:56:48.811561 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:56:48.813968 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:56:48.816001 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:56:48.816128 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:56:48.818498 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:56:48.820062 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:56:48.822097 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
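Every files-stage operation above (ssh keys for "core", the small files, the kubernetes.raw sysext link, the containerd drop-in) is driven by an Ignition config that the journal never prints. A hypothetical Python sketch of a spec-v3 config with the same shape; the key material, file contents, and version string are placeholders, not the real config:

    import json

    # Hypothetical Ignition config fragment matching the logged operations;
    # contents and key are placeholders, not the machine's actual config.
    config = {
        "ignition": {"version": "3.3.0"},
        "passwd": {"users": [{"name": "core",
                              "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}]},
        "storage": {
            "files": [{"path": "/etc/flatcar/update.conf",
                       "contents": {"source": "data:,placeholder%0A"}}],
            "links": [{"path": "/etc/extensions/kubernetes.raw",
                       "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"}],
        },
        "systemd": {"units": [{"name": "containerd.service",
                               "dropins": [{"name": "10-use-cgroupfs.conf",
                                            "contents": "[Service]\n# placeholder\n"}]}]},
    }
    print(json.dumps(config, indent=2))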
Jan 29 11:56:48.824156 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:56:48.826194 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:56:48.828404 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:56:48.830874 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:56:48.833304 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:56:48.835738 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:56:48.838206 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:56:48.840274 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:56:48.840466 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:56:48.842924 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:56:48.844362 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:56:48.846424 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:56:48.846557 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:56:48.848674 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:56:48.848859 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:56:48.851202 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:56:48.851360 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:56:48.853194 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:56:48.854879 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:56:48.858775 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:56:48.860560 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:56:48.862537 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:56:48.864317 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:56:48.864455 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:56:48.866318 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:56:48.866444 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:56:48.868786 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:56:48.868954 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:56:48.870841 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:56:48.870993 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:56:48.880977 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:56:48.881975 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:56:48.882148 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:56:48.885241 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:56:48.886316 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:56:48.886517 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:56:48.889066 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jan 29 11:56:48.896795 ignition[1008]: INFO : Ignition 2.19.0 Jan 29 11:56:48.896795 ignition[1008]: INFO : Stage: umount Jan 29 11:56:48.896795 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:56:48.896795 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:56:48.889231 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:56:48.907741 ignition[1008]: INFO : umount: umount passed Jan 29 11:56:48.907741 ignition[1008]: INFO : Ignition finished successfully Jan 29 11:56:48.897525 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:56:48.897668 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:56:48.899829 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:56:48.899965 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:56:48.902845 systemd[1]: Stopped target network.target - Network. Jan 29 11:56:48.905082 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:56:48.905165 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:56:48.907727 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:56:48.907788 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:56:48.909842 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:56:48.909902 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:56:48.912036 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:56:48.912095 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:56:48.914592 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:56:48.917228 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:56:48.919750 systemd-networkd[771]: eth0: DHCPv6 lease lost Jan 29 11:56:48.920546 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:56:48.921255 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:56:48.921431 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:56:48.923991 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:56:48.924076 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:56:48.929806 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:56:48.931719 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:56:48.931793 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:56:48.934590 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:56:48.937212 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:56:48.937365 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:56:48.942768 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:56:48.942884 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:56:48.944207 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:56:48.944270 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:56:48.946679 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jan 29 11:56:48.946756 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:56:48.957816 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:56:48.958049 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:56:48.960723 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:56:48.960864 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:56:48.963299 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:56:48.963390 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:56:48.965010 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:56:48.965063 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:56:48.967265 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:56:48.967330 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:56:48.969624 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:56:48.969685 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:56:48.972076 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:56:48.972155 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:56:48.983927 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:56:48.985136 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:56:48.985204 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:56:48.987354 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:56:48.987406 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:56:48.992917 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:56:48.993035 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:56:49.090151 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:56:49.090351 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:56:49.092720 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:56:49.094661 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:56:49.094748 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:56:49.106862 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:56:49.115379 systemd[1]: Switching root. Jan 29 11:56:49.147608 systemd-journald[191]: Journal stopped Jan 29 11:56:50.387613 systemd-journald[191]: Received SIGTERM from PID 1 (systemd). 
Jan 29 11:56:50.387688 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:56:50.387730 kernel: SELinux: policy capability open_perms=1 Jan 29 11:56:50.387775 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:56:50.387791 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:56:50.387812 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:56:50.387829 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:56:50.387849 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:56:50.387865 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:56:50.387880 kernel: audit: type=1403 audit(1738151809.566:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:56:50.387901 systemd[1]: Successfully loaded SELinux policy in 49.391ms. Jan 29 11:56:50.387928 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.552ms. Jan 29 11:56:50.387958 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:56:50.387973 systemd[1]: Detected virtualization kvm. Jan 29 11:56:50.387988 systemd[1]: Detected architecture x86-64. Jan 29 11:56:50.388003 systemd[1]: Detected first boot. Jan 29 11:56:50.388018 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:56:50.388034 zram_generator::config[1069]: No configuration found. Jan 29 11:56:50.388050 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:56:50.388065 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:56:50.388103 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 11:56:50.388120 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:56:50.388135 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:56:50.388150 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:56:50.388165 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:56:50.388186 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:56:50.388201 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:56:50.388219 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:56:50.388234 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:56:50.388261 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:56:50.388277 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:56:50.388293 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:56:50.388308 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:56:50.388324 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:56:50.388339 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:56:50.388354 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
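The audit record's epoch stamp can be cross-checked against the journal clock: 1738151809.566 decodes to 11:56:49.566 UTC on 2025-01-29, the moment the SELinux policy was loaded during switch-root:

    from datetime import datetime, timezone

    # Decode the audit timestamp from: audit(1738151809.566:2)
    ts = datetime.fromtimestamp(1738151809.566, tz=timezone.utc)
    print(ts.isoformat())  # 2025-01-29T11:56:49.566000+00:00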
Jan 29 11:56:50.388370 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:56:50.388385 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:56:50.388412 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:56:50.388433 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:56:50.388449 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:56:50.388464 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:56:50.388479 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:56:50.388494 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:56:50.388510 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 11:56:50.388525 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:56:50.388549 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:56:50.388564 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:56:50.388583 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:56:50.388598 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:56:50.388614 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:56:50.388629 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:56:50.388644 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:56:50.388660 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:56:50.388675 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:56:50.388715 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:56:50.388731 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:56:50.388747 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:56:50.388763 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:56:50.388778 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:56:50.388794 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:56:50.388809 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:56:50.388824 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:56:50.388839 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:56:50.388864 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:56:50.388883 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:56:50.388898 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:56:50.388914 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 29 11:56:50.388930 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Jan 29 11:56:50.388950 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:56:50.388966 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:56:50.388984 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:56:50.389011 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:56:50.389027 kernel: fuse: init (API version 7.39) Jan 29 11:56:50.389041 kernel: loop: module loaded Jan 29 11:56:50.389056 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:56:50.389081 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:56:50.389121 systemd-journald[1154]: Collecting audit messages is disabled. Jan 29 11:56:50.389148 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:56:50.389175 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:56:50.389190 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:56:50.389205 systemd-journald[1154]: Journal started Jan 29 11:56:50.389233 systemd-journald[1154]: Runtime Journal (/run/log/journal/90d8bf4df26a43789e95c163e64e10c9) is 6.0M, max 48.4M, 42.3M free. Jan 29 11:56:50.392777 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:56:50.394367 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:56:50.395814 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:56:50.397267 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:56:50.398993 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:56:50.400793 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:56:50.402612 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:56:50.402915 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:56:50.404810 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:56:50.405164 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:56:50.407013 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:56:50.407281 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:56:50.409227 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:56:50.409519 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:56:50.411363 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:56:50.411640 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:56:50.413526 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:56:50.415339 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:56:50.417221 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:56:50.430888 kernel: ACPI: bus type drm_connector registered Jan 29 11:56:50.431326 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:56:50.431816 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 29 11:56:50.436330 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:56:50.446966 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:56:50.450223 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:56:50.451651 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:56:50.454551 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:56:50.459429 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:56:50.461108 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:56:50.464797 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:56:50.466481 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:56:50.470722 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:56:50.479493 systemd-journald[1154]: Time spent on flushing to /var/log/journal/90d8bf4df26a43789e95c163e64e10c9 is 18.876ms for 938 entries. Jan 29 11:56:50.479493 systemd-journald[1154]: System Journal (/var/log/journal/90d8bf4df26a43789e95c163e64e10c9) is 8.0M, max 195.6M, 187.6M free. Jan 29 11:56:50.507856 systemd-journald[1154]: Received client request to flush runtime journal. Jan 29 11:56:50.479917 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:56:50.485488 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:56:50.487157 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:56:50.503559 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:56:50.520008 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:56:50.523758 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:56:50.524667 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Jan 29 11:56:50.524687 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Jan 29 11:56:50.525696 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:56:50.527374 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:56:50.534315 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:56:50.535984 udevadm[1214]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 11:56:50.537746 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:56:50.551982 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:56:50.584862 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:56:50.597942 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:56:50.616015 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. Jan 29 11:56:50.616043 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. 
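The flush statistic above (18.876 ms for 938 entries) works out to roughly 20 µs per journal entry:

    # Journal flush: 18.876 ms for 938 entries
    per_entry_us = 18.876e3 / 938  # microseconds per entry
    print(f"{per_entry_us:.1f} µs/entry")  # ~20.1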
Jan 29 11:56:50.622877 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:56:51.126225 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:56:51.141896 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:56:51.168879 systemd-udevd[1235]: Using default interface naming scheme 'v255'. Jan 29 11:56:51.186458 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:56:51.198896 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:56:51.225922 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:56:51.231652 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 29 11:56:51.299728 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1239) Jan 29 11:56:51.365179 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 29 11:56:51.377722 kernel: ACPI: button: Power Button [PWRF] Jan 29 11:56:51.389009 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:56:51.403727 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 29 11:56:51.414744 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:56:51.459727 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 29 11:56:51.468674 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 29 11:56:51.468873 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 29 11:56:51.475950 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:56:51.481736 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:56:51.506231 systemd-networkd[1242]: lo: Link UP Jan 29 11:56:51.506241 systemd-networkd[1242]: lo: Gained carrier Jan 29 11:56:51.507914 systemd-networkd[1242]: Enumeration completed Jan 29 11:56:51.508162 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:56:51.508332 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:56:51.508340 systemd-networkd[1242]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:56:51.509841 systemd-networkd[1242]: eth0: Link UP Jan 29 11:56:51.509909 systemd-networkd[1242]: eth0: Gained carrier Jan 29 11:56:51.509986 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:56:51.566194 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:56:51.581508 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
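eth0 is matched by /usr/lib/systemd/network/zz-default.network, Flatcar's lowest-priority catch-all. The journal does not quote the file; a plausible minimal shape for such a catch-all DHCP unit (an assumption, not the shipped file) is:

    # Plausible shape of a catch-all .network unit like zz-default.network;
    # the real file's contents are not shown in the log above (assumption).
    catch_all = (
        "[Match]\n"
        "Name=*\n"
        "\n"
        "[Network]\n"
        "DHCP=yes\n"
    )
    print(catch_all)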
Jan 29 11:56:51.602727 kernel: kvm_amd: TSC scaling supported Jan 29 11:56:51.602792 kernel: kvm_amd: Nested Virtualization enabled Jan 29 11:56:51.602845 kernel: kvm_amd: Nested Paging enabled Jan 29 11:56:51.602864 kernel: kvm_amd: LBR virtualization supported Jan 29 11:56:51.598937 systemd-networkd[1242]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:56:51.603246 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 29 11:56:51.603955 kernel: kvm_amd: Virtual GIF supported Jan 29 11:56:51.625726 kernel: EDAC MC: Ver: 3.0.0 Jan 29 11:56:51.652633 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:56:51.664906 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:56:51.676531 lvm[1282]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:56:51.707411 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:56:51.709246 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:56:51.719827 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:56:51.725808 lvm[1285]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:56:51.764544 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:56:51.766272 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:56:51.767709 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:56:51.767734 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:56:51.768892 systemd[1]: Reached target machines.target - Containers. Jan 29 11:56:51.771212 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:56:51.781866 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:56:51.784648 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:56:51.785926 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:56:51.786881 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:56:51.791832 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:56:51.795768 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:56:51.798157 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:56:51.810723 kernel: loop0: detected capacity change from 0 to 210664 Jan 29 11:56:51.811007 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:56:51.822825 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:56:51.823914 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
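A quick sanity check of the DHCPv4 lease above (10.0.0.92/16 with gateway 10.0.0.1), using Python's ipaddress module:

    import ipaddress

    # DHCPv4 lease from the log: 10.0.0.92/16, gateway 10.0.0.1
    iface = ipaddress.ip_interface("10.0.0.92/16")
    gw = ipaddress.ip_address("10.0.0.1")
    print(iface.network)        # 10.0.0.0/16
    print(gw in iface.network)  # True: the gateway is on-link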
Jan 29 11:56:51.828719 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:56:51.855746 kernel: loop1: detected capacity change from 0 to 140768 Jan 29 11:56:51.888739 kernel: loop2: detected capacity change from 0 to 142488 Jan 29 11:56:51.927740 kernel: loop3: detected capacity change from 0 to 210664 Jan 29 11:56:51.965747 kernel: loop4: detected capacity change from 0 to 140768 Jan 29 11:56:51.975731 kernel: loop5: detected capacity change from 0 to 142488 Jan 29 11:56:51.983320 (sd-merge)[1305]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 11:56:51.983941 (sd-merge)[1305]: Merged extensions into '/usr'. Jan 29 11:56:51.988187 systemd[1]: Reloading requested from client PID 1293 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:56:51.988209 systemd[1]: Reloading... Jan 29 11:56:52.075725 zram_generator::config[1331]: No configuration found. Jan 29 11:56:52.210440 ldconfig[1289]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:56:52.287235 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:56:52.359364 systemd[1]: Reloading finished in 370 ms. Jan 29 11:56:52.380283 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:56:52.382121 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:56:52.400886 systemd[1]: Starting ensure-sysext.service... Jan 29 11:56:52.403168 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:56:52.407559 systemd[1]: Reloading requested from client PID 1377 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:56:52.407577 systemd[1]: Reloading... Jan 29 11:56:52.464562 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:56:52.464967 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:56:52.466185 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:56:52.466590 systemd-tmpfiles[1378]: ACLs are not supported, ignoring. Jan 29 11:56:52.466690 systemd-tmpfiles[1378]: ACLs are not supported, ignoring. Jan 29 11:56:52.474581 systemd-tmpfiles[1378]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:56:52.474596 systemd-tmpfiles[1378]: Skipping /boot Jan 29 11:56:52.482722 zram_generator::config[1407]: No configuration found. Jan 29 11:56:52.545742 systemd-tmpfiles[1378]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:56:52.545948 systemd-tmpfiles[1378]: Skipping /boot Jan 29 11:56:52.677831 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:56:52.745589 systemd[1]: Reloading finished in 337 ms. Jan 29 11:56:52.765259 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:56:52.783636 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 11:56:52.786576 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
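The six loop-device capacity changes arrive as two identical triples (210664, 140768, 142488), consistent with the same three sysext images being scanned once before and once after (sd-merge) merged 'containerd-flatcar', 'docker-flatcar', and 'kubernetes'. A small grouping sketch over the logged values:

    from collections import defaultdict

    # Capacities reported for loop0-loop5 in the log above.
    sizes = {"loop0": 210664, "loop1": 140768, "loop2": 142488,
             "loop3": 210664, "loop4": 140768, "loop5": 142488}
    groups = defaultdict(list)
    for dev, cap in sizes.items():
        groups[cap].append(dev)
    print(dict(groups))  # each size appears on exactly two loop devices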
Jan 29 11:56:52.789437 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:56:52.793526 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:56:52.799006 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:56:52.807061 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:56:52.807295 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:56:52.809203 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:56:52.815945 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:56:52.820950 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:56:52.824336 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:56:52.824546 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:56:52.826346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:56:52.826735 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:56:52.829481 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:56:52.829726 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:56:52.832718 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:56:52.833047 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:56:52.843148 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:56:52.844044 augenrules[1481]: No rules Jan 29 11:56:52.843617 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:56:52.856185 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:56:52.859982 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:56:52.863695 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:56:52.865409 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:56:52.865888 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:56:52.868205 systemd-networkd[1242]: eth0: Gained IPv6LL Jan 29 11:56:52.868937 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 11:56:52.871682 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:56:52.874561 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:56:52.877548 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:56:52.877845 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:56:52.880455 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 29 11:56:52.880743 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:56:52.883529 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:56:52.883846 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:56:52.886344 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:56:52.893930 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:56:52.900634 systemd-resolved[1455]: Positive Trust Anchors: Jan 29 11:56:52.900654 systemd-resolved[1455]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:56:52.900686 systemd-resolved[1455]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:56:52.904748 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:56:52.904856 systemd-resolved[1455]: Defaulting to hostname 'linux'. Jan 29 11:56:52.904971 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:56:52.911929 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:56:52.914552 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:56:52.917173 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:56:52.919921 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:56:52.921307 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:56:52.922872 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:56:52.924293 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:56:52.924524 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:56:52.925986 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:56:52.928163 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:56:52.928448 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:56:52.930391 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:56:52.930663 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:56:52.932564 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:56:52.941827 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:56:52.945262 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:56:52.945513 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 29 11:56:52.947554 systemd[1]: Finished ensure-sysext.service. Jan 29 11:56:52.953334 systemd[1]: Reached target network.target - Network. Jan 29 11:56:52.954523 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:56:52.955900 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:56:52.957269 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:56:52.957339 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:56:52.969935 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:56:52.971903 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:56:53.035471 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:56:54.007994 systemd-resolved[1455]: Clock change detected. Flushing caches. Jan 29 11:56:54.008031 systemd-timesyncd[1523]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 11:56:54.008068 systemd-timesyncd[1523]: Initial clock synchronization to Wed 2025-01-29 11:56:54.007945 UTC. Jan 29 11:56:54.009342 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:56:54.010711 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:56:54.012157 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:56:54.013588 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:56:54.015044 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:56:54.015070 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:56:54.016099 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:56:54.017514 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:56:54.018868 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:56:54.020396 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:56:54.022227 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:56:54.025545 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:56:54.028427 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:56:54.034545 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:56:54.035922 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:56:54.037053 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:56:54.038360 systemd[1]: System is tainted: cgroupsv1 Jan 29 11:56:54.038409 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:56:54.038434 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:56:54.040218 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:56:54.042779 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 11:56:54.045531 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
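The journal steps from 11:56:53.035471 straight to 11:56:54.007994 when systemd-timesyncd synchronizes against 10.0.0.1 and systemd-resolved flushes its caches. The apparent forward jump is just under a second; it is only an upper bound on the actual adjustment, since the interval also contains service startup time:

    from datetime import datetime

    # Journal timestamps on either side of the timesyncd clock step
    before = datetime(2025, 1, 29, 11, 56, 53, 35471)
    after = datetime(2025, 1, 29, 11, 56, 54, 7994)
    print((after - before).total_seconds())  # ~0.97 s apparent forward step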
Jan 29 11:56:54.049884 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:56:54.055794 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:56:54.057157 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:56:54.059253 jq[1532]: false Jan 29 11:56:54.071455 dbus-daemon[1531]: [system] SELinux support is enabled Jan 29 11:56:54.071799 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:56:54.074653 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:56:54.077715 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:56:54.084832 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:56:54.086379 extend-filesystems[1534]: Found loop3 Jan 29 11:56:54.088566 extend-filesystems[1534]: Found loop4 Jan 29 11:56:54.088566 extend-filesystems[1534]: Found loop5 Jan 29 11:56:54.088566 extend-filesystems[1534]: Found sr0 Jan 29 11:56:54.088566 extend-filesystems[1534]: Found vda Jan 29 11:56:54.088566 extend-filesystems[1534]: Found vda1 Jan 29 11:56:54.088566 extend-filesystems[1534]: Found vda2 Jan 29 11:56:54.088566 extend-filesystems[1534]: Found vda3 Jan 29 11:56:54.088566 extend-filesystems[1534]: Found usr Jan 29 11:56:54.088566 extend-filesystems[1534]: Found vda4 Jan 29 11:56:54.088566 extend-filesystems[1534]: Found vda6 Jan 29 11:56:54.088566 extend-filesystems[1534]: Found vda7 Jan 29 11:56:54.088566 extend-filesystems[1534]: Found vda9 Jan 29 11:56:54.088566 extend-filesystems[1534]: Checking size of /dev/vda9 Jan 29 11:56:54.090704 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:56:54.106162 extend-filesystems[1534]: Resized partition /dev/vda9 Jan 29 11:56:54.108518 extend-filesystems[1560]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:56:54.112307 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:56:54.112702 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 11:56:54.114626 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1254) Jan 29 11:56:54.129075 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:56:54.131228 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:56:54.134512 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:56:54.137741 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 11:56:54.139295 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:56:54.141391 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:56:54.151096 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:56:54.215269 jq[1569]: true Jan 29 11:56:54.151458 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:56:54.157671 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:56:54.158156 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:56:54.206647 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 29 11:56:54.212641 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:56:54.213005 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:56:54.217972 update_engine[1566]: I20250129 11:56:54.217849 1566 main.cc:92] Flatcar Update Engine starting Jan 29 11:56:54.219702 extend-filesystems[1560]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:56:54.219702 extend-filesystems[1560]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 11:56:54.219702 extend-filesystems[1560]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 11:56:54.227950 extend-filesystems[1534]: Resized filesystem in /dev/vda9 Jan 29 11:56:54.221513 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:56:54.232071 update_engine[1566]: I20250129 11:56:54.227964 1566 update_check_scheduler.cc:74] Next update check in 9m59s Jan 29 11:56:54.221834 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:56:54.236113 (ntainerd)[1579]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:56:54.240356 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:56:54.240723 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 11:56:54.246196 jq[1578]: true Jan 29 11:56:54.270183 tar[1577]: linux-amd64/helm Jan 29 11:56:54.288714 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:56:54.290360 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:56:54.290494 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:56:54.290517 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:56:54.292029 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:56:54.292046 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:56:54.295982 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:56:54.323512 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:56:54.333525 systemd-logind[1565]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 11:56:54.333924 systemd-logind[1565]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 11:56:54.334429 systemd-logind[1565]: New seat seat0. Jan 29 11:56:54.336462 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:56:54.373979 bash[1616]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:56:54.379156 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:56:54.400203 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
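[Editor's note] extend-filesystems.service above grew /dev/vda9 online with resize2fs 1.47.1, from 553472 to 1864699 4k blocks, and the kernel confirmed the resize. A minimal sketch of the same operation done by hand (device names taken from this log):

```sh
# ext4 can be grown while mounted; this mirrors what
# extend-filesystems.service did for the root filesystem above.
lsblk /dev/vda            # confirm vda9 is the partition backing /
resize2fs /dev/vda9       # grow to fill the (already enlarged) partition
df -h /                   # verify the new size is visible
```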
Jan 29 11:56:54.413875 locksmithd[1612]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:56:54.429783 sshd_keygen[1570]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:56:54.468491 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:56:54.601120 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:56:54.609818 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:56:54.610159 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:56:54.614392 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:56:54.660532 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:56:54.673567 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:56:54.677970 containerd[1579]: time="2025-01-29T11:56:54.677526289Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 11:56:54.684144 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 11:56:54.685909 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:56:54.716066 containerd[1579]: time="2025-01-29T11:56:54.715915088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:56:54.718643 containerd[1579]: time="2025-01-29T11:56:54.718570837Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:56:54.718643 containerd[1579]: time="2025-01-29T11:56:54.718632312Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:56:54.718718 containerd[1579]: time="2025-01-29T11:56:54.718653883Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:56:54.718919 containerd[1579]: time="2025-01-29T11:56:54.718883083Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:56:54.718949 containerd[1579]: time="2025-01-29T11:56:54.718933327Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:56:54.719151 containerd[1579]: time="2025-01-29T11:56:54.719067338Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:56:54.719151 containerd[1579]: time="2025-01-29T11:56:54.719090181Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:56:54.719442 containerd[1579]: time="2025-01-29T11:56:54.719416382Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:56:54.719473 containerd[1579]: time="2025-01-29T11:56:54.719441119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 29 11:56:54.719473 containerd[1579]: time="2025-01-29T11:56:54.719459994Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:56:54.719509 containerd[1579]: time="2025-01-29T11:56:54.719475724Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:56:54.719668 containerd[1579]: time="2025-01-29T11:56:54.719589908Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:56:54.719948 containerd[1579]: time="2025-01-29T11:56:54.719923754Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:56:54.720173 containerd[1579]: time="2025-01-29T11:56:54.720146041Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:56:54.720173 containerd[1579]: time="2025-01-29T11:56:54.720170186Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:56:54.720304 containerd[1579]: time="2025-01-29T11:56:54.720286634Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:56:54.720407 containerd[1579]: time="2025-01-29T11:56:54.720352327Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:56:54.725495 containerd[1579]: time="2025-01-29T11:56:54.725392959Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:56:54.725495 containerd[1579]: time="2025-01-29T11:56:54.725462619Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:56:54.725495 containerd[1579]: time="2025-01-29T11:56:54.725483258Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:56:54.725570 containerd[1579]: time="2025-01-29T11:56:54.725551897Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:56:54.725589 containerd[1579]: time="2025-01-29T11:56:54.725577184Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:56:54.725782 containerd[1579]: time="2025-01-29T11:56:54.725756771Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:56:54.726335 containerd[1579]: time="2025-01-29T11:56:54.726307794Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:56:54.726541 containerd[1579]: time="2025-01-29T11:56:54.726447997Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:56:54.726541 containerd[1579]: time="2025-01-29T11:56:54.726482021Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:56:54.726541 containerd[1579]: time="2025-01-29T11:56:54.726502589Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 29 11:56:54.726541 containerd[1579]: time="2025-01-29T11:56:54.726522386Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:56:54.726541 containerd[1579]: time="2025-01-29T11:56:54.726540210Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:56:54.726673 containerd[1579]: time="2025-01-29T11:56:54.726558083Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:56:54.726673 containerd[1579]: time="2025-01-29T11:56:54.726587007Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:56:54.726673 containerd[1579]: time="2025-01-29T11:56:54.726634817Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:56:54.726673 containerd[1579]: time="2025-01-29T11:56:54.726653963Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:56:54.726673 containerd[1579]: time="2025-01-29T11:56:54.726671105Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:56:54.726763 containerd[1579]: time="2025-01-29T11:56:54.726688538Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:56:54.726763 containerd[1579]: time="2025-01-29T11:56:54.726718454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:56:54.726763 containerd[1579]: time="2025-01-29T11:56:54.726736037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:56:54.726829 containerd[1579]: time="2025-01-29T11:56:54.726765572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:56:54.726829 containerd[1579]: time="2025-01-29T11:56:54.726784377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:56:54.726829 containerd[1579]: time="2025-01-29T11:56:54.726800558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:56:54.726829 containerd[1579]: time="2025-01-29T11:56:54.726817690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:56:54.726913 containerd[1579]: time="2025-01-29T11:56:54.726832668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:56:54.726913 containerd[1579]: time="2025-01-29T11:56:54.726850471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:56:54.726913 containerd[1579]: time="2025-01-29T11:56:54.726868235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:56:54.726913 containerd[1579]: time="2025-01-29T11:56:54.726886459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:56:54.726994 containerd[1579]: time="2025-01-29T11:56:54.726912448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 29 11:56:54.726994 containerd[1579]: time="2025-01-29T11:56:54.726932996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:56:54.726994 containerd[1579]: time="2025-01-29T11:56:54.726949487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:56:54.726994 containerd[1579]: time="2025-01-29T11:56:54.726971989Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:56:54.727062 containerd[1579]: time="2025-01-29T11:56:54.727001004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:56:54.727062 containerd[1579]: time="2025-01-29T11:56:54.727016503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:56:54.727062 containerd[1579]: time="2025-01-29T11:56:54.727030128Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:56:54.727138 containerd[1579]: time="2025-01-29T11:56:54.727093517Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:56:54.727138 containerd[1579]: time="2025-01-29T11:56:54.727113895Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:56:54.727138 containerd[1579]: time="2025-01-29T11:56:54.727129084Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:56:54.727200 containerd[1579]: time="2025-01-29T11:56:54.727147699Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:56:54.727200 containerd[1579]: time="2025-01-29T11:56:54.727163288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:56:54.727200 containerd[1579]: time="2025-01-29T11:56:54.727179639Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:56:54.727200 containerd[1579]: time="2025-01-29T11:56:54.727198053Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:56:54.727270 containerd[1579]: time="2025-01-29T11:56:54.727212600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 11:56:54.728846 containerd[1579]: time="2025-01-29T11:56:54.728751576Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:56:54.728846 containerd[1579]: time="2025-01-29T11:56:54.728848598Z" level=info msg="Connect containerd service" Jan 29 11:56:54.729118 containerd[1579]: time="2025-01-29T11:56:54.728914662Z" level=info msg="using legacy CRI server" Jan 29 11:56:54.729118 containerd[1579]: time="2025-01-29T11:56:54.728934279Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:56:54.729184 containerd[1579]: time="2025-01-29T11:56:54.729150334Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:56:54.730047 containerd[1579]: time="2025-01-29T11:56:54.730007290Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 
11:56:54.730314 containerd[1579]: time="2025-01-29T11:56:54.730260826Z" level=info msg="Start subscribing containerd event" Jan 29 11:56:54.730523 containerd[1579]: time="2025-01-29T11:56:54.730321710Z" level=info msg="Start recovering state" Jan 29 11:56:54.730523 containerd[1579]: time="2025-01-29T11:56:54.730418321Z" level=info msg="Start event monitor" Jan 29 11:56:54.730523 containerd[1579]: time="2025-01-29T11:56:54.730439320Z" level=info msg="Start snapshots syncer" Jan 29 11:56:54.730523 containerd[1579]: time="2025-01-29T11:56:54.730451914Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:56:54.730523 containerd[1579]: time="2025-01-29T11:56:54.730467423Z" level=info msg="Start streaming server" Jan 29 11:56:54.730950 containerd[1579]: time="2025-01-29T11:56:54.730819133Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:56:54.730950 containerd[1579]: time="2025-01-29T11:56:54.730928057Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:56:54.731276 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:56:54.733617 containerd[1579]: time="2025-01-29T11:56:54.733021672Z" level=info msg="containerd successfully booted in 0.056831s" Jan 29 11:56:54.984689 tar[1577]: linux-amd64/LICENSE Jan 29 11:56:54.984827 tar[1577]: linux-amd64/README.md Jan 29 11:56:55.001338 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:56:55.429498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:56:55.431890 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:56:55.434067 systemd[1]: Startup finished in 6.633s (kernel) + 4.943s (userspace) = 11.577s. Jan 29 11:56:55.435499 (kubelet)[1666]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:56:55.929984 kubelet[1666]: E0129 11:56:55.929773 1666 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:56:55.934362 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:56:55.934712 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:57:02.839630 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:57:02.851845 systemd[1]: Started sshd@0-10.0.0.92:22-10.0.0.1:36118.service - OpenSSH per-connection server daemon (10.0.0.1:36118). Jan 29 11:57:02.898875 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 36118 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:57:02.901155 sshd[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:02.911657 systemd-logind[1565]: New session 1 of user core. Jan 29 11:57:02.912784 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:57:02.922862 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:57:02.935355 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:57:02.938058 systemd[1]: Starting user@500.service - User Manager for UID 500... 
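[Editor's note] The long "Start cri plugin with config {...}" dump above is containerd 1.7 echoing its effective CRI settings: overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:false, sandbox image registry.k8s.io/pause:3.8, and CNI paths /opt/cni/bin and /etc/cni/net.d. The directory is empty at this point, hence the "no network config found in /etc/cni/net.d" error, which clears once a CNI plugin installs a conf file. As a sketch, an equivalent hand-written /etc/containerd/config.toml fragment, reconstructed from the dump rather than copied from the file Flatcar actually ships:

```sh
cat <<'EOF' >/etc/containerd/config.toml
# Reconstructed from the CRI plugin dump in the log above (a sketch).
version = 2
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir  = "/opt/cni/bin"
    conf_dir = "/etc/cni/net.d"    # empty here, hence the cni load error
  [plugins."io.containerd.grpc.v1.cri".containerd]
    snapshotter = "overlayfs"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false      # matches SystemdCgroup:false in the dump
EOF
systemctl restart containerd
```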
Jan 29 11:57:02.948453 (systemd)[1685]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:57:03.100498 systemd[1685]: Queued start job for default target default.target. Jan 29 11:57:03.101018 systemd[1685]: Created slice app.slice - User Application Slice. Jan 29 11:57:03.101045 systemd[1685]: Reached target paths.target - Paths. Jan 29 11:57:03.101062 systemd[1685]: Reached target timers.target - Timers. Jan 29 11:57:03.112758 systemd[1685]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:57:03.121779 systemd[1685]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:57:03.121866 systemd[1685]: Reached target sockets.target - Sockets. Jan 29 11:57:03.121884 systemd[1685]: Reached target basic.target - Basic System. Jan 29 11:57:03.121932 systemd[1685]: Reached target default.target - Main User Target. Jan 29 11:57:03.121974 systemd[1685]: Startup finished in 165ms. Jan 29 11:57:03.122650 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:57:03.124517 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:57:03.179984 systemd[1]: Started sshd@1-10.0.0.92:22-10.0.0.1:36134.service - OpenSSH per-connection server daemon (10.0.0.1:36134). Jan 29 11:57:03.214302 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 36134 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:57:03.215876 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:03.220278 systemd-logind[1565]: New session 2 of user core. Jan 29 11:57:03.229926 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:57:03.286503 sshd[1698]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:03.303866 systemd[1]: Started sshd@2-10.0.0.92:22-10.0.0.1:36140.service - OpenSSH per-connection server daemon (10.0.0.1:36140). Jan 29 11:57:03.304350 systemd[1]: sshd@1-10.0.0.92:22-10.0.0.1:36134.service: Deactivated successfully. Jan 29 11:57:03.306535 systemd-logind[1565]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:57:03.307214 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:57:03.308367 systemd-logind[1565]: Removed session 2. Jan 29 11:57:03.338486 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 36140 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:57:03.340395 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:03.345160 systemd-logind[1565]: New session 3 of user core. Jan 29 11:57:03.354903 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:57:03.406219 sshd[1703]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:03.420962 systemd[1]: Started sshd@3-10.0.0.92:22-10.0.0.1:36144.service - OpenSSH per-connection server daemon (10.0.0.1:36144). Jan 29 11:57:03.421468 systemd[1]: sshd@2-10.0.0.92:22-10.0.0.1:36140.service: Deactivated successfully. Jan 29 11:57:03.424381 systemd-logind[1565]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:57:03.425444 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:57:03.426469 systemd-logind[1565]: Removed session 3. 
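[Editor's note] The rapid open/close of sessions 1 through 3 above (and 4 through 7 below) is consistent with a provisioner on 10.0.0.1 running one command per SSH connection as user core; each connection triggers a fresh sshd accept, a PAM session, and a logind scope. A hypothetical invocation of that shape, matching one of the sudo commands recorded later in this log:

```sh
# Hypothetical per-command SSH invocation; key path and command are
# illustrative, only the user/host and the sudo target appear in the log.
ssh -i ~/.ssh/provisioner core@10.0.0.92 'sudo /usr/sbin/setenforce 1'
```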
Jan 29 11:57:03.454399 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 36144 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:57:03.456265 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:03.460698 systemd-logind[1565]: New session 4 of user core. Jan 29 11:57:03.480039 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:57:03.536644 sshd[1711]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:03.551024 systemd[1]: Started sshd@4-10.0.0.92:22-10.0.0.1:36154.service - OpenSSH per-connection server daemon (10.0.0.1:36154). Jan 29 11:57:03.551719 systemd[1]: sshd@3-10.0.0.92:22-10.0.0.1:36144.service: Deactivated successfully. Jan 29 11:57:03.554934 systemd-logind[1565]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:57:03.556186 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:57:03.557417 systemd-logind[1565]: Removed session 4. Jan 29 11:57:03.586164 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 36154 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:57:03.588022 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:03.592486 systemd-logind[1565]: New session 5 of user core. Jan 29 11:57:03.601987 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:57:03.664565 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:57:03.665062 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:57:03.680160 sudo[1726]: pam_unix(sudo:session): session closed for user root Jan 29 11:57:03.682837 sshd[1719]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:03.696963 systemd[1]: Started sshd@5-10.0.0.92:22-10.0.0.1:36158.service - OpenSSH per-connection server daemon (10.0.0.1:36158). Jan 29 11:57:03.697475 systemd[1]: sshd@4-10.0.0.92:22-10.0.0.1:36154.service: Deactivated successfully. Jan 29 11:57:03.700073 systemd-logind[1565]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:57:03.700757 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:57:03.701888 systemd-logind[1565]: Removed session 5. Jan 29 11:57:03.731185 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 36158 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:57:03.733148 sshd[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:03.738314 systemd-logind[1565]: New session 6 of user core. Jan 29 11:57:03.747101 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:57:03.804122 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:57:03.804471 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:57:03.808869 sudo[1736]: pam_unix(sudo:session): session closed for user root Jan 29 11:57:03.816711 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 11:57:03.817145 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:57:03.839056 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 11:57:03.840847 auditctl[1739]: No rules Jan 29 11:57:03.841332 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 29 11:57:03.841800 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 11:57:03.845443 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 11:57:03.879864 augenrules[1758]: No rules Jan 29 11:57:03.881983 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 11:57:03.883472 sudo[1735]: pam_unix(sudo:session): session closed for user root Jan 29 11:57:03.885546 sshd[1728]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:03.900004 systemd[1]: Started sshd@6-10.0.0.92:22-10.0.0.1:36174.service - OpenSSH per-connection server daemon (10.0.0.1:36174). Jan 29 11:57:03.900501 systemd[1]: sshd@5-10.0.0.92:22-10.0.0.1:36158.service: Deactivated successfully. Jan 29 11:57:03.902947 systemd-logind[1565]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:57:03.903882 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:57:03.904750 systemd-logind[1565]: Removed session 6. Jan 29 11:57:03.932023 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 36174 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:57:03.933786 sshd[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:57:03.938632 systemd-logind[1565]: New session 7 of user core. Jan 29 11:57:03.954951 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:57:04.010483 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:57:04.010916 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:57:04.593246 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:57:04.593873 (dockerd)[1789]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:57:05.250059 dockerd[1789]: time="2025-01-29T11:57:05.249984725Z" level=info msg="Starting up" Jan 29 11:57:06.184940 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:57:06.197817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:57:06.450730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:57:06.456483 (kubelet)[1825]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:57:06.613056 dockerd[1789]: time="2025-01-29T11:57:06.612985782Z" level=info msg="Loading containers: start." Jan 29 11:57:06.624803 kubelet[1825]: E0129 11:57:06.624733 1825 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:57:06.633078 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:57:06.633390 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:57:06.993637 kernel: Initializing XFRM netlink socket Jan 29 11:57:07.079389 systemd-networkd[1242]: docker0: Link UP Jan 29 11:57:07.326550 dockerd[1789]: time="2025-01-29T11:57:07.326413900Z" level=info msg="Loading containers: done." 
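[Editor's note] The audit-rules bounce above is what `sudo systemctl restart audit-rules` amounts to after the install script deleted /etc/audit/rules.d/80-selinux.rules and 99-default.rules: the loaded rule set is flushed (auditctl reports "No rules") and augenrules recompiles whatever fragments remain (also "No rules" here). A sketch of the same cycle done by hand with standard auditd tooling:

```sh
auditctl -D        # flush loaded rules; prints "No rules", as in the log
augenrules --load  # recompile /etc/audit/rules.d/*.rules and load them
auditctl -l        # list the active rule set afterwards
```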
Jan 29 11:57:07.343494 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1921271185-merged.mount: Deactivated successfully. Jan 29 11:57:07.439122 dockerd[1789]: time="2025-01-29T11:57:07.439002473Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:57:07.439300 dockerd[1789]: time="2025-01-29T11:57:07.439194373Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 11:57:07.439387 dockerd[1789]: time="2025-01-29T11:57:07.439355525Z" level=info msg="Daemon has completed initialization" Jan 29 11:57:07.582087 dockerd[1789]: time="2025-01-29T11:57:07.581753381Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:57:07.581995 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:57:08.433636 containerd[1579]: time="2025-01-29T11:57:08.433274513Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 11:57:09.137421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount287532009.mount: Deactivated successfully. Jan 29 11:57:10.888428 containerd[1579]: time="2025-01-29T11:57:10.888306157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:10.910849 containerd[1579]: time="2025-01-29T11:57:10.910763534Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 29 11:57:10.929528 containerd[1579]: time="2025-01-29T11:57:10.929453247Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:10.994061 containerd[1579]: time="2025-01-29T11:57:10.993987729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:10.995358 containerd[1579]: time="2025-01-29T11:57:10.995308065Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.561959574s" Jan 29 11:57:10.995358 containerd[1579]: time="2025-01-29T11:57:10.995355083Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 29 11:57:11.044080 containerd[1579]: time="2025-01-29T11:57:11.043878004Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 11:57:13.214760 containerd[1579]: time="2025-01-29T11:57:13.214687812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:13.219270 containerd[1579]: time="2025-01-29T11:57:13.219206624Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 29 11:57:13.222277 containerd[1579]: 
time="2025-01-29T11:57:13.222235423Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:13.225496 containerd[1579]: time="2025-01-29T11:57:13.225443238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:13.226514 containerd[1579]: time="2025-01-29T11:57:13.226453703Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 2.182534812s" Jan 29 11:57:13.226514 containerd[1579]: time="2025-01-29T11:57:13.226498306Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 29 11:57:13.257890 containerd[1579]: time="2025-01-29T11:57:13.257818532Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 11:57:15.009073 containerd[1579]: time="2025-01-29T11:57:15.008999485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:15.025111 containerd[1579]: time="2025-01-29T11:57:15.025016057Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 29 11:57:15.030796 containerd[1579]: time="2025-01-29T11:57:15.030737745Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:15.034952 containerd[1579]: time="2025-01-29T11:57:15.034846448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:15.036283 containerd[1579]: time="2025-01-29T11:57:15.036206829Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.778310532s" Jan 29 11:57:15.036353 containerd[1579]: time="2025-01-29T11:57:15.036284234Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 29 11:57:15.067061 containerd[1579]: time="2025-01-29T11:57:15.066996650Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 11:57:16.644405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount797618137.mount: Deactivated successfully. Jan 29 11:57:16.645751 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 11:57:16.658788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 11:57:16.814796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:57:16.822803 (kubelet)[2064]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:57:16.932898 kubelet[2064]: E0129 11:57:16.932735 2064 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:57:16.937533 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:57:16.937884 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:57:18.188470 containerd[1579]: time="2025-01-29T11:57:18.188388459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:18.210719 containerd[1579]: time="2025-01-29T11:57:18.210646042Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 29 11:57:18.216030 containerd[1579]: time="2025-01-29T11:57:18.215995703Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:18.219565 containerd[1579]: time="2025-01-29T11:57:18.219459978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:18.220326 containerd[1579]: time="2025-01-29T11:57:18.220288471Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 3.153233031s" Jan 29 11:57:18.220326 containerd[1579]: time="2025-01-29T11:57:18.220322115Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 11:57:18.255490 containerd[1579]: time="2025-01-29T11:57:18.255417829Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 11:57:19.212490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount366764848.mount: Deactivated successfully. 
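[Editor's note] The kubelet exit above is the third identical failure in this log: /var/lib/kubelet/config.yaml does not exist yet, so systemd keeps scheduling restarts while the control-plane images are still being pulled. That file is normally written by kubeadm during init/join, at which point the loop converges; a hedged sketch of checking and bootstrapping (the --cri-socket value matches the containerd socket elsewhere in this log, and the kubeadm step is an assumption about how this node is meant to be brought up):

```sh
# The restart loop persists until something writes the kubelet config:
test -f /var/lib/kubelet/config.yaml || echo "node not bootstrapped yet"

# On a control-plane node, kubeadm generates it (assumed invocation):
kubeadm init --cri-socket unix:///run/containerd/containerd.sock
```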
Jan 29 11:57:20.022860 containerd[1579]: time="2025-01-29T11:57:20.022799690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:20.024554 containerd[1579]: time="2025-01-29T11:57:20.023594790Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 29 11:57:20.026521 containerd[1579]: time="2025-01-29T11:57:20.026338044Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:20.029795 containerd[1579]: time="2025-01-29T11:57:20.029755822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:20.031085 containerd[1579]: time="2025-01-29T11:57:20.031036824Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.775567438s" Jan 29 11:57:20.031156 containerd[1579]: time="2025-01-29T11:57:20.031082911Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 11:57:20.053985 containerd[1579]: time="2025-01-29T11:57:20.053942622Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 11:57:20.554796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1184305093.mount: Deactivated successfully. 
Jan 29 11:57:20.562536 containerd[1579]: time="2025-01-29T11:57:20.562461786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:20.566521 containerd[1579]: time="2025-01-29T11:57:20.566449292Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 29 11:57:20.568356 containerd[1579]: time="2025-01-29T11:57:20.568306885Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:20.575315 containerd[1579]: time="2025-01-29T11:57:20.575167679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:20.576174 containerd[1579]: time="2025-01-29T11:57:20.576105368Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 522.116709ms" Jan 29 11:57:20.576174 containerd[1579]: time="2025-01-29T11:57:20.576163006Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 29 11:57:20.604244 containerd[1579]: time="2025-01-29T11:57:20.604194735Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 11:57:21.313885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3358284823.mount: Deactivated successfully. Jan 29 11:57:23.963275 containerd[1579]: time="2025-01-29T11:57:23.963176565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:23.964378 containerd[1579]: time="2025-01-29T11:57:23.964331611Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 29 11:57:23.966102 containerd[1579]: time="2025-01-29T11:57:23.966047408Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:23.969848 containerd[1579]: time="2025-01-29T11:57:23.969804282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:57:23.971226 containerd[1579]: time="2025-01-29T11:57:23.971171726Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.36692281s" Jan 29 11:57:23.971280 containerd[1579]: time="2025-01-29T11:57:23.971231879Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 29 11:57:26.244230 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
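[Editor's note] The PullImage sequence in this stretch of the log (kube-apiserver v1.30.9 through etcd 3.5.12-0, with byte counts and timings) is containerd fetching images over its CRI API. The same pulls can be reproduced by hand with crictl, assuming it is installed and pointed at the socket shown in this log:

```sh
crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
       pull registry.k8s.io/pause:3.9
crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
```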
Jan 29 11:57:26.257871 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:57:26.273135 systemd[1]: Reloading requested from client PID 2274 ('systemctl') (unit session-7.scope)... Jan 29 11:57:26.273152 systemd[1]: Reloading... Jan 29 11:57:26.347787 zram_generator::config[2313]: No configuration found. Jan 29 11:57:26.877219 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:57:26.969430 systemd[1]: Reloading finished in 695 ms. Jan 29 11:57:27.025821 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:57:27.026198 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:57:27.029321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:57:27.173004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:57:27.178404 (kubelet)[2374]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:57:27.215442 kubelet[2374]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:57:27.215442 kubelet[2374]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:57:27.215442 kubelet[2374]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:57:27.215887 kubelet[2374]: I0129 11:57:27.215475 2374 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:57:27.688242 kubelet[2374]: I0129 11:57:27.688198 2374 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:57:27.688242 kubelet[2374]: I0129 11:57:27.688232 2374 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:57:27.688471 kubelet[2374]: I0129 11:57:27.688445 2374 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:57:27.733116 kubelet[2374]: I0129 11:57:27.733050 2374 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:57:27.740401 kubelet[2374]: E0129 11:57:27.740362 2374 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.92:6443: connect: connection refused Jan 29 11:57:27.781291 kubelet[2374]: I0129 11:57:27.781227 2374 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:57:27.785470 kubelet[2374]: I0129 11:57:27.785402 2374 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:57:27.785684 kubelet[2374]: I0129 11:57:27.785456 2374 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:57:27.785780 kubelet[2374]: I0129 11:57:27.785696 2374 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:57:27.785780 kubelet[2374]: I0129 11:57:27.785711 2374 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:57:27.785946 kubelet[2374]: I0129 11:57:27.785908 2374 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:57:27.787116 kubelet[2374]: I0129 11:57:27.787077 2374 kubelet.go:400] "Attempting to sync node with API server" Jan 29 11:57:27.787116 kubelet[2374]: I0129 11:57:27.787103 2374 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:57:27.787206 kubelet[2374]: I0129 11:57:27.787148 2374 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:57:27.787206 kubelet[2374]: I0129 11:57:27.787190 2374 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:57:27.787888 kubelet[2374]: W0129 11:57:27.787817 2374 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Jan 29 11:57:27.787937 kubelet[2374]: E0129 11:57:27.787899 2374 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Jan 29 11:57:27.790034 kubelet[2374]: W0129 11:57:27.789940 2374 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: 
Get "https://10.0.0.92:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Jan 29 11:57:27.790088 kubelet[2374]: E0129 11:57:27.790043 2374 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Jan 29 11:57:27.792993 kubelet[2374]: I0129 11:57:27.792964 2374 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 11:57:27.795203 kubelet[2374]: I0129 11:57:27.795135 2374 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:57:27.795269 kubelet[2374]: W0129 11:57:27.795241 2374 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:57:27.796312 kubelet[2374]: I0129 11:57:27.796069 2374 server.go:1264] "Started kubelet" Jan 29 11:57:27.796312 kubelet[2374]: I0129 11:57:27.796232 2374 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:57:27.796505 kubelet[2374]: I0129 11:57:27.796374 2374 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:57:27.797278 kubelet[2374]: I0129 11:57:27.796766 2374 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:57:27.797844 kubelet[2374]: I0129 11:57:27.797799 2374 server.go:455] "Adding debug handlers to kubelet server" Jan 29 11:57:27.801718 kubelet[2374]: I0129 11:57:27.801506 2374 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:57:27.802689 kubelet[2374]: E0129 11:57:27.802653 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:57:27.802763 kubelet[2374]: I0129 11:57:27.802716 2374 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 11:57:27.802850 kubelet[2374]: I0129 11:57:27.802826 2374 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:57:27.802916 kubelet[2374]: I0129 11:57:27.802895 2374 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:57:27.803411 kubelet[2374]: E0129 11:57:27.802740 2374 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.92:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.92:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f27e57d924acd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:57:27.796034253 +0000 UTC m=+0.613681029,LastTimestamp:2025-01-29 11:57:27.796034253 +0000 UTC m=+0.613681029,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:57:27.803411 kubelet[2374]: W0129 11:57:27.803289 2374 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Jan 29 
11:57:27.803411 kubelet[2374]: E0129 11:57:27.803329 2374 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Jan 29 11:57:27.803411 kubelet[2374]: E0129 11:57:27.803409 2374 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:57:27.803573 kubelet[2374]: E0129 11:57:27.803462 2374 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="200ms" Jan 29 11:57:27.804149 kubelet[2374]: I0129 11:57:27.804129 2374 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:57:27.804224 kubelet[2374]: I0129 11:57:27.804214 2374 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:57:27.805396 kubelet[2374]: I0129 11:57:27.805365 2374 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:57:27.824086 kubelet[2374]: I0129 11:57:27.823356 2374 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:57:27.825235 kubelet[2374]: I0129 11:57:27.825175 2374 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:57:27.825235 kubelet[2374]: I0129 11:57:27.825225 2374 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:57:27.825383 kubelet[2374]: I0129 11:57:27.825251 2374 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 11:57:27.825383 kubelet[2374]: E0129 11:57:27.825300 2374 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:57:27.826779 kubelet[2374]: W0129 11:57:27.826711 2374 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Jan 29 11:57:27.826779 kubelet[2374]: E0129 11:57:27.826752 2374 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Jan 29 11:57:27.834396 kubelet[2374]: I0129 11:57:27.834313 2374 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:57:27.834396 kubelet[2374]: I0129 11:57:27.834336 2374 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:57:27.834396 kubelet[2374]: I0129 11:57:27.834365 2374 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:57:27.907399 kubelet[2374]: I0129 11:57:27.907325 2374 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:57:27.907909 kubelet[2374]: E0129 11:57:27.907857 2374 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" 
node="localhost" Jan 29 11:57:27.926131 kubelet[2374]: E0129 11:57:27.926013 2374 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:57:28.004988 kubelet[2374]: E0129 11:57:28.004849 2374 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="400ms" Jan 29 11:57:28.109927 kubelet[2374]: I0129 11:57:28.109864 2374 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:57:28.110337 kubelet[2374]: E0129 11:57:28.110291 2374 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Jan 29 11:57:28.126396 kubelet[2374]: E0129 11:57:28.126347 2374 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:57:28.405931 kubelet[2374]: E0129 11:57:28.405777 2374 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="800ms" Jan 29 11:57:28.512041 kubelet[2374]: I0129 11:57:28.512000 2374 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:57:28.512468 kubelet[2374]: E0129 11:57:28.512435 2374 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Jan 29 11:57:28.526547 kubelet[2374]: E0129 11:57:28.526510 2374 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:57:28.693038 kubelet[2374]: W0129 11:57:28.692806 2374 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Jan 29 11:57:28.693038 kubelet[2374]: E0129 11:57:28.692892 2374 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Jan 29 11:57:28.706770 kubelet[2374]: I0129 11:57:28.706708 2374 policy_none.go:49] "None policy: Start" Jan 29 11:57:28.707890 kubelet[2374]: I0129 11:57:28.707856 2374 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:57:28.707890 kubelet[2374]: I0129 11:57:28.707891 2374 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:57:28.802417 kubelet[2374]: I0129 11:57:28.802370 2374 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:57:28.802653 kubelet[2374]: I0129 11:57:28.802595 2374 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:57:28.803226 kubelet[2374]: I0129 11:57:28.802747 2374 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:57:28.804414 kubelet[2374]: E0129 11:57:28.804388 2374 
eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 11:57:28.876794 kubelet[2374]: W0129 11:57:28.876701 2374 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Jan 29 11:57:28.876794 kubelet[2374]: E0129 11:57:28.876791 2374 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Jan 29 11:57:29.059487 kubelet[2374]: W0129 11:57:29.059249 2374 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Jan 29 11:57:29.059487 kubelet[2374]: E0129 11:57:29.059366 2374 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Jan 29 11:57:29.206695 kubelet[2374]: E0129 11:57:29.206593 2374 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="1.6s" Jan 29 11:57:29.262829 kubelet[2374]: W0129 11:57:29.262773 2374 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Jan 29 11:57:29.262829 kubelet[2374]: E0129 11:57:29.262818 2374 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Jan 29 11:57:29.314978 kubelet[2374]: I0129 11:57:29.314811 2374 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:57:29.315331 kubelet[2374]: E0129 11:57:29.315268 2374 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Jan 29 11:57:29.327682 kubelet[2374]: I0129 11:57:29.327591 2374 topology_manager.go:215] "Topology Admit Handler" podUID="2ea11ed5d3e6fdaae034004777746334" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 29 11:57:29.328950 kubelet[2374]: I0129 11:57:29.328906 2374 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 29 11:57:29.329679 kubelet[2374]: I0129 11:57:29.329646 2374 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 29 11:57:29.413674 kubelet[2374]: I0129 11:57:29.413590 2374 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:57:29.413674 kubelet[2374]: I0129 11:57:29.413661 2374 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:57:29.413674 kubelet[2374]: I0129 11:57:29.413681 2374 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:57:29.414304 kubelet[2374]: I0129 11:57:29.413698 2374 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ea11ed5d3e6fdaae034004777746334-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2ea11ed5d3e6fdaae034004777746334\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:57:29.414304 kubelet[2374]: I0129 11:57:29.413712 2374 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:57:29.414304 kubelet[2374]: I0129 11:57:29.413728 2374 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:57:29.414304 kubelet[2374]: I0129 11:57:29.413743 2374 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:57:29.414304 kubelet[2374]: I0129 11:57:29.413756 2374 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ea11ed5d3e6fdaae034004777746334-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2ea11ed5d3e6fdaae034004777746334\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:57:29.414481 kubelet[2374]: I0129 11:57:29.413769 2374 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ea11ed5d3e6fdaae034004777746334-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2ea11ed5d3e6fdaae034004777746334\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:57:29.634086 kubelet[2374]: E0129 11:57:29.633902 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:29.635026 kubelet[2374]: E0129 11:57:29.634812 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:29.635181 containerd[1579]: time="2025-01-29T11:57:29.634726858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2ea11ed5d3e6fdaae034004777746334,Namespace:kube-system,Attempt:0,}" Jan 29 11:57:29.635642 containerd[1579]: time="2025-01-29T11:57:29.635332680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 29 11:57:29.636523 kubelet[2374]: E0129 11:57:29.636491 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:29.636953 containerd[1579]: time="2025-01-29T11:57:29.636906859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 29 11:57:29.856755 kubelet[2374]: E0129 11:57:29.856711 2374 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.92:6443: connect: connection refused Jan 29 11:57:30.495247 kubelet[2374]: W0129 11:57:30.495197 2374 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Jan 29 11:57:30.495247 kubelet[2374]: E0129 11:57:30.495248 2374 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Jan 29 11:57:30.808281 kubelet[2374]: E0129 11:57:30.808069 2374 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="3.2s" Jan 29 11:57:30.917397 kubelet[2374]: I0129 11:57:30.917314 2374 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:57:30.917728 kubelet[2374]: E0129 11:57:30.917704 2374 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Jan 29 11:57:31.067406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1392544083.mount: Deactivated successfully. 
Jan 29 11:57:31.072611 containerd[1579]: time="2025-01-29T11:57:31.072554102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:57:31.074520 containerd[1579]: time="2025-01-29T11:57:31.074477400Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:57:31.075660 containerd[1579]: time="2025-01-29T11:57:31.075588455Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:57:31.076726 containerd[1579]: time="2025-01-29T11:57:31.076697125Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:57:31.077737 containerd[1579]: time="2025-01-29T11:57:31.077695905Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:57:31.078088 containerd[1579]: time="2025-01-29T11:57:31.078048079Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:57:31.079077 containerd[1579]: time="2025-01-29T11:57:31.079014006Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 11:57:31.081758 containerd[1579]: time="2025-01-29T11:57:31.081726333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:57:31.083387 containerd[1579]: time="2025-01-29T11:57:31.083362101Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.448412116s" Jan 29 11:57:31.084029 containerd[1579]: time="2025-01-29T11:57:31.083996505Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.447017718s" Jan 29 11:57:31.084834 containerd[1579]: time="2025-01-29T11:57:31.084787548Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.449354725s" Jan 29 11:57:31.365781 containerd[1579]: time="2025-01-29T11:57:31.363154151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:57:31.365781 containerd[1579]: time="2025-01-29T11:57:31.363423166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:57:31.365781 containerd[1579]: time="2025-01-29T11:57:31.363454405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:31.365781 containerd[1579]: time="2025-01-29T11:57:31.363695316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:31.377854 containerd[1579]: time="2025-01-29T11:57:31.377685801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:57:31.377854 containerd[1579]: time="2025-01-29T11:57:31.377757599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:57:31.377854 containerd[1579]: time="2025-01-29T11:57:31.377773609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:31.378034 containerd[1579]: time="2025-01-29T11:57:31.377894931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:31.382504 containerd[1579]: time="2025-01-29T11:57:31.382169456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:57:31.382504 containerd[1579]: time="2025-01-29T11:57:31.382268726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:57:31.382504 containerd[1579]: time="2025-01-29T11:57:31.382391631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:31.382663 containerd[1579]: time="2025-01-29T11:57:31.382636700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:31.494118 containerd[1579]: time="2025-01-29T11:57:31.494073153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2ea11ed5d3e6fdaae034004777746334,Namespace:kube-system,Attempt:0,} returns sandbox id \"0fe82942faa77d58b0eeae48350584537065bd167cf3e312ce598e782a4730d5\"" Jan 29 11:57:31.497653 kubelet[2374]: E0129 11:57:31.497631 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:31.501564 containerd[1579]: time="2025-01-29T11:57:31.501520866Z" level=info msg="CreateContainer within sandbox \"0fe82942faa77d58b0eeae48350584537065bd167cf3e312ce598e782a4730d5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:57:31.509868 containerd[1579]: time="2025-01-29T11:57:31.509809256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"96ed083448547c04696897a48cb74ca37f2ceb8b0d55d215586135cd4538d22c\"" Jan 29 11:57:31.510680 kubelet[2374]: E0129 11:57:31.510647 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:31.513716 containerd[1579]: time="2025-01-29T11:57:31.513577373Z" level=info msg="CreateContainer within sandbox \"96ed083448547c04696897a48cb74ca37f2ceb8b0d55d215586135cd4538d22c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:57:31.519111 containerd[1579]: time="2025-01-29T11:57:31.519072953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4553567aa985ed3794280d62aefe7e99c6938fb75b6c5c10004bd26f469b558c\"" Jan 29 11:57:31.520296 kubelet[2374]: E0129 11:57:31.520261 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:31.522584 containerd[1579]: time="2025-01-29T11:57:31.522541947Z" level=info msg="CreateContainer within sandbox \"4553567aa985ed3794280d62aefe7e99c6938fb75b6c5c10004bd26f469b558c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:57:31.553630 containerd[1579]: time="2025-01-29T11:57:31.553542433Z" level=info msg="CreateContainer within sandbox \"0fe82942faa77d58b0eeae48350584537065bd167cf3e312ce598e782a4730d5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"edf61e2459522db70862038bc814fa3ff3a01c8f48b3da844040f68c93a4c239\"" Jan 29 11:57:31.554502 containerd[1579]: time="2025-01-29T11:57:31.554453115Z" level=info msg="StartContainer for \"edf61e2459522db70862038bc814fa3ff3a01c8f48b3da844040f68c93a4c239\"" Jan 29 11:57:31.563771 containerd[1579]: time="2025-01-29T11:57:31.563661204Z" level=info msg="CreateContainer within sandbox \"96ed083448547c04696897a48cb74ca37f2ceb8b0d55d215586135cd4538d22c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"38927944cb268dbc53504b26db53e184f218060912374da4ecf6ef4e0202117a\"" Jan 29 11:57:31.564732 containerd[1579]: time="2025-01-29T11:57:31.564687938Z" level=info msg="StartContainer for 
\"38927944cb268dbc53504b26db53e184f218060912374da4ecf6ef4e0202117a\"" Jan 29 11:57:31.566408 containerd[1579]: time="2025-01-29T11:57:31.566347792Z" level=info msg="CreateContainer within sandbox \"4553567aa985ed3794280d62aefe7e99c6938fb75b6c5c10004bd26f469b558c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8b8f3f42a087df16e100533ec3b0fa011b0de10e5974b0d1565d05c551d3623a\"" Jan 29 11:57:31.567036 containerd[1579]: time="2025-01-29T11:57:31.567005420Z" level=info msg="StartContainer for \"8b8f3f42a087df16e100533ec3b0fa011b0de10e5974b0d1565d05c551d3623a\"" Jan 29 11:57:31.690781 containerd[1579]: time="2025-01-29T11:57:31.689877312Z" level=info msg="StartContainer for \"edf61e2459522db70862038bc814fa3ff3a01c8f48b3da844040f68c93a4c239\" returns successfully" Jan 29 11:57:31.690781 containerd[1579]: time="2025-01-29T11:57:31.689974228Z" level=info msg="StartContainer for \"38927944cb268dbc53504b26db53e184f218060912374da4ecf6ef4e0202117a\" returns successfully" Jan 29 11:57:31.690781 containerd[1579]: time="2025-01-29T11:57:31.690027049Z" level=info msg="StartContainer for \"8b8f3f42a087df16e100533ec3b0fa011b0de10e5974b0d1565d05c551d3623a\" returns successfully" Jan 29 11:57:31.741635 kubelet[2374]: W0129 11:57:31.741411 2374 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Jan 29 11:57:31.741635 kubelet[2374]: E0129 11:57:31.741501 2374 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Jan 29 11:57:31.835503 kubelet[2374]: E0129 11:57:31.835461 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:31.837305 kubelet[2374]: E0129 11:57:31.837281 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:31.839462 kubelet[2374]: E0129 11:57:31.839432 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:32.841546 kubelet[2374]: E0129 11:57:32.841505 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:33.239698 kubelet[2374]: E0129 11:57:33.239554 2374 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 29 11:57:33.594916 kubelet[2374]: E0129 11:57:33.594772 2374 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 29 11:57:34.028163 kubelet[2374]: E0129 11:57:34.027921 2374 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 11:57:34.119164 kubelet[2374]: I0129 11:57:34.119110 2374 
kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:57:34.127349 kubelet[2374]: I0129 11:57:34.127311 2374 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 29 11:57:34.133695 kubelet[2374]: E0129 11:57:34.133665 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:57:34.204478 kubelet[2374]: E0129 11:57:34.204428 2374 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:34.233790 kubelet[2374]: E0129 11:57:34.233740 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:57:34.334049 kubelet[2374]: E0129 11:57:34.333904 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:57:34.434661 kubelet[2374]: E0129 11:57:34.434583 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:57:34.535230 kubelet[2374]: E0129 11:57:34.535152 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:57:34.636004 kubelet[2374]: E0129 11:57:34.635835 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:57:34.736561 kubelet[2374]: E0129 11:57:34.736505 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:57:34.837676 kubelet[2374]: E0129 11:57:34.837577 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:57:34.938240 kubelet[2374]: E0129 11:57:34.938061 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:57:35.038813 kubelet[2374]: E0129 11:57:35.038739 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:57:35.139435 kubelet[2374]: E0129 11:57:35.139375 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:57:35.155901 systemd[1]: Reloading requested from client PID 2650 ('systemctl') (unit session-7.scope)... Jan 29 11:57:35.155917 systemd[1]: Reloading... Jan 29 11:57:35.237722 zram_generator::config[2689]: No configuration found. Jan 29 11:57:35.239977 kubelet[2374]: E0129 11:57:35.239941 2374 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:57:35.366448 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:57:35.451182 systemd[1]: Reloading finished in 294 ms. Jan 29 11:57:35.487788 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:57:35.487942 kubelet[2374]: I0129 11:57:35.487798 2374 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:57:35.504252 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:57:35.504741 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:57:35.515803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:57:35.660571 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:57:35.666177 (kubelet)[2744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:57:35.726810 kubelet[2744]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:57:35.726810 kubelet[2744]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:57:35.726810 kubelet[2744]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:57:35.727247 kubelet[2744]: I0129 11:57:35.726854 2744 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:57:35.731488 kubelet[2744]: I0129 11:57:35.731449 2744 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:57:35.731488 kubelet[2744]: I0129 11:57:35.731476 2744 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:57:35.731745 kubelet[2744]: I0129 11:57:35.731724 2744 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:57:35.732955 kubelet[2744]: I0129 11:57:35.732926 2744 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 11:57:35.735728 kubelet[2744]: I0129 11:57:35.735671 2744 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:57:35.744010 kubelet[2744]: I0129 11:57:35.743980 2744 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:57:35.744664 kubelet[2744]: I0129 11:57:35.744623 2744 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:57:35.744861 kubelet[2744]: I0129 11:57:35.744662 2744 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:57:35.744946 kubelet[2744]: I0129 11:57:35.744882 2744 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:57:35.744946 kubelet[2744]: I0129 11:57:35.744894 2744 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:57:35.744985 kubelet[2744]: I0129 11:57:35.744954 2744 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:57:35.745070 kubelet[2744]: I0129 11:57:35.745054 2744 kubelet.go:400] "Attempting to sync node with API server" Jan 29 11:57:35.745070 kubelet[2744]: I0129 11:57:35.745068 2744 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:57:35.745121 kubelet[2744]: I0129 11:57:35.745096 2744 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:57:35.745121 kubelet[2744]: I0129 11:57:35.745114 2744 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:57:35.747735 kubelet[2744]: I0129 11:57:35.747698 2744 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 11:57:35.747929 kubelet[2744]: I0129 11:57:35.747899 2744 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:57:35.748655 kubelet[2744]: I0129 11:57:35.748390 2744 server.go:1264] "Started kubelet" Jan 29 11:57:35.750778 kubelet[2744]: I0129 11:57:35.750293 2744 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:57:35.750778 kubelet[2744]: I0129 11:57:35.750304 2744 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:57:35.754018 kubelet[2744]: I0129 11:57:35.753988 2744 server.go:455] "Adding debug handlers to 
kubelet server" Jan 29 11:57:35.759240 kubelet[2744]: I0129 11:57:35.750253 2744 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:57:35.759240 kubelet[2744]: I0129 11:57:35.758545 2744 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:57:35.759240 kubelet[2744]: I0129 11:57:35.758697 2744 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 11:57:35.759240 kubelet[2744]: I0129 11:57:35.758962 2744 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:57:35.759400 kubelet[2744]: I0129 11:57:35.759318 2744 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:57:35.763848 kubelet[2744]: I0129 11:57:35.763759 2744 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:57:35.763926 kubelet[2744]: I0129 11:57:35.763880 2744 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:57:35.767130 kubelet[2744]: E0129 11:57:35.766292 2744 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:57:35.767130 kubelet[2744]: I0129 11:57:35.766538 2744 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:57:35.771435 kubelet[2744]: I0129 11:57:35.771390 2744 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:57:35.772989 kubelet[2744]: I0129 11:57:35.772735 2744 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:57:35.772989 kubelet[2744]: I0129 11:57:35.772763 2744 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:57:35.772989 kubelet[2744]: I0129 11:57:35.772781 2744 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 11:57:35.772989 kubelet[2744]: E0129 11:57:35.772823 2744 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:57:35.830162 kubelet[2744]: I0129 11:57:35.830075 2744 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:57:35.830162 kubelet[2744]: I0129 11:57:35.830094 2744 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:57:35.830162 kubelet[2744]: I0129 11:57:35.830113 2744 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:57:35.830394 kubelet[2744]: I0129 11:57:35.830305 2744 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:57:35.830394 kubelet[2744]: I0129 11:57:35.830318 2744 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:57:35.830394 kubelet[2744]: I0129 11:57:35.830341 2744 policy_none.go:49] "None policy: Start" Jan 29 11:57:35.831024 kubelet[2744]: I0129 11:57:35.831009 2744 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:57:35.831071 kubelet[2744]: I0129 11:57:35.831030 2744 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:57:35.831184 kubelet[2744]: I0129 11:57:35.831171 2744 state_mem.go:75] "Updated machine memory state" Jan 29 11:57:35.832796 kubelet[2744]: I0129 11:57:35.832766 2744 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:57:35.832979 
kubelet[2744]: I0129 11:57:35.832946 2744 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:57:35.833243 kubelet[2744]: I0129 11:57:35.833051 2744 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:57:35.864512 kubelet[2744]: I0129 11:57:35.864472 2744 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:57:35.873745 kubelet[2744]: I0129 11:57:35.873693 2744 topology_manager.go:215] "Topology Admit Handler" podUID="2ea11ed5d3e6fdaae034004777746334" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 29 11:57:35.873860 kubelet[2744]: I0129 11:57:35.873802 2744 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 29 11:57:35.873860 kubelet[2744]: I0129 11:57:35.873850 2744 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 29 11:57:36.061153 kubelet[2744]: I0129 11:57:36.060995 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ea11ed5d3e6fdaae034004777746334-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2ea11ed5d3e6fdaae034004777746334\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:57:36.061153 kubelet[2744]: I0129 11:57:36.061039 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:57:36.061153 kubelet[2744]: I0129 11:57:36.061069 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:57:36.061153 kubelet[2744]: I0129 11:57:36.061102 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ea11ed5d3e6fdaae034004777746334-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2ea11ed5d3e6fdaae034004777746334\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:57:36.061153 kubelet[2744]: I0129 11:57:36.061121 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:57:36.061424 kubelet[2744]: I0129 11:57:36.061141 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:57:36.061424 kubelet[2744]: I0129 11:57:36.061162 2744 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:57:36.061424 kubelet[2744]: I0129 11:57:36.061182 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:57:36.061424 kubelet[2744]: I0129 11:57:36.061202 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ea11ed5d3e6fdaae034004777746334-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2ea11ed5d3e6fdaae034004777746334\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:57:36.069616 kubelet[2744]: I0129 11:57:36.069566 2744 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 29 11:57:36.069713 kubelet[2744]: I0129 11:57:36.069701 2744 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 29 11:57:36.330872 kubelet[2744]: E0129 11:57:36.330686 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:36.330872 kubelet[2744]: E0129 11:57:36.330783 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:36.331041 kubelet[2744]: E0129 11:57:36.331005 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:36.746195 kubelet[2744]: I0129 11:57:36.746069 2744 apiserver.go:52] "Watching apiserver" Jan 29 11:57:37.454054 kubelet[2744]: I0129 11:57:37.453634 2744 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:57:37.454054 kubelet[2744]: E0129 11:57:37.453712 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:37.454054 kubelet[2744]: E0129 11:57:37.453826 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:37.454354 kubelet[2744]: E0129 11:57:37.454333 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:37.569376 kubelet[2744]: I0129 11:57:37.569266 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.569248944 podStartE2EDuration="2.569248944s" podCreationTimestamp="2025-01-29 11:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:57:37.555483154 
+0000 UTC m=+1.885130427" watchObservedRunningTime="2025-01-29 11:57:37.569248944 +0000 UTC m=+1.898896227" Jan 29 11:57:37.666880 kubelet[2744]: I0129 11:57:37.666796 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.666771766 podStartE2EDuration="2.666771766s" podCreationTimestamp="2025-01-29 11:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:57:37.666444184 +0000 UTC m=+1.996091467" watchObservedRunningTime="2025-01-29 11:57:37.666771766 +0000 UTC m=+1.996419049" Jan 29 11:57:37.667068 kubelet[2744]: I0129 11:57:37.666936 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.666932111 podStartE2EDuration="2.666932111s" podCreationTimestamp="2025-01-29 11:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:57:37.655035944 +0000 UTC m=+1.984683227" watchObservedRunningTime="2025-01-29 11:57:37.666932111 +0000 UTC m=+1.996579394" Jan 29 11:57:37.742741 systemd-resolved[1455]: Under memory pressure, flushing caches. Jan 29 11:57:37.744958 systemd-journald[1154]: Under memory pressure, flushing caches. Jan 29 11:57:37.742787 systemd-resolved[1455]: Flushed all caches. Jan 29 11:57:37.821957 kubelet[2744]: E0129 11:57:37.821900 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:38.688715 kubelet[2744]: E0129 11:57:38.688639 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:39.887762 update_engine[1566]: I20250129 11:57:39.887674 1566 update_attempter.cc:509] Updating boot flags... Jan 29 11:57:39.918644 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2818) Jan 29 11:57:39.973816 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2820) Jan 29 11:57:40.308049 kubelet[2744]: E0129 11:57:40.307983 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:41.700777 sudo[1771]: pam_unix(sudo:session): session closed for user root Jan 29 11:57:41.703452 sshd[1764]: pam_unix(sshd:session): session closed for user core Jan 29 11:57:41.708540 systemd[1]: sshd@6-10.0.0.92:22-10.0.0.1:36174.service: Deactivated successfully. Jan 29 11:57:41.711170 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:57:41.711948 systemd-logind[1565]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:57:41.713192 systemd-logind[1565]: Removed session 7. 
Jan 29 11:57:46.268203 kubelet[2744]: E0129 11:57:46.268165 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:46.820489 kubelet[2744]: E0129 11:57:46.820445 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:47.821692 kubelet[2744]: E0129 11:57:47.821651 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:48.692648 kubelet[2744]: E0129 11:57:48.692561 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:50.319333 kubelet[2744]: E0129 11:57:50.313672 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:50.376829 kubelet[2744]: I0129 11:57:50.376787 2744 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:57:50.377293 containerd[1579]: time="2025-01-29T11:57:50.377207701Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:57:50.377645 kubelet[2744]: I0129 11:57:50.377362 2744 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:57:51.459644 kubelet[2744]: I0129 11:57:51.457084 2744 topology_manager.go:215] "Topology Admit Handler" podUID="5f15b396-bc12-4eec-9023-5f41327fc1fb" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-7sbvt" Jan 29 11:57:51.475320 kubelet[2744]: I0129 11:57:51.475257 2744 topology_manager.go:215] "Topology Admit Handler" podUID="0d74c1b3-5e3c-4954-8c7f-b2d95058bd38" podNamespace="kube-system" podName="kube-proxy-9bhsz" Jan 29 11:57:51.545690 kubelet[2744]: I0129 11:57:51.545647 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5f15b396-bc12-4eec-9023-5f41327fc1fb-var-lib-calico\") pod \"tigera-operator-7bc55997bb-7sbvt\" (UID: \"5f15b396-bc12-4eec-9023-5f41327fc1fb\") " pod="tigera-operator/tigera-operator-7bc55997bb-7sbvt" Jan 29 11:57:51.545690 kubelet[2744]: I0129 11:57:51.545688 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0d74c1b3-5e3c-4954-8c7f-b2d95058bd38-kube-proxy\") pod \"kube-proxy-9bhsz\" (UID: \"0d74c1b3-5e3c-4954-8c7f-b2d95058bd38\") " pod="kube-system/kube-proxy-9bhsz" Jan 29 11:57:51.545690 kubelet[2744]: I0129 11:57:51.545712 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d74c1b3-5e3c-4954-8c7f-b2d95058bd38-xtables-lock\") pod \"kube-proxy-9bhsz\" (UID: \"0d74c1b3-5e3c-4954-8c7f-b2d95058bd38\") " pod="kube-system/kube-proxy-9bhsz" Jan 29 11:57:51.545978 kubelet[2744]: I0129 11:57:51.545731 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/0d74c1b3-5e3c-4954-8c7f-b2d95058bd38-lib-modules\") pod \"kube-proxy-9bhsz\" (UID: \"0d74c1b3-5e3c-4954-8c7f-b2d95058bd38\") " pod="kube-system/kube-proxy-9bhsz" Jan 29 11:57:51.545978 kubelet[2744]: I0129 11:57:51.545769 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brpkc\" (UniqueName: \"kubernetes.io/projected/5f15b396-bc12-4eec-9023-5f41327fc1fb-kube-api-access-brpkc\") pod \"tigera-operator-7bc55997bb-7sbvt\" (UID: \"5f15b396-bc12-4eec-9023-5f41327fc1fb\") " pod="tigera-operator/tigera-operator-7bc55997bb-7sbvt" Jan 29 11:57:51.545978 kubelet[2744]: I0129 11:57:51.545796 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4hkj\" (UniqueName: \"kubernetes.io/projected/0d74c1b3-5e3c-4954-8c7f-b2d95058bd38-kube-api-access-n4hkj\") pod \"kube-proxy-9bhsz\" (UID: \"0d74c1b3-5e3c-4954-8c7f-b2d95058bd38\") " pod="kube-system/kube-proxy-9bhsz" Jan 29 11:57:51.763362 containerd[1579]: time="2025-01-29T11:57:51.763291993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-7sbvt,Uid:5f15b396-bc12-4eec-9023-5f41327fc1fb,Namespace:tigera-operator,Attempt:0,}" Jan 29 11:57:51.781033 kubelet[2744]: E0129 11:57:51.780999 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:51.782639 containerd[1579]: time="2025-01-29T11:57:51.781328988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9bhsz,Uid:0d74c1b3-5e3c-4954-8c7f-b2d95058bd38,Namespace:kube-system,Attempt:0,}" Jan 29 11:57:51.799561 containerd[1579]: time="2025-01-29T11:57:51.799250075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:57:51.799561 containerd[1579]: time="2025-01-29T11:57:51.799367295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:57:51.799561 containerd[1579]: time="2025-01-29T11:57:51.799392543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:51.800258 containerd[1579]: time="2025-01-29T11:57:51.800185297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:51.814455 containerd[1579]: time="2025-01-29T11:57:51.814028496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:57:51.814455 containerd[1579]: time="2025-01-29T11:57:51.814153211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:57:51.814455 containerd[1579]: time="2025-01-29T11:57:51.814174001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:51.814455 containerd[1579]: time="2025-01-29T11:57:51.814295740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
Jan 29 11:57:51.851374 containerd[1579]: time="2025-01-29T11:57:51.851168556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9bhsz,Uid:0d74c1b3-5e3c-4954-8c7f-b2d95058bd38,Namespace:kube-system,Attempt:0,} returns sandbox id \"9847d68b2eed3002f4bdf6b0afed63ca130dcb295b9720138930f2bfc3bc3aef\""
Jan 29 11:57:51.853160 kubelet[2744]: E0129 11:57:51.853136 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:57:51.857240 containerd[1579]: time="2025-01-29T11:57:51.857192252Z" level=info msg="CreateContainer within sandbox \"9847d68b2eed3002f4bdf6b0afed63ca130dcb295b9720138930f2bfc3bc3aef\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 11:57:51.861961 containerd[1579]: time="2025-01-29T11:57:51.861913764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-7sbvt,Uid:5f15b396-bc12-4eec-9023-5f41327fc1fb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d2208278b0d57495fdcc731377531ce9658d22b9b7340b5babe9b6350077b9ab\""
Jan 29 11:57:51.863630 containerd[1579]: time="2025-01-29T11:57:51.863580095Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 29 11:57:51.875939 containerd[1579]: time="2025-01-29T11:57:51.875902717Z" level=info msg="CreateContainer within sandbox \"9847d68b2eed3002f4bdf6b0afed63ca130dcb295b9720138930f2bfc3bc3aef\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"84b2d7ac9a0132a6bf4fc0304774089479ae99e393ee04034c530fe59dc014bc\""
Jan 29 11:57:51.876454 containerd[1579]: time="2025-01-29T11:57:51.876403050Z" level=info msg="StartContainer for \"84b2d7ac9a0132a6bf4fc0304774089479ae99e393ee04034c530fe59dc014bc\""
Jan 29 11:57:51.943945 containerd[1579]: time="2025-01-29T11:57:51.943554330Z" level=info msg="StartContainer for \"84b2d7ac9a0132a6bf4fc0304774089479ae99e393ee04034c530fe59dc014bc\" returns successfully"
Jan 29 11:57:52.829145 kubelet[2744]: E0129 11:57:52.829100 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:57:53.112503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1864809154.mount: Deactivated successfully.
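The sandbox and container lifecycle above (RunPodSandbox, then CreateContainer, then StartContainer) is the standard CRI call sequence between the kubelet and containerd. A rough, self-contained sketch of those three calls against containerd's CRI socket using the published cri-api client; the socket path, image tag, and the omission of log paths and mounts are simplifying assumptions, not details from this node:

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial containerd's CRI endpoint (conventional path on Flatcar).
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // 1. RunPodSandbox: create the pod's namespace holder ("pause" sandbox).
        sbConfig := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-proxy-9bhsz",
                Uid:       "0d74c1b3-5e3c-4954-8c7f-b2d95058bd38",
                Namespace: "kube-system",
                Attempt:   0,
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sbConfig})
        if err != nil {
            panic(err)
        }

        // 2. CreateContainer inside that sandbox, then 3. StartContainer.
        cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
                Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.30.0"}, // tag illustrative
            },
            SandboxConfig: sbConfig,
        })
        if err != nil {
            panic(err)
        }
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
            panic(err)
        }
        fmt.Println("started container", cc.ContainerId, "in sandbox", sb.PodSandboxId)
    }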
Jan 29 11:57:54.020634 containerd[1579]: time="2025-01-29T11:57:54.020540151Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:57:54.021656 containerd[1579]: time="2025-01-29T11:57:54.021598554Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497"
Jan 29 11:57:54.022880 containerd[1579]: time="2025-01-29T11:57:54.022850352Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:57:54.025031 containerd[1579]: time="2025-01-29T11:57:54.024981085Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:57:54.025618 containerd[1579]: time="2025-01-29T11:57:54.025565676Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.16192647s"
Jan 29 11:57:54.025618 containerd[1579]: time="2025-01-29T11:57:54.025594170Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Jan 29 11:57:54.027873 containerd[1579]: time="2025-01-29T11:57:54.027843516Z" level=info msg="CreateContainer within sandbox \"d2208278b0d57495fdcc731377531ce9658d22b9b7340b5babe9b6350077b9ab\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 29 11:57:54.073415 containerd[1579]: time="2025-01-29T11:57:54.073341080Z" level=info msg="CreateContainer within sandbox \"d2208278b0d57495fdcc731377531ce9658d22b9b7340b5babe9b6350077b9ab\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"398560295b351d443876c579a1c20adb884324242bdd1f1f8c0dbb59f440a4b0\""
Jan 29 11:57:54.074209 containerd[1579]: time="2025-01-29T11:57:54.073992567Z" level=info msg="StartContainer for \"398560295b351d443876c579a1c20adb884324242bdd1f1f8c0dbb59f440a4b0\""
Jan 29 11:57:54.192856 containerd[1579]: time="2025-01-29T11:57:54.192783899Z" level=info msg="StartContainer for \"398560295b351d443876c579a1c20adb884324242bdd1f1f8c0dbb59f440a4b0\" returns successfully"
Jan 29 11:57:54.843711 kubelet[2744]: I0129 11:57:54.843643 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-7sbvt" podStartSLOduration=1.680127445 podStartE2EDuration="3.843623613s" podCreationTimestamp="2025-01-29 11:57:51 +0000 UTC" firstStartedPulling="2025-01-29 11:57:51.863030288 +0000 UTC m=+16.192677571" lastFinishedPulling="2025-01-29 11:57:54.026526456 +0000 UTC m=+18.356173739" observedRunningTime="2025-01-29 11:57:54.843458943 +0000 UTC m=+19.173106236" watchObservedRunningTime="2025-01-29 11:57:54.843623613 +0000 UTC m=+19.173270896"
Jan 29 11:57:54.844357 kubelet[2744]: I0129 11:57:54.843755 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9bhsz" podStartSLOduration=3.843748037 podStartE2EDuration="3.843748037s" podCreationTimestamp="2025-01-29 11:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:57:52.837183214 +0000 UTC m=+17.166830497" watchObservedRunningTime="2025-01-29 11:57:54.843748037 +0000 UTC m=+19.173395320"
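The pod_startup_latency_tracker entries encode a useful relationship: podStartSLOduration is the end-to-end startup time minus the time spent pulling images, which is why kube-proxy, whose image was already present, reports SLO equal to E2E and zero-value pull timestamps. The tigera-operator numbers above reproduce exactly; a small check, using only figures from this log:

    package main

    import "fmt"

    func main() {
        // Figures from the tigera-operator entries above (monotonic m=+ offsets).
        e2e := 3.843623613                  // podStartE2EDuration, seconds
        pull := 18.356173739 - 16.192677571 // lastFinishedPulling - firstStartedPulling

        fmt.Printf("image pull window:   %.9fs\n", pull)    // 2.163496168s; containerd itself reported "in 2.16192647s"
        fmt.Printf("podStartSLOduration: %.9f\n", e2e-pull) // 1.680127445, matching the log

        // Pull throughput from "bytes read=21762497" over the reported 2.16192647s.
        fmt.Printf("pull throughput:     ~%.1f MiB/s\n", 21762497/2.16192647/(1<<20))
    }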
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:57:52.837183214 +0000 UTC m=+17.166830497" watchObservedRunningTime="2025-01-29 11:57:54.843748037 +0000 UTC m=+19.173395320" Jan 29 11:57:57.052988 kubelet[2744]: I0129 11:57:57.052936 2744 topology_manager.go:215] "Topology Admit Handler" podUID="e8299335-919e-4583-b73d-07cf47fb238c" podNamespace="calico-system" podName="calico-typha-55488b7964-gsdtk" Jan 29 11:57:57.079107 kubelet[2744]: I0129 11:57:57.078029 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74qx5\" (UniqueName: \"kubernetes.io/projected/e8299335-919e-4583-b73d-07cf47fb238c-kube-api-access-74qx5\") pod \"calico-typha-55488b7964-gsdtk\" (UID: \"e8299335-919e-4583-b73d-07cf47fb238c\") " pod="calico-system/calico-typha-55488b7964-gsdtk" Jan 29 11:57:57.079107 kubelet[2744]: I0129 11:57:57.078070 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e8299335-919e-4583-b73d-07cf47fb238c-typha-certs\") pod \"calico-typha-55488b7964-gsdtk\" (UID: \"e8299335-919e-4583-b73d-07cf47fb238c\") " pod="calico-system/calico-typha-55488b7964-gsdtk" Jan 29 11:57:57.079107 kubelet[2744]: I0129 11:57:57.078087 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8299335-919e-4583-b73d-07cf47fb238c-tigera-ca-bundle\") pod \"calico-typha-55488b7964-gsdtk\" (UID: \"e8299335-919e-4583-b73d-07cf47fb238c\") " pod="calico-system/calico-typha-55488b7964-gsdtk" Jan 29 11:57:57.131461 kubelet[2744]: I0129 11:57:57.130408 2744 topology_manager.go:215] "Topology Admit Handler" podUID="44c639fd-53e4-475a-a83b-47a1d4d75ff1" podNamespace="calico-system" podName="calico-node-55n9x" Jan 29 11:57:57.178894 kubelet[2744]: I0129 11:57:57.178834 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/44c639fd-53e4-475a-a83b-47a1d4d75ff1-policysync\") pod \"calico-node-55n9x\" (UID: \"44c639fd-53e4-475a-a83b-47a1d4d75ff1\") " pod="calico-system/calico-node-55n9x" Jan 29 11:57:57.178894 kubelet[2744]: I0129 11:57:57.178893 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44c639fd-53e4-475a-a83b-47a1d4d75ff1-tigera-ca-bundle\") pod \"calico-node-55n9x\" (UID: \"44c639fd-53e4-475a-a83b-47a1d4d75ff1\") " pod="calico-system/calico-node-55n9x" Jan 29 11:57:57.179109 kubelet[2744]: I0129 11:57:57.178916 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/44c639fd-53e4-475a-a83b-47a1d4d75ff1-cni-net-dir\") pod \"calico-node-55n9x\" (UID: \"44c639fd-53e4-475a-a83b-47a1d4d75ff1\") " pod="calico-system/calico-node-55n9x" Jan 29 11:57:57.179109 kubelet[2744]: I0129 11:57:57.178942 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k56p9\" (UniqueName: \"kubernetes.io/projected/44c639fd-53e4-475a-a83b-47a1d4d75ff1-kube-api-access-k56p9\") pod \"calico-node-55n9x\" (UID: \"44c639fd-53e4-475a-a83b-47a1d4d75ff1\") " pod="calico-system/calico-node-55n9x" Jan 29 11:57:57.179109 kubelet[2744]: I0129 11:57:57.178963 2744 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44c639fd-53e4-475a-a83b-47a1d4d75ff1-lib-modules\") pod \"calico-node-55n9x\" (UID: \"44c639fd-53e4-475a-a83b-47a1d4d75ff1\") " pod="calico-system/calico-node-55n9x" Jan 29 11:57:57.179109 kubelet[2744]: I0129 11:57:57.178981 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44c639fd-53e4-475a-a83b-47a1d4d75ff1-xtables-lock\") pod \"calico-node-55n9x\" (UID: \"44c639fd-53e4-475a-a83b-47a1d4d75ff1\") " pod="calico-system/calico-node-55n9x" Jan 29 11:57:57.179109 kubelet[2744]: I0129 11:57:57.179013 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/44c639fd-53e4-475a-a83b-47a1d4d75ff1-var-lib-calico\") pod \"calico-node-55n9x\" (UID: \"44c639fd-53e4-475a-a83b-47a1d4d75ff1\") " pod="calico-system/calico-node-55n9x" Jan 29 11:57:57.179231 kubelet[2744]: I0129 11:57:57.179031 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/44c639fd-53e4-475a-a83b-47a1d4d75ff1-cni-bin-dir\") pod \"calico-node-55n9x\" (UID: \"44c639fd-53e4-475a-a83b-47a1d4d75ff1\") " pod="calico-system/calico-node-55n9x" Jan 29 11:57:57.179231 kubelet[2744]: I0129 11:57:57.179048 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/44c639fd-53e4-475a-a83b-47a1d4d75ff1-flexvol-driver-host\") pod \"calico-node-55n9x\" (UID: \"44c639fd-53e4-475a-a83b-47a1d4d75ff1\") " pod="calico-system/calico-node-55n9x" Jan 29 11:57:57.179231 kubelet[2744]: I0129 11:57:57.179067 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/44c639fd-53e4-475a-a83b-47a1d4d75ff1-cni-log-dir\") pod \"calico-node-55n9x\" (UID: \"44c639fd-53e4-475a-a83b-47a1d4d75ff1\") " pod="calico-system/calico-node-55n9x" Jan 29 11:57:57.179231 kubelet[2744]: I0129 11:57:57.179110 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/44c639fd-53e4-475a-a83b-47a1d4d75ff1-var-run-calico\") pod \"calico-node-55n9x\" (UID: \"44c639fd-53e4-475a-a83b-47a1d4d75ff1\") " pod="calico-system/calico-node-55n9x" Jan 29 11:57:57.179231 kubelet[2744]: I0129 11:57:57.179132 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/44c639fd-53e4-475a-a83b-47a1d4d75ff1-node-certs\") pod \"calico-node-55n9x\" (UID: \"44c639fd-53e4-475a-a83b-47a1d4d75ff1\") " pod="calico-system/calico-node-55n9x" Jan 29 11:57:57.282392 kubelet[2744]: E0129 11:57:57.282349 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.282392 kubelet[2744]: W0129 11:57:57.282378 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.282392 kubelet[2744]: E0129 11:57:57.282404 2744 plugins.go:730] "Error dynamically 
probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.283734 kubelet[2744]: E0129 11:57:57.283712 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.283802 kubelet[2744]: W0129 11:57:57.283733 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.283802 kubelet[2744]: E0129 11:57:57.283753 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.289429 kubelet[2744]: E0129 11:57:57.289409 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.289429 kubelet[2744]: W0129 11:57:57.289426 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.289539 kubelet[2744]: E0129 11:57:57.289442 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.289671 kubelet[2744]: E0129 11:57:57.289658 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.289671 kubelet[2744]: W0129 11:57:57.289668 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.289756 kubelet[2744]: E0129 11:57:57.289677 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.289879 kubelet[2744]: E0129 11:57:57.289845 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.289879 kubelet[2744]: W0129 11:57:57.289858 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.289879 kubelet[2744]: E0129 11:57:57.289866 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.290118 kubelet[2744]: E0129 11:57:57.290085 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.290118 kubelet[2744]: W0129 11:57:57.290109 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.290255 kubelet[2744]: E0129 11:57:57.290131 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:57.290350 kubelet[2744]: E0129 11:57:57.290328 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.290350 kubelet[2744]: W0129 11:57:57.290342 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.290415 kubelet[2744]: E0129 11:57:57.290351 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.290556 kubelet[2744]: E0129 11:57:57.290530 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.290556 kubelet[2744]: W0129 11:57:57.290547 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.290619 kubelet[2744]: E0129 11:57:57.290577 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.290787 kubelet[2744]: E0129 11:57:57.290772 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.290787 kubelet[2744]: W0129 11:57:57.290785 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.290855 kubelet[2744]: E0129 11:57:57.290795 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.291013 kubelet[2744]: E0129 11:57:57.290999 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.291013 kubelet[2744]: W0129 11:57:57.291011 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.291081 kubelet[2744]: E0129 11:57:57.291022 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.324078 kubelet[2744]: E0129 11:57:57.323974 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.324078 kubelet[2744]: W0129 11:57:57.324001 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.324078 kubelet[2744]: E0129 11:57:57.324025 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:57.356255 kubelet[2744]: I0129 11:57:57.356208 2744 topology_manager.go:215] "Topology Admit Handler" podUID="4b8569b7-17f3-41f5-af84-56efb8c2c37a" podNamespace="calico-system" podName="csi-node-driver-dmc9g" Jan 29 11:57:57.356913 kubelet[2744]: E0129 11:57:57.356513 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dmc9g" podUID="4b8569b7-17f3-41f5-af84-56efb8c2c37a" Jan 29 11:57:57.366638 kubelet[2744]: E0129 11:57:57.366494 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:57.367876 containerd[1579]: time="2025-01-29T11:57:57.367451530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55488b7964-gsdtk,Uid:e8299335-919e-4583-b73d-07cf47fb238c,Namespace:calico-system,Attempt:0,}" Jan 29 11:57:57.379319 kubelet[2744]: E0129 11:57:57.379287 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.379319 kubelet[2744]: W0129 11:57:57.379311 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.379319 kubelet[2744]: E0129 11:57:57.379332 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.379542 kubelet[2744]: E0129 11:57:57.379528 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.379568 kubelet[2744]: W0129 11:57:57.379541 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.379568 kubelet[2744]: E0129 11:57:57.379554 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.379826 kubelet[2744]: E0129 11:57:57.379814 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.379826 kubelet[2744]: W0129 11:57:57.379825 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.379898 kubelet[2744]: E0129 11:57:57.379834 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:57.380065 kubelet[2744]: E0129 11:57:57.380050 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.380065 kubelet[2744]: W0129 11:57:57.380063 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.380131 kubelet[2744]: E0129 11:57:57.380072 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.380322 kubelet[2744]: E0129 11:57:57.380304 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.380371 kubelet[2744]: W0129 11:57:57.380325 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.380371 kubelet[2744]: E0129 11:57:57.380336 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.380531 kubelet[2744]: E0129 11:57:57.380512 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.380531 kubelet[2744]: W0129 11:57:57.380522 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.380531 kubelet[2744]: E0129 11:57:57.380531 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.380907 kubelet[2744]: E0129 11:57:57.380871 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.380907 kubelet[2744]: W0129 11:57:57.380891 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.380907 kubelet[2744]: E0129 11:57:57.380902 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.381116 kubelet[2744]: E0129 11:57:57.381104 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.381116 kubelet[2744]: W0129 11:57:57.381115 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.381194 kubelet[2744]: E0129 11:57:57.381126 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:57.381328 kubelet[2744]: E0129 11:57:57.381317 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.381328 kubelet[2744]: W0129 11:57:57.381327 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.381401 kubelet[2744]: E0129 11:57:57.381336 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.381525 kubelet[2744]: E0129 11:57:57.381512 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.381525 kubelet[2744]: W0129 11:57:57.381522 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.381586 kubelet[2744]: E0129 11:57:57.381529 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.381707 kubelet[2744]: E0129 11:57:57.381696 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.381743 kubelet[2744]: W0129 11:57:57.381706 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.381743 kubelet[2744]: E0129 11:57:57.381715 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.381906 kubelet[2744]: E0129 11:57:57.381884 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.381906 kubelet[2744]: W0129 11:57:57.381894 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.381906 kubelet[2744]: E0129 11:57:57.381903 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.382127 kubelet[2744]: E0129 11:57:57.382114 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.382171 kubelet[2744]: W0129 11:57:57.382133 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.382171 kubelet[2744]: E0129 11:57:57.382144 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:57.382330 kubelet[2744]: E0129 11:57:57.382320 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.382330 kubelet[2744]: W0129 11:57:57.382329 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.382403 kubelet[2744]: E0129 11:57:57.382336 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.382558 kubelet[2744]: E0129 11:57:57.382548 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.382558 kubelet[2744]: W0129 11:57:57.382556 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.382662 kubelet[2744]: E0129 11:57:57.382566 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.382781 kubelet[2744]: E0129 11:57:57.382768 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.382781 kubelet[2744]: W0129 11:57:57.382778 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.382819 kubelet[2744]: E0129 11:57:57.382786 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.382999 kubelet[2744]: E0129 11:57:57.382986 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.382999 kubelet[2744]: W0129 11:57:57.382996 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.383055 kubelet[2744]: E0129 11:57:57.383004 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.383202 kubelet[2744]: E0129 11:57:57.383191 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.383202 kubelet[2744]: W0129 11:57:57.383199 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.383266 kubelet[2744]: E0129 11:57:57.383206 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:57.383377 kubelet[2744]: E0129 11:57:57.383366 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.383377 kubelet[2744]: W0129 11:57:57.383375 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.383441 kubelet[2744]: E0129 11:57:57.383383 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.383814 kubelet[2744]: E0129 11:57:57.383549 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.383814 kubelet[2744]: W0129 11:57:57.383559 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.383814 kubelet[2744]: E0129 11:57:57.383567 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.383942 kubelet[2744]: E0129 11:57:57.383834 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.383942 kubelet[2744]: W0129 11:57:57.383841 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.383942 kubelet[2744]: E0129 11:57:57.383849 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.383942 kubelet[2744]: I0129 11:57:57.383874 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhmrx\" (UniqueName: \"kubernetes.io/projected/4b8569b7-17f3-41f5-af84-56efb8c2c37a-kube-api-access-jhmrx\") pod \"csi-node-driver-dmc9g\" (UID: \"4b8569b7-17f3-41f5-af84-56efb8c2c37a\") " pod="calico-system/csi-node-driver-dmc9g" Jan 29 11:57:57.384258 kubelet[2744]: E0129 11:57:57.384088 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.384258 kubelet[2744]: W0129 11:57:57.384110 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.384258 kubelet[2744]: E0129 11:57:57.384122 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:57.384258 kubelet[2744]: I0129 11:57:57.384141 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b8569b7-17f3-41f5-af84-56efb8c2c37a-kubelet-dir\") pod \"csi-node-driver-dmc9g\" (UID: \"4b8569b7-17f3-41f5-af84-56efb8c2c37a\") " pod="calico-system/csi-node-driver-dmc9g" Jan 29 11:57:57.386470 kubelet[2744]: E0129 11:57:57.384709 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.386470 kubelet[2744]: W0129 11:57:57.384725 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.386470 kubelet[2744]: E0129 11:57:57.384741 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.386470 kubelet[2744]: I0129 11:57:57.384758 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4b8569b7-17f3-41f5-af84-56efb8c2c37a-varrun\") pod \"csi-node-driver-dmc9g\" (UID: \"4b8569b7-17f3-41f5-af84-56efb8c2c37a\") " pod="calico-system/csi-node-driver-dmc9g" Jan 29 11:57:57.386470 kubelet[2744]: E0129 11:57:57.385925 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.386470 kubelet[2744]: W0129 11:57:57.385964 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.386470 kubelet[2744]: E0129 11:57:57.386081 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.386933 kubelet[2744]: E0129 11:57:57.386918 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.386933 kubelet[2744]: W0129 11:57:57.386932 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.387080 kubelet[2744]: E0129 11:57:57.386956 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.387226 kubelet[2744]: E0129 11:57:57.387206 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.387226 kubelet[2744]: W0129 11:57:57.387217 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.387276 kubelet[2744]: E0129 11:57:57.387230 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:57.387276 kubelet[2744]: I0129 11:57:57.387249 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4b8569b7-17f3-41f5-af84-56efb8c2c37a-socket-dir\") pod \"csi-node-driver-dmc9g\" (UID: \"4b8569b7-17f3-41f5-af84-56efb8c2c37a\") " pod="calico-system/csi-node-driver-dmc9g" Jan 29 11:57:57.387435 kubelet[2744]: E0129 11:57:57.387423 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.387435 kubelet[2744]: W0129 11:57:57.387434 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.387488 kubelet[2744]: E0129 11:57:57.387453 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.387692 kubelet[2744]: E0129 11:57:57.387681 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.387741 kubelet[2744]: W0129 11:57:57.387693 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.387741 kubelet[2744]: E0129 11:57:57.387701 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.387959 kubelet[2744]: E0129 11:57:57.387947 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.387959 kubelet[2744]: W0129 11:57:57.387957 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.388016 kubelet[2744]: E0129 11:57:57.387969 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.388172 kubelet[2744]: E0129 11:57:57.388160 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.388172 kubelet[2744]: W0129 11:57:57.388171 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.388230 kubelet[2744]: E0129 11:57:57.388179 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:57.388405 kubelet[2744]: E0129 11:57:57.388394 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.388405 kubelet[2744]: W0129 11:57:57.388403 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.388467 kubelet[2744]: E0129 11:57:57.388411 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.388636 kubelet[2744]: E0129 11:57:57.388589 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.388682 kubelet[2744]: W0129 11:57:57.388667 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.388682 kubelet[2744]: E0129 11:57:57.388677 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.388906 kubelet[2744]: E0129 11:57:57.388893 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.388906 kubelet[2744]: W0129 11:57:57.388905 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.388963 kubelet[2744]: E0129 11:57:57.388915 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.388963 kubelet[2744]: I0129 11:57:57.388932 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4b8569b7-17f3-41f5-af84-56efb8c2c37a-registration-dir\") pod \"csi-node-driver-dmc9g\" (UID: \"4b8569b7-17f3-41f5-af84-56efb8c2c37a\") " pod="calico-system/csi-node-driver-dmc9g" Jan 29 11:57:57.389139 kubelet[2744]: E0129 11:57:57.389127 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.389139 kubelet[2744]: W0129 11:57:57.389137 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.389206 kubelet[2744]: E0129 11:57:57.389145 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:57.389318 kubelet[2744]: E0129 11:57:57.389308 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.389318 kubelet[2744]: W0129 11:57:57.389316 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.389367 kubelet[2744]: E0129 11:57:57.389323 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.405938 containerd[1579]: time="2025-01-29T11:57:57.405849134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:57:57.406073 containerd[1579]: time="2025-01-29T11:57:57.405905279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:57:57.406073 containerd[1579]: time="2025-01-29T11:57:57.405918514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:57.406073 containerd[1579]: time="2025-01-29T11:57:57.406013924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:57:57.437618 kubelet[2744]: E0129 11:57:57.437565 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:57.438716 containerd[1579]: time="2025-01-29T11:57:57.438017414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-55n9x,Uid:44c639fd-53e4-475a-a83b-47a1d4d75ff1,Namespace:calico-system,Attempt:0,}" Jan 29 11:57:57.468641 containerd[1579]: time="2025-01-29T11:57:57.468587397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55488b7964-gsdtk,Uid:e8299335-919e-4583-b73d-07cf47fb238c,Namespace:calico-system,Attempt:0,} returns sandbox id \"57a11aee499c190f8c828b482ab381c6249bd49e470ab317ff8abe68b7e64423\"" Jan 29 11:57:57.469565 kubelet[2744]: E0129 11:57:57.469537 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:57:57.470867 containerd[1579]: time="2025-01-29T11:57:57.470825278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 11:57:57.490483 kubelet[2744]: E0129 11:57:57.490439 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.490929 kubelet[2744]: W0129 11:57:57.490574 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.490929 kubelet[2744]: E0129 11:57:57.490719 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:57.491294 kubelet[2744]: E0129 11:57:57.491225 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.491294 kubelet[2744]: W0129 11:57:57.491239 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.491294 kubelet[2744]: E0129 11:57:57.491261 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.491923 kubelet[2744]: E0129 11:57:57.491822 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.491923 kubelet[2744]: W0129 11:57:57.491879 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.491923 kubelet[2744]: E0129 11:57:57.491901 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.493215 kubelet[2744]: E0129 11:57:57.493180 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.493291 kubelet[2744]: W0129 11:57:57.493214 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.493291 kubelet[2744]: E0129 11:57:57.493248 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.493684 kubelet[2744]: E0129 11:57:57.493661 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.493684 kubelet[2744]: W0129 11:57:57.493676 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.493919 kubelet[2744]: E0129 11:57:57.493798 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.493964 kubelet[2744]: E0129 11:57:57.493922 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.493964 kubelet[2744]: W0129 11:57:57.493933 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.494064 kubelet[2744]: E0129 11:57:57.494035 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:57.494330 kubelet[2744]: E0129 11:57:57.494178 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.494330 kubelet[2744]: W0129 11:57:57.494195 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.494330 kubelet[2744]: E0129 11:57:57.494296 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.494429 kubelet[2744]: E0129 11:57:57.494416 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.494429 kubelet[2744]: W0129 11:57:57.494426 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.494571 kubelet[2744]: E0129 11:57:57.494545 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.494901 kubelet[2744]: E0129 11:57:57.494828 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.494901 kubelet[2744]: W0129 11:57:57.494842 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.494901 kubelet[2744]: E0129 11:57:57.494857 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.495221 kubelet[2744]: E0129 11:57:57.495181 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.495221 kubelet[2744]: W0129 11:57:57.495201 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.495678 kubelet[2744]: E0129 11:57:57.495413 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.495678 kubelet[2744]: W0129 11:57:57.495425 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.495765 kubelet[2744]: E0129 11:57:57.495687 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.495765 kubelet[2744]: E0129 11:57:57.495714 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:57.495765 kubelet[2744]: E0129 11:57:57.495764 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.495858 kubelet[2744]: W0129 11:57:57.495775 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.495961 kubelet[2744]: E0129 11:57:57.495938 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.496320 kubelet[2744]: E0129 11:57:57.495991 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.496320 kubelet[2744]: W0129 11:57:57.496197 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.496320 kubelet[2744]: E0129 11:57:57.496242 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.496460 kubelet[2744]: E0129 11:57:57.496437 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.496460 kubelet[2744]: W0129 11:57:57.496448 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.496747 kubelet[2744]: E0129 11:57:57.496560 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.496747 kubelet[2744]: E0129 11:57:57.496695 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.496747 kubelet[2744]: W0129 11:57:57.496704 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.496747 kubelet[2744]: E0129 11:57:57.496720 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.497059 kubelet[2744]: E0129 11:57:57.496982 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.497059 kubelet[2744]: W0129 11:57:57.496992 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.497059 kubelet[2744]: E0129 11:57:57.497003 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:57.497307 kubelet[2744]: E0129 11:57:57.497268 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.497307 kubelet[2744]: W0129 11:57:57.497280 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.497307 kubelet[2744]: E0129 11:57:57.497306 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.497942 kubelet[2744]: E0129 11:57:57.497826 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.497942 kubelet[2744]: W0129 11:57:57.497839 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.497942 kubelet[2744]: E0129 11:57:57.497864 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.498634 kubelet[2744]: E0129 11:57:57.498274 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.498634 kubelet[2744]: W0129 11:57:57.498287 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.498634 kubelet[2744]: E0129 11:57:57.498300 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.498634 kubelet[2744]: E0129 11:57:57.498558 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.498634 kubelet[2744]: W0129 11:57:57.498566 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.499121 kubelet[2744]: E0129 11:57:57.499089 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.499518 kubelet[2744]: E0129 11:57:57.499499 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.499518 kubelet[2744]: W0129 11:57:57.499514 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.499660 kubelet[2744]: E0129 11:57:57.499641 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:57:57.500375 kubelet[2744]: E0129 11:57:57.500352 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.500375 kubelet[2744]: W0129 11:57:57.500369 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.500465 kubelet[2744]: E0129 11:57:57.500385 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.500826 kubelet[2744]: E0129 11:57:57.500806 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.500987 kubelet[2744]: W0129 11:57:57.500960 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.500987 kubelet[2744]: E0129 11:57:57.500980 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.501420 kubelet[2744]: E0129 11:57:57.501389 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.501420 kubelet[2744]: W0129 11:57:57.501404 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.501420 kubelet[2744]: E0129 11:57:57.501415 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.501883 kubelet[2744]: E0129 11:57:57.501788 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.501883 kubelet[2744]: W0129 11:57:57.501801 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.501883 kubelet[2744]: E0129 11:57:57.501813 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:57:57.506785 kubelet[2744]: E0129 11:57:57.506744 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:57:57.506785 kubelet[2744]: W0129 11:57:57.506781 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:57:57.506943 kubelet[2744]: E0129 11:57:57.506806 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 29 11:57:57.508154 containerd[1579]: time="2025-01-29T11:57:57.507254527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:57:57.508154 containerd[1579]: time="2025-01-29T11:57:57.508142218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:57:57.508286 containerd[1579]: time="2025-01-29T11:57:57.508161605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:57:57.508398 containerd[1579]: time="2025-01-29T11:57:57.508349217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:57:57.556623 containerd[1579]: time="2025-01-29T11:57:57.556521407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-55n9x,Uid:44c639fd-53e4-475a-a83b-47a1d4d75ff1,Namespace:calico-system,Attempt:0,} returns sandbox id \"e9fdf0efac635d28c7fa3ec362e1b46893f6c9a192fb4d4b138d560eb136e475\""
Jan 29 11:57:57.558279 kubelet[2744]: E0129 11:57:57.558026 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:57:58.848572 kubelet[2744]: E0129 11:57:58.848512 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dmc9g" podUID="4b8569b7-17f3-41f5-af84-56efb8c2c37a"
Jan 29 11:57:58.880392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2531888368.mount: Deactivated successfully.
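
The long runs of kubelet errors above all share one root cause: the FlexVolume prober (driver-call.go) executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and parses the driver's stdout as a JSON status object. The binary has not been installed yet, so the call yields empty output, and unmarshalling "" fails with "unexpected end of JSON input". A minimal Python sketch of that call-and-parse pattern, illustrative only and not kubelet source (Python reports the same failure as "Expecting value" where Go says "unexpected end of JSON input"):

    import json
    import subprocess

    DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

    def driver_call(*args):
        # The kubelet execs the driver and parses its stdout as a JSON status object.
        try:
            out = subprocess.run([DRIVER, *args], capture_output=True, text=True).stdout
        except FileNotFoundError:
            out = ""  # matches the log: executable file not found in $PATH, output: ""
        return json.loads(out)  # empty output fails here, as at driver-call.go:262

    try:
        driver_call("init")
    except json.JSONDecodeError as exc:
        print("unmarshal failed:", exc)
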
Jan 29 11:57:59.267305 containerd[1579]: time="2025-01-29T11:57:59.267244054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:57:59.268467 containerd[1579]: time="2025-01-29T11:57:59.268407492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Jan 29 11:57:59.271099 containerd[1579]: time="2025-01-29T11:57:59.270411383Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:57:59.273051 containerd[1579]: time="2025-01-29T11:57:59.273013598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:57:59.273744 containerd[1579]: time="2025-01-29T11:57:59.273705659Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 1.802832261s"
Jan 29 11:57:59.273807 containerd[1579]: time="2025-01-29T11:57:59.273748339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 29 11:57:59.281928 containerd[1579]: time="2025-01-29T11:57:59.281879475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 29 11:57:59.311230 containerd[1579]: time="2025-01-29T11:57:59.311144397Z" level=info msg="CreateContainer within sandbox \"57a11aee499c190f8c828b482ab381c6249bd49e470ab317ff8abe68b7e64423\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 29 11:57:59.524770 containerd[1579]: time="2025-01-29T11:57:59.524647847Z" level=info msg="CreateContainer within sandbox \"57a11aee499c190f8c828b482ab381c6249bd49e470ab317ff8abe68b7e64423\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"24cb40cf02b24cc4918399d5e425cfd73624c66dd7c6a9b31a0dc3d4f8d383c0\""
Jan 29 11:57:59.525046 containerd[1579]: time="2025-01-29T11:57:59.525023694Z" level=info msg="StartContainer for \"24cb40cf02b24cc4918399d5e425cfd73624c66dd7c6a9b31a0dc3d4f8d383c0\""
Jan 29 11:58:00.036102 containerd[1579]: time="2025-01-29T11:58:00.036031758Z" level=info msg="StartContainer for \"24cb40cf02b24cc4918399d5e425cfd73624c66dd7c6a9b31a0dc3d4f8d383c0\" returns successfully"
Jan 29 11:58:00.038460 kubelet[2744]: E0129 11:58:00.038416 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dmc9g" podUID="4b8569b7-17f3-41f5-af84-56efb8c2c37a"
Jan 29 11:58:01.040376 kubelet[2744]: E0129 11:58:01.040335 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:58:01.049576 containerd[1579]: time="2025-01-29T11:58:01.049528803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:58:01.050617 containerd[1579]: time="2025-01-29T11:58:01.050544983Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121"
Jan 29 11:58:01.051978 containerd[1579]: time="2025-01-29T11:58:01.051924457Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:58:01.054228 kubelet[2744]: I0129 11:58:01.054168 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55488b7964-gsdtk" podStartSLOduration=2.243019 podStartE2EDuration="4.054151786s" podCreationTimestamp="2025-01-29 11:57:57 +0000 UTC" firstStartedPulling="2025-01-29 11:57:57.470500757 +0000 UTC m=+21.800148040" lastFinishedPulling="2025-01-29 11:57:59.281633533 +0000 UTC m=+23.611280826" observedRunningTime="2025-01-29 11:58:01.05390931 +0000 UTC m=+25.383556593" watchObservedRunningTime="2025-01-29 11:58:01.054151786 +0000 UTC m=+25.383799069"
Jan 29 11:58:01.055443 containerd[1579]: time="2025-01-29T11:58:01.055413889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:58:01.056021 containerd[1579]: time="2025-01-29T11:58:01.055989271Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.774063138s"
Jan 29 11:58:01.056021 containerd[1579]: time="2025-01-29T11:58:01.056017965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 29 11:58:01.058119 containerd[1579]: time="2025-01-29T11:58:01.058084080Z" level=info msg="CreateContainer within sandbox \"e9fdf0efac635d28c7fa3ec362e1b46893f6c9a192fb4d4b138d560eb136e475\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 29 11:58:01.075819 containerd[1579]: time="2025-01-29T11:58:01.075771449Z" level=info msg="CreateContainer within sandbox \"e9fdf0efac635d28c7fa3ec362e1b46893f6c9a192fb4d4b138d560eb136e475\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"dfc61ebe0e9af770cb9b08174f49f04f0029488eba98c3c0d4eb393ae0c9b317\""
Jan 29 11:58:01.076477 containerd[1579]: time="2025-01-29T11:58:01.076312016Z" level=info msg="StartContainer for \"dfc61ebe0e9af770cb9b08174f49f04f0029488eba98c3c0d4eb393ae0c9b317\""
Jan 29 11:58:01.107833 kubelet[2744]: E0129 11:58:01.107801 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:58:01.107833 kubelet[2744]: W0129 11:58:01.107824 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:58:01.107833 kubelet[2744]: E0129 11:58:01.107845 2744 plugins.go:730] "Error 
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.108075 kubelet[2744]: E0129 11:58:01.108062 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.108075 kubelet[2744]: W0129 11:58:01.108072 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.108134 kubelet[2744]: E0129 11:58:01.108080 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.108279 kubelet[2744]: E0129 11:58:01.108259 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.108279 kubelet[2744]: W0129 11:58:01.108270 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.108279 kubelet[2744]: E0129 11:58:01.108277 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.108470 kubelet[2744]: E0129 11:58:01.108451 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.108470 kubelet[2744]: W0129 11:58:01.108463 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.108470 kubelet[2744]: E0129 11:58:01.108471 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.108759 kubelet[2744]: E0129 11:58:01.108724 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.108759 kubelet[2744]: W0129 11:58:01.108748 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.108759 kubelet[2744]: E0129 11:58:01.108774 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.108990 kubelet[2744]: E0129 11:58:01.108976 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.108990 kubelet[2744]: W0129 11:58:01.108987 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.109081 kubelet[2744]: E0129 11:58:01.108996 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:58:01.109330 kubelet[2744]: E0129 11:58:01.109316 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.109330 kubelet[2744]: W0129 11:58:01.109328 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.109406 kubelet[2744]: E0129 11:58:01.109338 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.109712 kubelet[2744]: E0129 11:58:01.109694 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.109712 kubelet[2744]: W0129 11:58:01.109705 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.109712 kubelet[2744]: E0129 11:58:01.109714 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.110563 kubelet[2744]: E0129 11:58:01.110082 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.110563 kubelet[2744]: W0129 11:58:01.110094 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.110563 kubelet[2744]: E0129 11:58:01.110105 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.110563 kubelet[2744]: E0129 11:58:01.110337 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.110563 kubelet[2744]: W0129 11:58:01.110345 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.110563 kubelet[2744]: E0129 11:58:01.110353 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.110563 kubelet[2744]: E0129 11:58:01.110556 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.110563 kubelet[2744]: W0129 11:58:01.110564 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.110819 kubelet[2744]: E0129 11:58:01.110573 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:58:01.110940 kubelet[2744]: E0129 11:58:01.110913 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.110940 kubelet[2744]: W0129 11:58:01.110924 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.110940 kubelet[2744]: E0129 11:58:01.110933 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.111855 kubelet[2744]: E0129 11:58:01.111843 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.112150 kubelet[2744]: W0129 11:58:01.112079 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.112150 kubelet[2744]: E0129 11:58:01.112096 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.112524 kubelet[2744]: E0129 11:58:01.112415 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.112524 kubelet[2744]: W0129 11:58:01.112425 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.112524 kubelet[2744]: E0129 11:58:01.112434 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.112778 kubelet[2744]: E0129 11:58:01.112718 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.112828 kubelet[2744]: W0129 11:58:01.112782 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.112828 kubelet[2744]: E0129 11:58:01.112806 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.120252 kubelet[2744]: E0129 11:58:01.120137 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.120252 kubelet[2744]: W0129 11:58:01.120148 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.120252 kubelet[2744]: E0129 11:58:01.120157 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:58:01.120407 kubelet[2744]: E0129 11:58:01.120385 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.120407 kubelet[2744]: W0129 11:58:01.120403 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.120476 kubelet[2744]: E0129 11:58:01.120424 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.120747 kubelet[2744]: E0129 11:58:01.120731 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.120747 kubelet[2744]: W0129 11:58:01.120744 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.120817 kubelet[2744]: E0129 11:58:01.120762 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.121051 kubelet[2744]: E0129 11:58:01.121019 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.121051 kubelet[2744]: W0129 11:58:01.121040 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.121134 kubelet[2744]: E0129 11:58:01.121056 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.121283 kubelet[2744]: E0129 11:58:01.121265 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.121283 kubelet[2744]: W0129 11:58:01.121275 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.121340 kubelet[2744]: E0129 11:58:01.121288 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.121525 kubelet[2744]: E0129 11:58:01.121503 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.121525 kubelet[2744]: W0129 11:58:01.121513 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.121588 kubelet[2744]: E0129 11:58:01.121545 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:58:01.121750 kubelet[2744]: E0129 11:58:01.121736 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.121750 kubelet[2744]: W0129 11:58:01.121746 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.121810 kubelet[2744]: E0129 11:58:01.121775 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.121969 kubelet[2744]: E0129 11:58:01.121955 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.121969 kubelet[2744]: W0129 11:58:01.121966 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.122045 kubelet[2744]: E0129 11:58:01.121995 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.122211 kubelet[2744]: E0129 11:58:01.122197 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.122211 kubelet[2744]: W0129 11:58:01.122208 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.122270 kubelet[2744]: E0129 11:58:01.122221 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.122557 kubelet[2744]: E0129 11:58:01.122540 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.122557 kubelet[2744]: W0129 11:58:01.122555 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.122642 kubelet[2744]: E0129 11:58:01.122572 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.122796 kubelet[2744]: E0129 11:58:01.122780 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.122796 kubelet[2744]: W0129 11:58:01.122795 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.122854 kubelet[2744]: E0129 11:58:01.122812 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:58:01.123064 kubelet[2744]: E0129 11:58:01.123049 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.123064 kubelet[2744]: W0129 11:58:01.123061 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.123134 kubelet[2744]: E0129 11:58:01.123075 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.123350 kubelet[2744]: E0129 11:58:01.123334 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.123350 kubelet[2744]: W0129 11:58:01.123346 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.123416 kubelet[2744]: E0129 11:58:01.123361 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.123543 kubelet[2744]: E0129 11:58:01.123529 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.123543 kubelet[2744]: W0129 11:58:01.123540 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.123621 kubelet[2744]: E0129 11:58:01.123552 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.123794 kubelet[2744]: E0129 11:58:01.123778 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.123794 kubelet[2744]: W0129 11:58:01.123792 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.123851 kubelet[2744]: E0129 11:58:01.123806 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.124007 kubelet[2744]: E0129 11:58:01.123994 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.124007 kubelet[2744]: W0129 11:58:01.124005 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.124083 kubelet[2744]: E0129 11:58:01.124018 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:58:01.124240 kubelet[2744]: E0129 11:58:01.124224 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.124240 kubelet[2744]: W0129 11:58:01.124238 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.124297 kubelet[2744]: E0129 11:58:01.124247 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.124798 kubelet[2744]: E0129 11:58:01.124782 2744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:58:01.124798 kubelet[2744]: W0129 11:58:01.124796 2744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:58:01.124872 kubelet[2744]: E0129 11:58:01.124807 2744 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:58:01.142437 containerd[1579]: time="2025-01-29T11:58:01.142394246Z" level=info msg="StartContainer for \"dfc61ebe0e9af770cb9b08174f49f04f0029488eba98c3c0d4eb393ae0c9b317\" returns successfully" Jan 29 11:58:01.210299 containerd[1579]: time="2025-01-29T11:58:01.208656966Z" level=info msg="shim disconnected" id=dfc61ebe0e9af770cb9b08174f49f04f0029488eba98c3c0d4eb393ae0c9b317 namespace=k8s.io Jan 29 11:58:01.210299 containerd[1579]: time="2025-01-29T11:58:01.210294085Z" level=warning msg="cleaning up after shim disconnected" id=dfc61ebe0e9af770cb9b08174f49f04f0029488eba98c3c0d4eb393ae0c9b317 namespace=k8s.io Jan 29 11:58:01.210299 containerd[1579]: time="2025-01-29T11:58:01.210305867Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:58:01.300485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfc61ebe0e9af770cb9b08174f49f04f0029488eba98c3c0d4eb393ae0c9b317-rootfs.mount: Deactivated successfully. 
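
The flexvol-driver container started above (dfc61ebe...) is Calico's pod2daemon-flexvol init container; its job is to copy the uds binary into the nodeagent~uds FlexVolume plugin directory that the kubelet has been probing, and the shim-disconnected and rootfs-unmount lines are it exiting normally once the copy is done. For context, a FlexVolume driver answers init with a small JSON status on stdout. A hedged illustration of that reply's conventional shape (not Calico's actual binary):

    import json

    # Illustrative FlexVolume "init" reply: a driver prints a JSON status object
    # to stdout; "attach": False tells the kubelet the driver needs no separate
    # attach/detach phase.
    print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
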
Jan 29 11:58:01.773299 kubelet[2744]: E0129 11:58:01.773249 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dmc9g" podUID="4b8569b7-17f3-41f5-af84-56efb8c2c37a"
Jan 29 11:58:02.043982 kubelet[2744]: E0129 11:58:02.043856 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:58:02.044647 containerd[1579]: time="2025-01-29T11:58:02.044433575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 29 11:58:03.774234 kubelet[2744]: E0129 11:58:03.774168 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dmc9g" podUID="4b8569b7-17f3-41f5-af84-56efb8c2c37a"
Jan 29 11:58:05.337777 containerd[1579]: time="2025-01-29T11:58:05.337720766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:58:05.338528 containerd[1579]: time="2025-01-29T11:58:05.338476306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 29 11:58:05.340012 containerd[1579]: time="2025-01-29T11:58:05.339979781Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:58:05.342695 containerd[1579]: time="2025-01-29T11:58:05.342654137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:58:05.343438 containerd[1579]: time="2025-01-29T11:58:05.343407613Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.298930847s"
Jan 29 11:58:05.343438 containerd[1579]: time="2025-01-29T11:58:05.343436337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 29 11:58:05.345925 containerd[1579]: time="2025-01-29T11:58:05.345867686Z" level=info msg="CreateContainer within sandbox \"e9fdf0efac635d28c7fa3ec362e1b46893f6c9a192fb4d4b138d560eb136e475\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 29 11:58:05.363870 containerd[1579]: time="2025-01-29T11:58:05.363801917Z" level=info msg="CreateContainer within sandbox \"e9fdf0efac635d28c7fa3ec362e1b46893f6c9a192fb4d4b138d560eb136e475\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"664b19ac1ce8c04c891306d7b80c2ca764972b1bb459769993db82312ad1595b\""
Jan 29 11:58:05.364491 containerd[1579]: time="2025-01-29T11:58:05.364464312Z" level=info msg="StartContainer for \"664b19ac1ce8c04c891306d7b80c2ca764972b1bb459769993db82312ad1595b\""
Jan 29 11:58:05.428595 containerd[1579]: time="2025-01-29T11:58:05.428539650Z" level=info msg="StartContainer for \"664b19ac1ce8c04c891306d7b80c2ca764972b1bb459769993db82312ad1595b\" returns successfully"
Jan 29 11:58:05.773716 kubelet[2744]: E0129 11:58:05.773635 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dmc9g" podUID="4b8569b7-17f3-41f5-af84-56efb8c2c37a"
Jan 29 11:58:06.052926 kubelet[2744]: E0129 11:58:06.052794 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:58:07.150695 kubelet[2744]: E0129 11:58:07.150652 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:58:07.759088 systemd-resolved[1455]: Under memory pressure, flushing caches.
Jan 29 11:58:07.759137 systemd-resolved[1455]: Flushed all caches.
Jan 29 11:58:07.768635 systemd-journald[1154]: Under memory pressure, flushing caches.
Jan 29 11:58:07.773177 kubelet[2744]: E0129 11:58:07.773121 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dmc9g" podUID="4b8569b7-17f3-41f5-af84-56efb8c2c37a"
Jan 29 11:58:07.974143 containerd[1579]: time="2025-01-29T11:58:07.973446361Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 11:58:08.002318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-664b19ac1ce8c04c891306d7b80c2ca764972b1bb459769993db82312ad1595b-rootfs.mount: Deactivated successfully.
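
The containerd error above ("no network config found in /etc/cni/net.d") is the runtime side of the kubelet's NetworkReady=false condition: containerd watches /etc/cni/net.d for a network config list, and the install-cni container just started (664b19ac...) is what eventually writes Calico's conflist there; the WRITE event for calico-kubeconfig shows the install caught mid-way. As an assumption-laden sketch of the smallest shape of a conflist a runtime will load (file name and contents illustrative only, not the file Calico actually writes):

    import json
    import pathlib

    # Illustrative only: a syntactically minimal CNI network list. Calico's
    # install-cni writes its own, much larger conflist alongside the
    # calico-kubeconfig file seen in the log.
    conflist = {
        "cniVersion": "0.3.1",
        "name": "example-net",
        "plugins": [{"type": "calico"}],  # the named plugin binary must exist in /opt/cni/bin
    }
    pathlib.Path("/etc/cni/net.d/10-example.conflist").write_text(json.dumps(conflist, indent=2))
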
Jan 29 11:58:08.004222 containerd[1579]: time="2025-01-29T11:58:08.004162165Z" level=info msg="shim disconnected" id=664b19ac1ce8c04c891306d7b80c2ca764972b1bb459769993db82312ad1595b namespace=k8s.io
Jan 29 11:58:08.004222 containerd[1579]: time="2025-01-29T11:58:08.004219733Z" level=warning msg="cleaning up after shim disconnected" id=664b19ac1ce8c04c891306d7b80c2ca764972b1bb459769993db82312ad1595b namespace=k8s.io
Jan 29 11:58:08.004358 containerd[1579]: time="2025-01-29T11:58:08.004230142Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:58:08.006386 kubelet[2744]: I0129 11:58:08.006350 2744 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 29 11:58:08.029985 kubelet[2744]: I0129 11:58:08.029652 2744 topology_manager.go:215] "Topology Admit Handler" podUID="562f7dc1-fccc-4836-832d-33f596ec71b8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-29qqz"
Jan 29 11:58:08.032551 kubelet[2744]: I0129 11:58:08.031976 2744 topology_manager.go:215] "Topology Admit Handler" podUID="442c7a1a-1bf4-4799-9255-bae8a191ac48" podNamespace="calico-apiserver" podName="calico-apiserver-77c5f74b87-vk8s6"
Jan 29 11:58:08.037715 kubelet[2744]: I0129 11:58:08.037654 2744 topology_manager.go:215] "Topology Admit Handler" podUID="2baaae63-eb27-4f62-99d9-91a996a907b5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sp9qb"
Jan 29 11:58:08.037873 kubelet[2744]: I0129 11:58:08.037861 2744 topology_manager.go:215] "Topology Admit Handler" podUID="26202b2c-e1f1-4083-9026-183a5e92161f" podNamespace="calico-apiserver" podName="calico-apiserver-77c5f74b87-wnvvw"
Jan 29 11:58:08.037982 kubelet[2744]: I0129 11:58:08.037955 2744 topology_manager.go:215] "Topology Admit Handler" podUID="eaeb6ba2-4292-4dd6-9e36-a5452f96f08f" podNamespace="calico-system" podName="calico-kube-controllers-7c7dbf75df-clqxh"
Jan 29 11:58:08.128118 systemd[1]: Started sshd@7-10.0.0.92:22-10.0.0.1:49832.service - OpenSSH per-connection server daemon (10.0.0.1:49832).
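
"Fast updating node status as it just became ready" marks the moment the CNI setup completed and the kubelet flipped the node's Ready condition; the Topology Admit Handler lines that follow are the pending coredns, calico-apiserver, and calico-kube-controllers pods being admitted at once. A quick way to observe that condition from outside, sketched with the standard Kubernetes Python client (cluster credentials assumed):

    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    for node in client.CoreV1Api().list_node().items:
        ready = next(c.status for c in node.status.conditions if c.type == "Ready")
        print(node.metadata.name, "Ready =", ready)  # flips to "True" at this point in the log
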
Jan 29 11:58:08.142629 kubelet[2744]: I0129 11:58:08.142577 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pp6h\" (UniqueName: \"kubernetes.io/projected/562f7dc1-fccc-4836-832d-33f596ec71b8-kube-api-access-8pp6h\") pod \"coredns-7db6d8ff4d-29qqz\" (UID: \"562f7dc1-fccc-4836-832d-33f596ec71b8\") " pod="kube-system/coredns-7db6d8ff4d-29qqz"
Jan 29 11:58:08.142629 kubelet[2744]: I0129 11:58:08.142632 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2baaae63-eb27-4f62-99d9-91a996a907b5-config-volume\") pod \"coredns-7db6d8ff4d-sp9qb\" (UID: \"2baaae63-eb27-4f62-99d9-91a996a907b5\") " pod="kube-system/coredns-7db6d8ff4d-sp9qb"
Jan 29 11:58:08.142956 kubelet[2744]: I0129 11:58:08.142651 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/442c7a1a-1bf4-4799-9255-bae8a191ac48-calico-apiserver-certs\") pod \"calico-apiserver-77c5f74b87-vk8s6\" (UID: \"442c7a1a-1bf4-4799-9255-bae8a191ac48\") " pod="calico-apiserver/calico-apiserver-77c5f74b87-vk8s6"
Jan 29 11:58:08.142956 kubelet[2744]: I0129 11:58:08.142672 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eaeb6ba2-4292-4dd6-9e36-a5452f96f08f-tigera-ca-bundle\") pod \"calico-kube-controllers-7c7dbf75df-clqxh\" (UID: \"eaeb6ba2-4292-4dd6-9e36-a5452f96f08f\") " pod="calico-system/calico-kube-controllers-7c7dbf75df-clqxh"
Jan 29 11:58:08.142956 kubelet[2744]: I0129 11:58:08.142689 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg7bq\" (UniqueName: \"kubernetes.io/projected/442c7a1a-1bf4-4799-9255-bae8a191ac48-kube-api-access-xg7bq\") pod \"calico-apiserver-77c5f74b87-vk8s6\" (UID: \"442c7a1a-1bf4-4799-9255-bae8a191ac48\") " pod="calico-apiserver/calico-apiserver-77c5f74b87-vk8s6"
Jan 29 11:58:08.142956 kubelet[2744]: I0129 11:58:08.142706 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/562f7dc1-fccc-4836-832d-33f596ec71b8-config-volume\") pod \"coredns-7db6d8ff4d-29qqz\" (UID: \"562f7dc1-fccc-4836-832d-33f596ec71b8\") " pod="kube-system/coredns-7db6d8ff4d-29qqz"
Jan 29 11:58:08.142956 kubelet[2744]: I0129 11:58:08.142723 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8l7s\" (UniqueName: \"kubernetes.io/projected/2baaae63-eb27-4f62-99d9-91a996a907b5-kube-api-access-z8l7s\") pod \"coredns-7db6d8ff4d-sp9qb\" (UID: \"2baaae63-eb27-4f62-99d9-91a996a907b5\") " pod="kube-system/coredns-7db6d8ff4d-sp9qb"
Jan 29 11:58:08.143120 kubelet[2744]: I0129 11:58:08.142741 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqs8d\" (UniqueName: \"kubernetes.io/projected/26202b2c-e1f1-4083-9026-183a5e92161f-kube-api-access-fqs8d\") pod \"calico-apiserver-77c5f74b87-wnvvw\" (UID: \"26202b2c-e1f1-4083-9026-183a5e92161f\") " pod="calico-apiserver/calico-apiserver-77c5f74b87-wnvvw"
Jan 29 11:58:08.143120 kubelet[2744]: I0129 11:58:08.142762 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc5j4\" (UniqueName: \"kubernetes.io/projected/eaeb6ba2-4292-4dd6-9e36-a5452f96f08f-kube-api-access-qc5j4\") pod \"calico-kube-controllers-7c7dbf75df-clqxh\" (UID: \"eaeb6ba2-4292-4dd6-9e36-a5452f96f08f\") " pod="calico-system/calico-kube-controllers-7c7dbf75df-clqxh"
Jan 29 11:58:08.143120 kubelet[2744]: I0129 11:58:08.142811 2744 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/26202b2c-e1f1-4083-9026-183a5e92161f-calico-apiserver-certs\") pod \"calico-apiserver-77c5f74b87-wnvvw\" (UID: \"26202b2c-e1f1-4083-9026-183a5e92161f\") " pod="calico-apiserver/calico-apiserver-77c5f74b87-wnvvw"
Jan 29 11:58:08.153945 kubelet[2744]: E0129 11:58:08.153911 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:58:08.154987 containerd[1579]: time="2025-01-29T11:58:08.154946463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 29 11:58:08.163629 sshd[3535]: Accepted publickey for core from 10.0.0.1 port 49832 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0
Jan 29 11:58:08.165405 sshd[3535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:58:08.170861 systemd-logind[1565]: New session 8 of user core.
Jan 29 11:58:08.181891 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 11:58:08.306380 sshd[3535]: pam_unix(sshd:session): session closed for user core
Jan 29 11:58:08.310627 systemd[1]: sshd@7-10.0.0.92:22-10.0.0.1:49832.service: Deactivated successfully.
Jan 29 11:58:08.312933 systemd-logind[1565]: Session 8 logged out. Waiting for processes to exit.
Jan 29 11:58:08.313013 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 11:58:08.314078 systemd-logind[1565]: Removed session 8.
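
The recurring dns.go:153 warning throughout this log is the kubelet clamping the node's resolv.conf to the number of nameservers it can propagate into a pod: three on Linux (mirroring the classic glibc resolver limit), so entries beyond the first three are dropped and the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A simplified sketch of that clamping rule (parsing here is deliberately minimal; the limit of 3 matches upstream kubelet behavior):

    MAX_NAMESERVERS = 3  # the kubelet's per-pod resolv.conf limit on Linux

    def applied_nameservers(resolv_conf: str):
        # Keep only the first MAX_NAMESERVERS "nameserver" entries; the rest are
        # dropped with the "Nameserver limits exceeded" warning seen above.
        servers = []
        for line in resolv_conf.splitlines():
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
        return servers[:MAX_NAMESERVERS]

    print(applied_nameservers(
        "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9"
    ))  # -> ['1.1.1.1', '1.0.0.1', '8.8.8.8']
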
Jan 29 11:58:08.334873 kubelet[2744]: E0129 11:58:08.334842 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:58:08.335487 containerd[1579]: time="2025-01-29T11:58:08.335445479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-29qqz,Uid:562f7dc1-fccc-4836-832d-33f596ec71b8,Namespace:kube-system,Attempt:0,}" Jan 29 11:58:08.336948 containerd[1579]: time="2025-01-29T11:58:08.336905942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c5f74b87-vk8s6,Uid:442c7a1a-1bf4-4799-9255-bae8a191ac48,Namespace:calico-apiserver,Attempt:0,}" Jan 29 11:58:08.344538 containerd[1579]: time="2025-01-29T11:58:08.344208149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c5f74b87-wnvvw,Uid:26202b2c-e1f1-4083-9026-183a5e92161f,Namespace:calico-apiserver,Attempt:0,}" Jan 29 11:58:08.344538 containerd[1579]: time="2025-01-29T11:58:08.344309109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c7dbf75df-clqxh,Uid:eaeb6ba2-4292-4dd6-9e36-a5452f96f08f,Namespace:calico-system,Attempt:0,}" Jan 29 11:58:08.344715 kubelet[2744]: E0129 11:58:08.344517 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:58:08.344932 containerd[1579]: time="2025-01-29T11:58:08.344862108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sp9qb,Uid:2baaae63-eb27-4f62-99d9-91a996a907b5,Namespace:kube-system,Attempt:0,}" Jan 29 11:58:08.470477 containerd[1579]: time="2025-01-29T11:58:08.470406535Z" level=error msg="Failed to destroy network for sandbox \"e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:08.471397 containerd[1579]: time="2025-01-29T11:58:08.471300594Z" level=error msg="encountered an error cleaning up failed sandbox \"e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:08.471397 containerd[1579]: time="2025-01-29T11:58:08.471369073Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c5f74b87-vk8s6,Uid:442c7a1a-1bf4-4799-9255-bae8a191ac48,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:08.472228 kubelet[2744]: E0129 11:58:08.471790 2744 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 29 11:58:08.472228 kubelet[2744]: E0129 11:58:08.471874 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c5f74b87-vk8s6" Jan 29 11:58:08.472228 kubelet[2744]: E0129 11:58:08.471908 2744 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c5f74b87-vk8s6" Jan 29 11:58:08.472392 kubelet[2744]: E0129 11:58:08.471967 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77c5f74b87-vk8s6_calico-apiserver(442c7a1a-1bf4-4799-9255-bae8a191ac48)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77c5f74b87-vk8s6_calico-apiserver(442c7a1a-1bf4-4799-9255-bae8a191ac48)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c5f74b87-vk8s6" podUID="442c7a1a-1bf4-4799-9255-bae8a191ac48" Jan 29 11:58:08.477457 containerd[1579]: time="2025-01-29T11:58:08.477397908Z" level=error msg="Failed to destroy network for sandbox \"4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:08.478375 containerd[1579]: time="2025-01-29T11:58:08.478335479Z" level=error msg="encountered an error cleaning up failed sandbox \"4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:08.478421 containerd[1579]: time="2025-01-29T11:58:08.478402454Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-29qqz,Uid:562f7dc1-fccc-4836-832d-33f596ec71b8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:08.478647 containerd[1579]: time="2025-01-29T11:58:08.478591319Z" level=error msg="Failed to destroy network for sandbox \"dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:08.478797 kubelet[2744]: E0129 11:58:08.478723 2744 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:08.478797 kubelet[2744]: E0129 11:58:08.478767 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-29qqz" Jan 29 11:58:08.478797 kubelet[2744]: E0129 11:58:08.478785 2744 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-29qqz" Jan 29 11:58:08.478945 kubelet[2744]: E0129 11:58:08.478823 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-29qqz_kube-system(562f7dc1-fccc-4836-832d-33f596ec71b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-29qqz_kube-system(562f7dc1-fccc-4836-832d-33f596ec71b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-29qqz" podUID="562f7dc1-fccc-4836-832d-33f596ec71b8" Jan 29 11:58:08.479136 containerd[1579]: time="2025-01-29T11:58:08.479087502Z" level=error msg="Failed to destroy network for sandbox \"7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:08.479956 containerd[1579]: time="2025-01-29T11:58:08.479492382Z" level=error msg="encountered an error cleaning up failed sandbox \"dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:08.479956 containerd[1579]: time="2025-01-29T11:58:08.479537997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c7dbf75df-clqxh,Uid:eaeb6ba2-4292-4dd6-9e36-a5452f96f08f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:08.479956 containerd[1579]: time="2025-01-29T11:58:08.479587710Z" level=error msg="encountered an error cleaning up failed sandbox \"7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:08.479956 containerd[1579]: time="2025-01-29T11:58:08.479663554Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c5f74b87-wnvvw,Uid:26202b2c-e1f1-4083-9026-183a5e92161f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:08.480172 kubelet[2744]: E0129 11:58:08.479745 2744 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:08.480172 kubelet[2744]: E0129 11:58:08.479817 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c7dbf75df-clqxh" Jan 29 11:58:08.480172 kubelet[2744]: E0129 11:58:08.479850 2744 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c7dbf75df-clqxh" Jan 29 11:58:08.480285 kubelet[2744]: E0129 11:58:08.479905 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c7dbf75df-clqxh_calico-system(eaeb6ba2-4292-4dd6-9e36-a5452f96f08f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c7dbf75df-clqxh_calico-system(eaeb6ba2-4292-4dd6-9e36-a5452f96f08f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c7dbf75df-clqxh" podUID="eaeb6ba2-4292-4dd6-9e36-a5452f96f08f" Jan 29 11:58:08.480505 kubelet[2744]: E0129 11:58:08.480475 
2744 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:08.480578 kubelet[2744]: E0129 11:58:08.480513 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c5f74b87-wnvvw" Jan 29 11:58:08.480578 kubelet[2744]: E0129 11:58:08.480534 2744 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77c5f74b87-wnvvw" Jan 29 11:58:08.480684 kubelet[2744]: E0129 11:58:08.480572 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77c5f74b87-wnvvw_calico-apiserver(26202b2c-e1f1-4083-9026-183a5e92161f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77c5f74b87-wnvvw_calico-apiserver(26202b2c-e1f1-4083-9026-183a5e92161f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c5f74b87-wnvvw" podUID="26202b2c-e1f1-4083-9026-183a5e92161f" Jan 29 11:58:08.486805 containerd[1579]: time="2025-01-29T11:58:08.486747280Z" level=error msg="Failed to destroy network for sandbox \"279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:08.487229 containerd[1579]: time="2025-01-29T11:58:08.487206012Z" level=error msg="encountered an error cleaning up failed sandbox \"279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:08.487277 containerd[1579]: time="2025-01-29T11:58:08.487254843Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sp9qb,Uid:2baaae63-eb27-4f62-99d9-91a996a907b5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:08.487513 kubelet[2744]: E0129 11:58:08.487472 2744 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:08.487597 kubelet[2744]: E0129 11:58:08.487539 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-sp9qb" Jan 29 11:58:08.487597 kubelet[2744]: E0129 11:58:08.487565 2744 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-sp9qb" Jan 29 11:58:08.487702 kubelet[2744]: E0129 11:58:08.487642 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-sp9qb_kube-system(2baaae63-eb27-4f62-99d9-91a996a907b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-sp9qb_kube-system(2baaae63-eb27-4f62-99d9-91a996a907b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-sp9qb" podUID="2baaae63-eb27-4f62-99d9-91a996a907b5" Jan 29 11:58:09.004688 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf-shm.mount: Deactivated successfully. 
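All of the CreatePodSandbox failures above reduce to one missing file: the Calico CNI plugin begins every ADD by reading /var/lib/calico/nodename, which the calico/node container writes once it is running, and calico-node is not up yet at this point in the log. Only the sandbox's tmpfs shm mount can be cleaned ("Deactivated successfully"), since that step does not involve the CNI. A minimal Go sketch of the guard, as an illustration of the behavior logged here rather than Calico's actual source:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // nodenameFile is written by calico/node at startup; the CNI plugin
    // refuses to do any ADD/DEL work until it exists.
    const nodenameFile = "/var/lib/calico/nodename"

    func nodename() (string, error) {
        data, err := os.ReadFile(nodenameFile)
        if os.IsNotExist(err) {
            // The exact failure mode in the log: an ENOENT from stat.
            return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
        }
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        if name, err := nodename(); err != nil {
            fmt.Println("CNI add/delete would fail:", err) // what kubelet keeps seeing above
        } else {
            fmt.Println("node:", name)
        }
    }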
Jan 29 11:58:09.156616 kubelet[2744]: I0129 11:58:09.156552 2744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" Jan 29 11:58:09.157464 kubelet[2744]: I0129 11:58:09.157442 2744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" Jan 29 11:58:09.157524 containerd[1579]: time="2025-01-29T11:58:09.157465939Z" level=info msg="StopPodSandbox for \"279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd\"" Jan 29 11:58:09.157858 containerd[1579]: time="2025-01-29T11:58:09.157697464Z" level=info msg="Ensure that sandbox 279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd in task-service has been cleanup successfully" Jan 29 11:58:09.157897 containerd[1579]: time="2025-01-29T11:58:09.157849439Z" level=info msg="StopPodSandbox for \"7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428\"" Jan 29 11:58:09.158122 containerd[1579]: time="2025-01-29T11:58:09.158098737Z" level=info msg="Ensure that sandbox 7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428 in task-service has been cleanup successfully" Jan 29 11:58:09.159123 kubelet[2744]: I0129 11:58:09.159079 2744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" Jan 29 11:58:09.159581 containerd[1579]: time="2025-01-29T11:58:09.159520948Z" level=info msg="StopPodSandbox for \"dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816\"" Jan 29 11:58:09.159789 containerd[1579]: time="2025-01-29T11:58:09.159705575Z" level=info msg="Ensure that sandbox dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816 in task-service has been cleanup successfully" Jan 29 11:58:09.161249 kubelet[2744]: I0129 11:58:09.161179 2744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" Jan 29 11:58:09.163006 containerd[1579]: time="2025-01-29T11:58:09.162503941Z" level=info msg="StopPodSandbox for \"e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46\"" Jan 29 11:58:09.163006 containerd[1579]: time="2025-01-29T11:58:09.162732561Z" level=info msg="Ensure that sandbox e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46 in task-service has been cleanup successfully" Jan 29 11:58:09.163141 kubelet[2744]: I0129 11:58:09.162563 2744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" Jan 29 11:58:09.171718 containerd[1579]: time="2025-01-29T11:58:09.171669777Z" level=info msg="StopPodSandbox for \"4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf\"" Jan 29 11:58:09.171931 containerd[1579]: time="2025-01-29T11:58:09.171854453Z" level=info msg="Ensure that sandbox 4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf in task-service has been cleanup successfully" Jan 29 11:58:09.214434 containerd[1579]: time="2025-01-29T11:58:09.214219995Z" level=error msg="StopPodSandbox for \"e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46\" failed" error="failed to destroy network for sandbox \"e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:09.215078 kubelet[2744]: E0129 11:58:09.214826 2744 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" Jan 29 11:58:09.215078 kubelet[2744]: E0129 11:58:09.214915 2744 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46"} Jan 29 11:58:09.215078 kubelet[2744]: E0129 11:58:09.215003 2744 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"442c7a1a-1bf4-4799-9255-bae8a191ac48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:58:09.215078 kubelet[2744]: E0129 11:58:09.215036 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"442c7a1a-1bf4-4799-9255-bae8a191ac48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c5f74b87-vk8s6" podUID="442c7a1a-1bf4-4799-9255-bae8a191ac48" Jan 29 11:58:09.219394 containerd[1579]: time="2025-01-29T11:58:09.219183197Z" level=error msg="StopPodSandbox for \"7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428\" failed" error="failed to destroy network for sandbox \"7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:09.219806 kubelet[2744]: E0129 11:58:09.219682 2744 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" Jan 29 11:58:09.219992 kubelet[2744]: E0129 11:58:09.219770 2744 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428"} Jan 29 11:58:09.219992 kubelet[2744]: E0129 11:58:09.219935 2744 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"26202b2c-e1f1-4083-9026-183a5e92161f\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:58:09.220330 containerd[1579]: time="2025-01-29T11:58:09.219929489Z" level=error msg="StopPodSandbox for \"279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd\" failed" error="failed to destroy network for sandbox \"279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:09.220373 kubelet[2744]: E0129 11:58:09.220128 2744 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" Jan 29 11:58:09.220373 kubelet[2744]: E0129 11:58:09.220158 2744 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd"} Jan 29 11:58:09.220373 kubelet[2744]: E0129 11:58:09.220190 2744 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2baaae63-eb27-4f62-99d9-91a996a907b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:58:09.220373 kubelet[2744]: E0129 11:58:09.220222 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2baaae63-eb27-4f62-99d9-91a996a907b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-sp9qb" podUID="2baaae63-eb27-4f62-99d9-91a996a907b5" Jan 29 11:58:09.220652 kubelet[2744]: E0129 11:58:09.220579 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"26202b2c-e1f1-4083-9026-183a5e92161f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77c5f74b87-wnvvw" podUID="26202b2c-e1f1-4083-9026-183a5e92161f" Jan 29 11:58:09.222508 containerd[1579]: time="2025-01-29T11:58:09.222472676Z" level=error msg="StopPodSandbox for 
\"dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816\" failed" error="failed to destroy network for sandbox \"dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:09.222807 kubelet[2744]: E0129 11:58:09.222765 2744 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" Jan 29 11:58:09.222927 kubelet[2744]: E0129 11:58:09.222893 2744 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816"} Jan 29 11:58:09.222927 kubelet[2744]: E0129 11:58:09.222932 2744 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eaeb6ba2-4292-4dd6-9e36-a5452f96f08f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:58:09.223193 kubelet[2744]: E0129 11:58:09.222958 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eaeb6ba2-4292-4dd6-9e36-a5452f96f08f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c7dbf75df-clqxh" podUID="eaeb6ba2-4292-4dd6-9e36-a5452f96f08f" Jan 29 11:58:09.228107 containerd[1579]: time="2025-01-29T11:58:09.228045262Z" level=error msg="StopPodSandbox for \"4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf\" failed" error="failed to destroy network for sandbox \"4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:09.228361 kubelet[2744]: E0129 11:58:09.228305 2744 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" Jan 29 11:58:09.228361 kubelet[2744]: E0129 11:58:09.228354 2744 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf"} Jan 29 11:58:09.228491 kubelet[2744]: E0129 11:58:09.228387 2744 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"562f7dc1-fccc-4836-832d-33f596ec71b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:58:09.228491 kubelet[2744]: E0129 11:58:09.228416 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"562f7dc1-fccc-4836-832d-33f596ec71b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-29qqz" podUID="562f7dc1-fccc-4836-832d-33f596ec71b8" Jan 29 11:58:09.780169 containerd[1579]: time="2025-01-29T11:58:09.779807975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dmc9g,Uid:4b8569b7-17f3-41f5-af84-56efb8c2c37a,Namespace:calico-system,Attempt:0,}" Jan 29 11:58:09.809746 systemd-journald[1154]: Under memory pressure, flushing caches. Jan 29 11:58:09.806879 systemd-resolved[1455]: Under memory pressure, flushing caches. Jan 29 11:58:09.806914 systemd-resolved[1455]: Flushed all caches. 
Jan 29 11:58:09.996114 containerd[1579]: time="2025-01-29T11:58:09.996044195Z" level=error msg="Failed to destroy network for sandbox \"93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:09.996647 containerd[1579]: time="2025-01-29T11:58:09.996621629Z" level=error msg="encountered an error cleaning up failed sandbox \"93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:09.996712 containerd[1579]: time="2025-01-29T11:58:09.996685750Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dmc9g,Uid:4b8569b7-17f3-41f5-af84-56efb8c2c37a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:09.997064 kubelet[2744]: E0129 11:58:09.996993 2744 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:09.997178 kubelet[2744]: E0129 11:58:09.997090 2744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dmc9g" Jan 29 11:58:09.997178 kubelet[2744]: E0129 11:58:09.997114 2744 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dmc9g" Jan 29 11:58:09.997228 kubelet[2744]: E0129 11:58:09.997164 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dmc9g_calico-system(4b8569b7-17f3-41f5-af84-56efb8c2c37a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dmc9g_calico-system(4b8569b7-17f3-41f5-af84-56efb8c2c37a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-dmc9g" podUID="4b8569b7-17f3-41f5-af84-56efb8c2c37a" Jan 29 11:58:09.998784 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a-shm.mount: Deactivated successfully. Jan 29 11:58:10.165202 kubelet[2744]: I0129 11:58:10.165073 2744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" Jan 29 11:58:10.165681 containerd[1579]: time="2025-01-29T11:58:10.165648556Z" level=info msg="StopPodSandbox for \"93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a\"" Jan 29 11:58:10.165948 containerd[1579]: time="2025-01-29T11:58:10.165831549Z" level=info msg="Ensure that sandbox 93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a in task-service has been cleanup successfully" Jan 29 11:58:10.347848 containerd[1579]: time="2025-01-29T11:58:10.347792716Z" level=error msg="StopPodSandbox for \"93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a\" failed" error="failed to destroy network for sandbox \"93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:58:10.348323 kubelet[2744]: E0129 11:58:10.348243 2744 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" Jan 29 11:58:10.348398 kubelet[2744]: E0129 11:58:10.348316 2744 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a"} Jan 29 11:58:10.348398 kubelet[2744]: E0129 11:58:10.348352 2744 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4b8569b7-17f3-41f5-af84-56efb8c2c37a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 11:58:10.348398 kubelet[2744]: E0129 11:58:10.348377 2744 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4b8569b7-17f3-41f5-af84-56efb8c2c37a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dmc9g" podUID="4b8569b7-17f3-41f5-af84-56efb8c2c37a" Jan 29 11:58:10.998207 kubelet[2744]: I0129 11:58:10.998161 2744 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:58:10.998867 kubelet[2744]: E0129 
11:58:10.998837 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:58:11.168516 kubelet[2744]: E0129 11:58:11.168479 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:58:12.279659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1822373442.mount: Deactivated successfully. Jan 29 11:58:13.322117 systemd[1]: Started sshd@8-10.0.0.92:22-10.0.0.1:40450.service - OpenSSH per-connection server daemon (10.0.0.1:40450). Jan 29 11:58:13.334168 containerd[1579]: time="2025-01-29T11:58:13.334092935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:13.335117 containerd[1579]: time="2025-01-29T11:58:13.335069218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 11:58:13.342858 containerd[1579]: time="2025-01-29T11:58:13.336462403Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:13.343093 containerd[1579]: time="2025-01-29T11:58:13.339307555Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.184317119s" Jan 29 11:58:13.343093 containerd[1579]: time="2025-01-29T11:58:13.343029503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 11:58:13.343766 containerd[1579]: time="2025-01-29T11:58:13.343708638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:13.354865 containerd[1579]: time="2025-01-29T11:58:13.354814129Z" level=info msg="CreateContainer within sandbox \"e9fdf0efac635d28c7fa3ec362e1b46893f6c9a192fb4d4b138d560eb136e475\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:58:13.373217 sshd[3914]: Accepted publickey for core from 10.0.0.1 port 40450 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:58:13.374262 containerd[1579]: time="2025-01-29T11:58:13.374208992Z" level=info msg="CreateContainer within sandbox \"e9fdf0efac635d28c7fa3ec362e1b46893f6c9a192fb4d4b138d560eb136e475\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c0962d1af333a057c7f866a3d0ab3fe57eb77cdffd9cf55a243316786b28cecd\"" Jan 29 11:58:13.376040 containerd[1579]: time="2025-01-29T11:58:13.375204902Z" level=info msg="StartContainer for \"c0962d1af333a057c7f866a3d0ab3fe57eb77cdffd9cf55a243316786b28cecd\"" Jan 29 11:58:13.375567 sshd[3914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:58:13.382092 systemd-logind[1565]: New session 9 of user core. Jan 29 11:58:13.387976 systemd[1]: Started session-9.scope - Session 9 of User core. 
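The "Nameserver limits exceeded" errors are unrelated to Calico: resolv.conf consumers honor at most three nameserver entries, so when the host lists more, kubelet applies only the first three (here 1.1.1.1 1.0.0.1 8.8.8.8) and warns that the rest were omitted. A sketch of that trim; the three-entry limit is standard resolver behavior, while the dropped fourth server (8.8.4.4) and the parsing are illustrative assumptions:

    package main

    import (
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // resolv.conf consumers ignore entries past three

    // applyNameserverLimit keeps the first three nameservers and reports
    // whether any were dropped, mirroring the warning in the log.
    func applyNameserverLimit(resolvConf string) ([]string, bool) {
        var ns []string
        for _, line := range strings.Split(resolvConf, "\n") {
            fields := strings.Fields(line)
            if len(fields) >= 2 && fields[0] == "nameserver" {
                ns = append(ns, fields[1])
            }
        }
        if len(ns) > maxNameservers {
            return ns[:maxNameservers], true
        }
        return ns, false
    }

    func main() {
        conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
        applied, trimmed := applyNameserverLimit(conf)
        if trimmed {
            fmt.Printf("some nameservers have been omitted, the applied nameserver line is: %s\n",
                strings.Join(applied, " "))
        }
    }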
Jan 29 11:58:13.731233 containerd[1579]: time="2025-01-29T11:58:13.730897801Z" level=info msg="StartContainer for \"c0962d1af333a057c7f866a3d0ab3fe57eb77cdffd9cf55a243316786b28cecd\" returns successfully" Jan 29 11:58:13.756568 sshd[3914]: pam_unix(sshd:session): session closed for user core Jan 29 11:58:13.756895 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 11:58:13.756940 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 29 11:58:13.762828 systemd[1]: sshd@8-10.0.0.92:22-10.0.0.1:40450.service: Deactivated successfully. Jan 29 11:58:13.766564 systemd-logind[1565]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:58:13.767001 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:58:13.768688 systemd-logind[1565]: Removed session 9. Jan 29 11:58:14.178159 kubelet[2744]: E0129 11:58:14.178036 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:58:14.194478 kubelet[2744]: I0129 11:58:14.194404 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-55n9x" podStartSLOduration=1.408628409 podStartE2EDuration="17.194384803s" podCreationTimestamp="2025-01-29 11:57:57 +0000 UTC" firstStartedPulling="2025-01-29 11:57:57.558958303 +0000 UTC m=+21.888605586" lastFinishedPulling="2025-01-29 11:58:13.344714697 +0000 UTC m=+37.674361980" observedRunningTime="2025-01-29 11:58:14.194056716 +0000 UTC m=+38.523704030" watchObservedRunningTime="2025-01-29 11:58:14.194384803 +0000 UTC m=+38.524032086" Jan 29 11:58:15.241649 kernel: bpftool[4127]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 11:58:15.470913 systemd-networkd[1242]: vxlan.calico: Link UP Jan 29 11:58:15.470922 systemd-networkd[1242]: vxlan.calico: Gained carrier Jan 29 11:58:16.590980 systemd-networkd[1242]: vxlan.calico: Gained IPv6LL Jan 29 11:58:18.767879 systemd[1]: Started sshd@9-10.0.0.92:22-10.0.0.1:40452.service - OpenSSH per-connection server daemon (10.0.0.1:40452). Jan 29 11:58:18.831968 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 40452 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:58:18.834061 sshd[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:58:18.838742 systemd-logind[1565]: New session 10 of user core. Jan 29 11:58:18.853137 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:58:18.998843 sshd[4201]: pam_unix(sshd:session): session closed for user core Jan 29 11:58:19.003496 systemd[1]: sshd@9-10.0.0.92:22-10.0.0.1:40452.service: Deactivated successfully. Jan 29 11:58:19.006185 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:58:19.006227 systemd-logind[1565]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:58:19.007820 systemd-logind[1565]: Removed session 10.
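Recovery starts in the block above: calico-node's StartContainer succeeded at 11:58:13 (the 17.19 s podStartE2EDuration is simply observedRunningTime 11:58:14.194 minus podCreationTimestamp 11:57:57, most of it image pulling), and by 11:58:15 the vxlan.calico overlay device is up with carrier. The prerequisite every earlier failure complained about is satisfied by a small write calico-node performs during startup, sketched here under the assumption that the node name defaults to the hostname:

    package main

    import "os"

    // writeNodename records the node name so the CNI plugin's stat check
    // starts succeeding; requires root and the /var/lib/calico mount.
    func writeNodename() error {
        name, err := os.Hostname()
        if err != nil {
            return err
        }
        if err := os.MkdirAll("/var/lib/calico", 0o755); err != nil {
            return err
        }
        return os.WriteFile("/var/lib/calico/nodename", []byte(name), 0o644)
    }

    func main() {
        if err := writeNodename(); err != nil {
            panic(err)
        }
    }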
Jan 29 11:58:20.775133 containerd[1579]: time="2025-01-29T11:58:20.774727051Z" level=info msg="StopPodSandbox for \"dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816\"" Jan 29 11:58:20.777263 containerd[1579]: time="2025-01-29T11:58:20.774946433Z" level=info msg="StopPodSandbox for \"e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46\"" Jan 29 11:58:20.909779 containerd[1579]: 2025-01-29 11:58:20.828 [INFO][4257] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" Jan 29 11:58:20.909779 containerd[1579]: 2025-01-29 11:58:20.828 [INFO][4257] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" iface="eth0" netns="/var/run/netns/cni-7fec201a-7116-ea52-de40-9e4de29bed2a" Jan 29 11:58:20.909779 containerd[1579]: 2025-01-29 11:58:20.829 [INFO][4257] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" iface="eth0" netns="/var/run/netns/cni-7fec201a-7116-ea52-de40-9e4de29bed2a" Jan 29 11:58:20.909779 containerd[1579]: 2025-01-29 11:58:20.830 [INFO][4257] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" iface="eth0" netns="/var/run/netns/cni-7fec201a-7116-ea52-de40-9e4de29bed2a" Jan 29 11:58:20.909779 containerd[1579]: 2025-01-29 11:58:20.830 [INFO][4257] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" Jan 29 11:58:20.909779 containerd[1579]: 2025-01-29 11:58:20.830 [INFO][4257] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" Jan 29 11:58:20.909779 containerd[1579]: 2025-01-29 11:58:20.893 [INFO][4273] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" HandleID="k8s-pod-network.e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" Workload="localhost-k8s-calico--apiserver--77c5f74b87--vk8s6-eth0" Jan 29 11:58:20.909779 containerd[1579]: 2025-01-29 11:58:20.893 [INFO][4273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:20.909779 containerd[1579]: 2025-01-29 11:58:20.893 [INFO][4273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:58:20.909779 containerd[1579]: 2025-01-29 11:58:20.902 [WARNING][4273] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" HandleID="k8s-pod-network.e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" Workload="localhost-k8s-calico--apiserver--77c5f74b87--vk8s6-eth0" Jan 29 11:58:20.909779 containerd[1579]: 2025-01-29 11:58:20.902 [INFO][4273] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" HandleID="k8s-pod-network.e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" Workload="localhost-k8s-calico--apiserver--77c5f74b87--vk8s6-eth0" Jan 29 11:58:20.909779 containerd[1579]: 2025-01-29 11:58:20.904 [INFO][4273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:58:20.909779 containerd[1579]: 2025-01-29 11:58:20.907 [INFO][4257] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" Jan 29 11:58:20.922023 containerd[1579]: time="2025-01-29T11:58:20.921961459Z" level=info msg="TearDown network for sandbox \"e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46\" successfully" Jan 29 11:58:20.922023 containerd[1579]: time="2025-01-29T11:58:20.922010701Z" level=info msg="StopPodSandbox for \"e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46\" returns successfully" Jan 29 11:58:20.924962 containerd[1579]: time="2025-01-29T11:58:20.924916384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c5f74b87-vk8s6,Uid:442c7a1a-1bf4-4799-9255-bae8a191ac48,Namespace:calico-apiserver,Attempt:1,}" Jan 29 11:58:20.925929 systemd[1]: run-netns-cni\x2d7fec201a\x2d7116\x2dea52\x2dde40\x2d9e4de29bed2a.mount: Deactivated successfully. Jan 29 11:58:20.951467 containerd[1579]: 2025-01-29 11:58:20.833 [INFO][4258] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" Jan 29 11:58:20.951467 containerd[1579]: 2025-01-29 11:58:20.833 [INFO][4258] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" iface="eth0" netns="/var/run/netns/cni-0a88a85e-c32f-82ef-aaf8-08546ffa4e80" Jan 29 11:58:20.951467 containerd[1579]: 2025-01-29 11:58:20.833 [INFO][4258] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" iface="eth0" netns="/var/run/netns/cni-0a88a85e-c32f-82ef-aaf8-08546ffa4e80" Jan 29 11:58:20.951467 containerd[1579]: 2025-01-29 11:58:20.833 [INFO][4258] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" iface="eth0" netns="/var/run/netns/cni-0a88a85e-c32f-82ef-aaf8-08546ffa4e80" Jan 29 11:58:20.951467 containerd[1579]: 2025-01-29 11:58:20.833 [INFO][4258] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" Jan 29 11:58:20.951467 containerd[1579]: 2025-01-29 11:58:20.833 [INFO][4258] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" Jan 29 11:58:20.951467 containerd[1579]: 2025-01-29 11:58:20.893 [INFO][4274] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" HandleID="k8s-pod-network.dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" Workload="localhost-k8s-calico--kube--controllers--7c7dbf75df--clqxh-eth0" Jan 29 11:58:20.951467 containerd[1579]: 2025-01-29 11:58:20.893 [INFO][4274] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:20.951467 containerd[1579]: 2025-01-29 11:58:20.904 [INFO][4274] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:58:20.951467 containerd[1579]: 2025-01-29 11:58:20.909 [WARNING][4274] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" HandleID="k8s-pod-network.dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" Workload="localhost-k8s-calico--kube--controllers--7c7dbf75df--clqxh-eth0" Jan 29 11:58:20.951467 containerd[1579]: 2025-01-29 11:58:20.909 [INFO][4274] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" HandleID="k8s-pod-network.dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" Workload="localhost-k8s-calico--kube--controllers--7c7dbf75df--clqxh-eth0" Jan 29 11:58:20.951467 containerd[1579]: 2025-01-29 11:58:20.922 [INFO][4274] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:58:20.951467 containerd[1579]: 2025-01-29 11:58:20.948 [INFO][4258] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" Jan 29 11:58:20.951927 containerd[1579]: time="2025-01-29T11:58:20.951645764Z" level=info msg="TearDown network for sandbox \"dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816\" successfully" Jan 29 11:58:20.951927 containerd[1579]: time="2025-01-29T11:58:20.951679377Z" level=info msg="StopPodSandbox for \"dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816\" returns successfully" Jan 29 11:58:20.952529 containerd[1579]: time="2025-01-29T11:58:20.952481131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c7dbf75df-clqxh,Uid:eaeb6ba2-4292-4dd6-9e36-a5452f96f08f,Namespace:calico-system,Attempt:1,}" Jan 29 11:58:20.955330 systemd[1]: run-netns-cni\x2d0a88a85e\x2dc32f\x2d82ef\x2daaf8\x2d08546ffa4e80.mount: Deactivated successfully. 
Jan 29 11:58:21.165384 systemd-networkd[1242]: caliafb557aa3fc: Link UP Jan 29 11:58:21.165584 systemd-networkd[1242]: caliafb557aa3fc: Gained carrier Jan 29 11:58:21.206453 containerd[1579]: 2025-01-29 11:58:21.021 [INFO][4288] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77c5f74b87--vk8s6-eth0 calico-apiserver-77c5f74b87- calico-apiserver 442c7a1a-1bf4-4799-9255-bae8a191ac48 843 0 2025-01-29 11:57:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77c5f74b87 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-77c5f74b87-vk8s6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliafb557aa3fc [] []}} ContainerID="1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22" Namespace="calico-apiserver" Pod="calico-apiserver-77c5f74b87-vk8s6" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5f74b87--vk8s6-" Jan 29 11:58:21.206453 containerd[1579]: 2025-01-29 11:58:21.021 [INFO][4288] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22" Namespace="calico-apiserver" Pod="calico-apiserver-77c5f74b87-vk8s6" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5f74b87--vk8s6-eth0" Jan 29 11:58:21.206453 containerd[1579]: 2025-01-29 11:58:21.068 [INFO][4317] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22" HandleID="k8s-pod-network.1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22" Workload="localhost-k8s-calico--apiserver--77c5f74b87--vk8s6-eth0" Jan 29 11:58:21.206453 containerd[1579]: 2025-01-29 11:58:21.079 [INFO][4317] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22" HandleID="k8s-pod-network.1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22" Workload="localhost-k8s-calico--apiserver--77c5f74b87--vk8s6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000132940), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77c5f74b87-vk8s6", "timestamp":"2025-01-29 11:58:21.068982409 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:58:21.206453 containerd[1579]: 2025-01-29 11:58:21.079 [INFO][4317] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:21.206453 containerd[1579]: 2025-01-29 11:58:21.079 [INFO][4317] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:58:21.206453 containerd[1579]: 2025-01-29 11:58:21.079 [INFO][4317] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:58:21.206453 containerd[1579]: 2025-01-29 11:58:21.081 [INFO][4317] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22" host="localhost" Jan 29 11:58:21.206453 containerd[1579]: 2025-01-29 11:58:21.086 [INFO][4317] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:58:21.206453 containerd[1579]: 2025-01-29 11:58:21.090 [INFO][4317] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:58:21.206453 containerd[1579]: 2025-01-29 11:58:21.092 [INFO][4317] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:58:21.206453 containerd[1579]: 2025-01-29 11:58:21.094 [INFO][4317] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:58:21.206453 containerd[1579]: 2025-01-29 11:58:21.094 [INFO][4317] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22" host="localhost" Jan 29 11:58:21.206453 containerd[1579]: 2025-01-29 11:58:21.096 [INFO][4317] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22 Jan 29 11:58:21.206453 containerd[1579]: 2025-01-29 11:58:21.123 [INFO][4317] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22" host="localhost" Jan 29 11:58:21.206453 containerd[1579]: 2025-01-29 11:58:21.158 [INFO][4317] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22" host="localhost" Jan 29 11:58:21.206453 containerd[1579]: 2025-01-29 11:58:21.158 [INFO][4317] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22" host="localhost" Jan 29 11:58:21.206453 containerd[1579]: 2025-01-29 11:58:21.158 [INFO][4317] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
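The IPAM trace above shows the whole assignment path: take the host-wide lock, look up the block affine to this host, load 192.168.88.128/26, claim the first free address (192.168.88.129 here), and write the block back before releasing the lock. A sketch of the claim step; the bitmap layout and the already-reserved offset 0 are illustrative assumptions, not Calico's datastore format:

    package main

    import (
        "errors"
        "fmt"
        "net"
    )

    // block models a /26 affinity block: 64 addresses tracked by a bitmap.
    type block struct {
        base net.IP   // network address of the block, e.g. 192.168.88.128
        used [64]bool // one slot per address
    }

    // assign claims the first free address in the block, as in
    // "Attempting to assign 1 addresses from block" above.
    func (b *block) assign() (net.IP, error) {
        for i, inUse := range b.used {
            if inUse {
                continue
            }
            b.used[i] = true
            ip := make(net.IP, 4)
            copy(ip, b.base.To4())
            ip[3] += byte(i)
            return ip, nil
        }
        return nil, errors.New("block exhausted")
    }

    func main() {
        b := &block{base: net.ParseIP("192.168.88.128")}
        b.used[0] = true // assume offset 0 is taken, so the first claim lands on .129
        ip, _ := b.assign()
        fmt.Println(ip) // 192.168.88.129
    }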
Jan 29 11:58:21.206453 containerd[1579]: 2025-01-29 11:58:21.158 [INFO][4317] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22" HandleID="k8s-pod-network.1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22" Workload="localhost-k8s-calico--apiserver--77c5f74b87--vk8s6-eth0" Jan 29 11:58:21.207323 containerd[1579]: 2025-01-29 11:58:21.161 [INFO][4288] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22" Namespace="calico-apiserver" Pod="calico-apiserver-77c5f74b87-vk8s6" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5f74b87--vk8s6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c5f74b87--vk8s6-eth0", GenerateName:"calico-apiserver-77c5f74b87-", Namespace:"calico-apiserver", SelfLink:"", UID:"442c7a1a-1bf4-4799-9255-bae8a191ac48", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c5f74b87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77c5f74b87-vk8s6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliafb557aa3fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:21.207323 containerd[1579]: 2025-01-29 11:58:21.161 [INFO][4288] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22" Namespace="calico-apiserver" Pod="calico-apiserver-77c5f74b87-vk8s6" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5f74b87--vk8s6-eth0" Jan 29 11:58:21.207323 containerd[1579]: 2025-01-29 11:58:21.161 [INFO][4288] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliafb557aa3fc ContainerID="1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22" Namespace="calico-apiserver" Pod="calico-apiserver-77c5f74b87-vk8s6" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5f74b87--vk8s6-eth0" Jan 29 11:58:21.207323 containerd[1579]: 2025-01-29 11:58:21.164 [INFO][4288] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22" Namespace="calico-apiserver" Pod="calico-apiserver-77c5f74b87-vk8s6" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5f74b87--vk8s6-eth0" Jan 29 11:58:21.207323 containerd[1579]: 2025-01-29 11:58:21.165 [INFO][4288] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint
ContainerID="1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22" Namespace="calico-apiserver" Pod="calico-apiserver-77c5f74b87-vk8s6" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5f74b87--vk8s6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c5f74b87--vk8s6-eth0", GenerateName:"calico-apiserver-77c5f74b87-", Namespace:"calico-apiserver", SelfLink:"", UID:"442c7a1a-1bf4-4799-9255-bae8a191ac48", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c5f74b87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22", Pod:"calico-apiserver-77c5f74b87-vk8s6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliafb557aa3fc", MAC:"6e:e7:d3:a3:96:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:21.207323 containerd[1579]: 2025-01-29 11:58:21.203 [INFO][4288] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22" Namespace="calico-apiserver" Pod="calico-apiserver-77c5f74b87-vk8s6" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5f74b87--vk8s6-eth0" Jan 29 11:58:21.299337 containerd[1579]: time="2025-01-29T11:58:21.298932076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:58:21.299337 containerd[1579]: time="2025-01-29T11:58:21.298991027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:58:21.299337 containerd[1579]: time="2025-01-29T11:58:21.299009832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:58:21.299798 containerd[1579]: time="2025-01-29T11:58:21.299738720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:58:21.327108 systemd-resolved[1455]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:58:21.362578 containerd[1579]: time="2025-01-29T11:58:21.361553234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c5f74b87-vk8s6,Uid:442c7a1a-1bf4-4799-9255-bae8a191ac48,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22\"" Jan 29 11:58:21.366449 containerd[1579]: time="2025-01-29T11:58:21.366406591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:58:21.373397 systemd-networkd[1242]: calicce6e605487: Link UP Jan 29 11:58:21.374349 systemd-networkd[1242]: calicce6e605487: Gained carrier Jan 29 11:58:21.387028 containerd[1579]: 2025-01-29 11:58:21.029 [INFO][4302] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7c7dbf75df--clqxh-eth0 calico-kube-controllers-7c7dbf75df- calico-system eaeb6ba2-4292-4dd6-9e36-a5452f96f08f 844 0 2025-01-29 11:57:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c7dbf75df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7c7dbf75df-clqxh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicce6e605487 [] []}} ContainerID="8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b" Namespace="calico-system" Pod="calico-kube-controllers-7c7dbf75df-clqxh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c7dbf75df--clqxh-" Jan 29 11:58:21.387028 containerd[1579]: 2025-01-29 11:58:21.029 [INFO][4302] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b" Namespace="calico-system" Pod="calico-kube-controllers-7c7dbf75df-clqxh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c7dbf75df--clqxh-eth0" Jan 29 11:58:21.387028 containerd[1579]: 2025-01-29 11:58:21.069 [INFO][4316] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b" HandleID="k8s-pod-network.8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b" Workload="localhost-k8s-calico--kube--controllers--7c7dbf75df--clqxh-eth0" Jan 29 11:58:21.387028 containerd[1579]: 2025-01-29 11:58:21.079 [INFO][4316] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b" HandleID="k8s-pod-network.8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b" Workload="localhost-k8s-calico--kube--controllers--7c7dbf75df--clqxh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df730), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7c7dbf75df-clqxh", "timestamp":"2025-01-29 11:58:21.069440589 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:58:21.387028 containerd[1579]: 2025-01-29 
11:58:21.080 [INFO][4316] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:21.387028 containerd[1579]: 2025-01-29 11:58:21.158 [INFO][4316] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:58:21.387028 containerd[1579]: 2025-01-29 11:58:21.158 [INFO][4316] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:58:21.387028 containerd[1579]: 2025-01-29 11:58:21.160 [INFO][4316] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b" host="localhost" Jan 29 11:58:21.387028 containerd[1579]: 2025-01-29 11:58:21.164 [INFO][4316] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:58:21.387028 containerd[1579]: 2025-01-29 11:58:21.169 [INFO][4316] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:58:21.387028 containerd[1579]: 2025-01-29 11:58:21.202 [INFO][4316] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:58:21.387028 containerd[1579]: 2025-01-29 11:58:21.205 [INFO][4316] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:58:21.387028 containerd[1579]: 2025-01-29 11:58:21.205 [INFO][4316] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b" host="localhost" Jan 29 11:58:21.387028 containerd[1579]: 2025-01-29 11:58:21.206 [INFO][4316] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b Jan 29 11:58:21.387028 containerd[1579]: 2025-01-29 11:58:21.284 [INFO][4316] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b" host="localhost" Jan 29 11:58:21.387028 containerd[1579]: 2025-01-29 11:58:21.365 [INFO][4316] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b" host="localhost" Jan 29 11:58:21.387028 containerd[1579]: 2025-01-29 11:58:21.365 [INFO][4316] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b" host="localhost" Jan 29 11:58:21.387028 containerd[1579]: 2025-01-29 11:58:21.365 [INFO][4316] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
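Note how [4316] logs "About to acquire host-wide IPAM lock" at 11:58:21.080 but "Acquired" only at 11:58:21.158, the instant [4317] releases it: the two CNI ADDs serialize on the lock, so the second sandbox receives the next ordinal, 192.168.88.130. Ordinal-to-address arithmetic inside an IPv4 block is plain base-plus-offset; the helper below is a small illustration, not Calico code.

package main

import (
	"encoding/binary"
	"fmt"
	"net/netip"
)

// nthAddr returns base + n within a v4 prefix using integer
// arithmetic on the 4-byte form of the address.
func nthAddr(p netip.Prefix, n uint32) netip.Addr {
	b := p.Addr().As4()
	v := binary.BigEndian.Uint32(b[:]) + n
	binary.BigEndian.PutUint32(b[:], v)
	return netip.AddrFrom4(b)
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	fmt.Println(nthAddr(block, 1)) // 192.168.88.129, the apiserver sandbox
	fmt.Println(nthAddr(block, 2)) // 192.168.88.130, claimed just above
}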
Jan 29 11:58:21.387028 containerd[1579]: 2025-01-29 11:58:21.365 [INFO][4316] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b" HandleID="k8s-pod-network.8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b" Workload="localhost-k8s-calico--kube--controllers--7c7dbf75df--clqxh-eth0" Jan 29 11:58:21.387635 containerd[1579]: 2025-01-29 11:58:21.369 [INFO][4302] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b" Namespace="calico-system" Pod="calico-kube-controllers-7c7dbf75df-clqxh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c7dbf75df--clqxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c7dbf75df--clqxh-eth0", GenerateName:"calico-kube-controllers-7c7dbf75df-", Namespace:"calico-system", SelfLink:"", UID:"eaeb6ba2-4292-4dd6-9e36-a5452f96f08f", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c7dbf75df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7c7dbf75df-clqxh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicce6e605487", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:21.387635 containerd[1579]: 2025-01-29 11:58:21.369 [INFO][4302] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b" Namespace="calico-system" Pod="calico-kube-controllers-7c7dbf75df-clqxh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c7dbf75df--clqxh-eth0" Jan 29 11:58:21.387635 containerd[1579]: 2025-01-29 11:58:21.369 [INFO][4302] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicce6e605487 ContainerID="8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b" Namespace="calico-system" Pod="calico-kube-controllers-7c7dbf75df-clqxh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c7dbf75df--clqxh-eth0" Jan 29 11:58:21.387635 containerd[1579]: 2025-01-29 11:58:21.374 [INFO][4302] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b" Namespace="calico-system" Pod="calico-kube-controllers-7c7dbf75df-clqxh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c7dbf75df--clqxh-eth0" Jan 29 11:58:21.387635 containerd[1579]: 2025-01-29 11:58:21.374 [INFO][4302] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b" Namespace="calico-system" Pod="calico-kube-controllers-7c7dbf75df-clqxh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c7dbf75df--clqxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c7dbf75df--clqxh-eth0", GenerateName:"calico-kube-controllers-7c7dbf75df-", Namespace:"calico-system", SelfLink:"", UID:"eaeb6ba2-4292-4dd6-9e36-a5452f96f08f", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c7dbf75df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b", Pod:"calico-kube-controllers-7c7dbf75df-clqxh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicce6e605487", MAC:"26:3f:1d:20:47:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:21.387635 containerd[1579]: 2025-01-29 11:58:21.383 [INFO][4302] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b" Namespace="calico-system" Pod="calico-kube-controllers-7c7dbf75df-clqxh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c7dbf75df--clqxh-eth0" Jan 29 11:58:21.409076 containerd[1579]: time="2025-01-29T11:58:21.408919956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:58:21.409657 containerd[1579]: time="2025-01-29T11:58:21.409564476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:58:21.409657 containerd[1579]: time="2025-01-29T11:58:21.409584113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:58:21.409762 containerd[1579]: time="2025-01-29T11:58:21.409743181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:58:21.433646 systemd-resolved[1455]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:58:21.461639 containerd[1579]: time="2025-01-29T11:58:21.461562911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c7dbf75df-clqxh,Uid:eaeb6ba2-4292-4dd6-9e36-a5452f96f08f,Namespace:calico-system,Attempt:1,} returns sandbox id \"8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b\"" Jan 29 11:58:21.774581 containerd[1579]: time="2025-01-29T11:58:21.774455989Z" level=info msg="StopPodSandbox for \"279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd\"" Jan 29 11:58:21.858394 containerd[1579]: 2025-01-29 11:58:21.821 [INFO][4461] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" Jan 29 11:58:21.858394 containerd[1579]: 2025-01-29 11:58:21.821 [INFO][4461] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" iface="eth0" netns="/var/run/netns/cni-23388a9f-7b2a-d94b-c427-d48bb8d15c41" Jan 29 11:58:21.858394 containerd[1579]: 2025-01-29 11:58:21.821 [INFO][4461] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" iface="eth0" netns="/var/run/netns/cni-23388a9f-7b2a-d94b-c427-d48bb8d15c41" Jan 29 11:58:21.858394 containerd[1579]: 2025-01-29 11:58:21.821 [INFO][4461] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" iface="eth0" netns="/var/run/netns/cni-23388a9f-7b2a-d94b-c427-d48bb8d15c41" Jan 29 11:58:21.858394 containerd[1579]: 2025-01-29 11:58:21.821 [INFO][4461] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" Jan 29 11:58:21.858394 containerd[1579]: 2025-01-29 11:58:21.822 [INFO][4461] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" Jan 29 11:58:21.858394 containerd[1579]: 2025-01-29 11:58:21.844 [INFO][4469] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" HandleID="k8s-pod-network.279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" Workload="localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0" Jan 29 11:58:21.858394 containerd[1579]: 2025-01-29 11:58:21.845 [INFO][4469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:21.858394 containerd[1579]: 2025-01-29 11:58:21.845 [INFO][4469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:58:21.858394 containerd[1579]: 2025-01-29 11:58:21.850 [WARNING][4469] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" HandleID="k8s-pod-network.279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" Workload="localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0" Jan 29 11:58:21.858394 containerd[1579]: 2025-01-29 11:58:21.850 [INFO][4469] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" HandleID="k8s-pod-network.279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" Workload="localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0" Jan 29 11:58:21.858394 containerd[1579]: 2025-01-29 11:58:21.851 [INFO][4469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:58:21.858394 containerd[1579]: 2025-01-29 11:58:21.855 [INFO][4461] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" Jan 29 11:58:21.859330 containerd[1579]: time="2025-01-29T11:58:21.858537296Z" level=info msg="TearDown network for sandbox \"279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd\" successfully" Jan 29 11:58:21.859330 containerd[1579]: time="2025-01-29T11:58:21.858565409Z" level=info msg="StopPodSandbox for \"279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd\" returns successfully" Jan 29 11:58:21.859390 kubelet[2744]: E0129 11:58:21.858982 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:58:21.859847 containerd[1579]: time="2025-01-29T11:58:21.859808722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sp9qb,Uid:2baaae63-eb27-4f62-99d9-91a996a907b5,Namespace:kube-system,Attempt:1,}" Jan 29 11:58:21.921651 systemd[1]: run-netns-cni\x2d23388a9f\x2d7b2a\x2dd94b\x2dc427\x2dd48bb8d15c41.mount: Deactivated successfully. 
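The StopPodSandbox cycle above runs the reverse path: tear down the netns, then release the IP first by handle ID and, when that allocation is already gone (the WARNING), fall back to the workload ID and treat not-found as success. That ordering is what keeps a repeated CNI DEL idempotent. A toy sketch of the release order, with a plain map standing in for the IPAM datastore:

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("allocation not found")

// allocations maps a handle or workload ID to an assigned IP;
// a toy stand-in for the IPAM datastore in the log above.
var allocations = map[string]string{}

func releaseBy(key string) error {
	if _, ok := allocations[key]; !ok {
		return errNotFound
	}
	delete(allocations, key)
	return nil
}

// Release mirrors the teardown order in the trace: try the handle ID
// first, log and ignore "doesn't exist", then fall back to the
// workload ID, so a repeated DEL stays idempotent.
func Release(handleID, workloadID string) {
	if err := releaseBy(handleID); errors.Is(err, errNotFound) {
		fmt.Println("Asked to release address but it doesn't exist. Ignoring") // as in the WARNING above
		if err := releaseBy(workloadID); err != nil && !errors.Is(err, errNotFound) {
			fmt.Println("release by workload ID failed:", err)
		}
	}
}

func main() {
	Release("k8s-pod-network.279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd",
		"localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0")
}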
Jan 29 11:58:21.970214 systemd-networkd[1242]: cali47621202af8: Link UP Jan 29 11:58:21.970915 systemd-networkd[1242]: cali47621202af8: Gained carrier Jan 29 11:58:21.981708 containerd[1579]: 2025-01-29 11:58:21.906 [INFO][4477] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0 coredns-7db6d8ff4d- kube-system 2baaae63-eb27-4f62-99d9-91a996a907b5 860 0 2025-01-29 11:57:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-sp9qb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali47621202af8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sp9qb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--sp9qb-" Jan 29 11:58:21.981708 containerd[1579]: 2025-01-29 11:58:21.906 [INFO][4477] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sp9qb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0" Jan 29 11:58:21.981708 containerd[1579]: 2025-01-29 11:58:21.936 [INFO][4490] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5" HandleID="k8s-pod-network.83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5" Workload="localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0" Jan 29 11:58:21.981708 containerd[1579]: 2025-01-29 11:58:21.944 [INFO][4490] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5" HandleID="k8s-pod-network.83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5" Workload="localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dc0f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-sp9qb", "timestamp":"2025-01-29 11:58:21.936425443 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:58:21.981708 containerd[1579]: 2025-01-29 11:58:21.944 [INFO][4490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:21.981708 containerd[1579]: 2025-01-29 11:58:21.944 [INFO][4490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
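The host side of each pod's veth pair surfaces in systemd-networkd under names like caliafb557aa3fc, calicce6e605487 and cali47621202af8: a fixed cali prefix plus a short unique suffix, which keeps the whole name within the kernel's 15-character interface-name (IFNAMSIZ) limit. The sketch below assumes a hash-derived suffix; the exact inputs and hash Calico uses may differ, so the printed name will not reproduce the ones in this log.

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// hostVethName sketches a prefix-plus-digest naming scheme: "cali"
// plus 11 hex chars is exactly 15 characters. The sha1-of-endpoint
// choice here is an assumption for illustration, not necessarily
// Calico's actual derivation.
func hostVethName(endpoint string) string {
	sum := sha1.Sum([]byte(endpoint))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(hostVethName("localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0"))
}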
Jan 29 11:58:21.981708 containerd[1579]: 2025-01-29 11:58:21.944 [INFO][4490] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:58:21.981708 containerd[1579]: 2025-01-29 11:58:21.946 [INFO][4490] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5" host="localhost" Jan 29 11:58:21.981708 containerd[1579]: 2025-01-29 11:58:21.949 [INFO][4490] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:58:21.981708 containerd[1579]: 2025-01-29 11:58:21.952 [INFO][4490] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:58:21.981708 containerd[1579]: 2025-01-29 11:58:21.954 [INFO][4490] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:58:21.981708 containerd[1579]: 2025-01-29 11:58:21.956 [INFO][4490] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:58:21.981708 containerd[1579]: 2025-01-29 11:58:21.956 [INFO][4490] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5" host="localhost" Jan 29 11:58:21.981708 containerd[1579]: 2025-01-29 11:58:21.957 [INFO][4490] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5 Jan 29 11:58:21.981708 containerd[1579]: 2025-01-29 11:58:21.960 [INFO][4490] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5" host="localhost" Jan 29 11:58:21.981708 containerd[1579]: 2025-01-29 11:58:21.964 [INFO][4490] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5" host="localhost" Jan 29 11:58:21.981708 containerd[1579]: 2025-01-29 11:58:21.964 [INFO][4490] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5" host="localhost" Jan 29 11:58:21.981708 containerd[1579]: 2025-01-29 11:58:21.964 [INFO][4490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
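The coredns endpoint dumps on either side of this claim carry the pod's port list as numorstring.Protocol values (Type:1 with StrVal "UDP"/"TCP") and hex port numbers: 0x35 is 53 (dns, dns-tcp) and 0x23c1 is 9153 (metrics). An int-or-string union like that is typically a small struct with a custom JSON unmarshaller; the field names below are illustrative, not Calico's actual numorstring type.

package main

import (
	"encoding/json"
	"fmt"
)

// Protocol sketches an int-or-string union like the
// numorstring.Protocol values dumped above, where Type:1 means the
// string form ("UDP"/"TCP") is the one in use.
type Protocol struct {
	IsString bool
	Num      uint8
	Str      string
}

func (p *Protocol) UnmarshalJSON(b []byte) error {
	if len(b) > 0 && b[0] == '"' {
		p.IsString = true
		return json.Unmarshal(b, &p.Str)
	}
	return json.Unmarshal(b, &p.Num)
}

func main() {
	var a, b Protocol
	_ = json.Unmarshal([]byte(`"UDP"`), &a)
	_ = json.Unmarshal([]byte(`17`), &b)
	fmt.Println(a.Str, b.Num) // UDP 17
	// The hex ports in the dump decode to familiar numbers:
	fmt.Println(0x35, 0x23c1) // 53 (dns), 9153 (coredns metrics)
}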
Jan 29 11:58:21.981708 containerd[1579]: 2025-01-29 11:58:21.964 [INFO][4490] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5" HandleID="k8s-pod-network.83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5" Workload="localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0" Jan 29 11:58:21.982255 containerd[1579]: 2025-01-29 11:58:21.967 [INFO][4477] cni-plugin/k8s.go 386: Populated endpoint ContainerID="83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sp9qb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2baaae63-eb27-4f62-99d9-91a996a907b5", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-sp9qb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali47621202af8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:21.982255 containerd[1579]: 2025-01-29 11:58:21.968 [INFO][4477] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sp9qb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0" Jan 29 11:58:21.982255 containerd[1579]: 2025-01-29 11:58:21.968 [INFO][4477] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali47621202af8 ContainerID="83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sp9qb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0" Jan 29 11:58:21.982255 containerd[1579]: 2025-01-29 11:58:21.970 [INFO][4477] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sp9qb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0" Jan 29 11:58:21.982255 containerd[1579]: 2025-01-29 11:58:21.970 
[INFO][4477] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sp9qb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2baaae63-eb27-4f62-99d9-91a996a907b5", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5", Pod:"coredns-7db6d8ff4d-sp9qb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali47621202af8", MAC:"ca:cc:86:2d:9d:72", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:21.982255 containerd[1579]: 2025-01-29 11:58:21.978 [INFO][4477] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sp9qb" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0" Jan 29 11:58:22.001588 containerd[1579]: time="2025-01-29T11:58:22.001290390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:58:22.001588 containerd[1579]: time="2025-01-29T11:58:22.001379788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:58:22.001588 containerd[1579]: time="2025-01-29T11:58:22.001413060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:58:22.001833 containerd[1579]: time="2025-01-29T11:58:22.001629156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:58:22.030077 systemd-resolved[1455]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:58:22.063843 containerd[1579]: time="2025-01-29T11:58:22.063789313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sp9qb,Uid:2baaae63-eb27-4f62-99d9-91a996a907b5,Namespace:kube-system,Attempt:1,} returns sandbox id \"83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5\"" Jan 29 11:58:22.064799 kubelet[2744]: E0129 11:58:22.064755 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:58:22.068515 containerd[1579]: time="2025-01-29T11:58:22.068468603Z" level=info msg="CreateContainer within sandbox \"83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:58:22.096219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1796643240.mount: Deactivated successfully. Jan 29 11:58:22.102552 containerd[1579]: time="2025-01-29T11:58:22.102513766Z" level=info msg="CreateContainer within sandbox \"83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"532cdbb6e988335e824e4b7f697a70b089ae3ee1b183308e566f4a91a3c43d8a\"" Jan 29 11:58:22.103310 containerd[1579]: time="2025-01-29T11:58:22.103201486Z" level=info msg="StartContainer for \"532cdbb6e988335e824e4b7f697a70b089ae3ee1b183308e566f4a91a3c43d8a\"" Jan 29 11:58:22.163883 containerd[1579]: time="2025-01-29T11:58:22.163833515Z" level=info msg="StartContainer for \"532cdbb6e988335e824e4b7f697a70b089ae3ee1b183308e566f4a91a3c43d8a\" returns successfully" Jan 29 11:58:22.206073 kubelet[2744]: E0129 11:58:22.205977 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:58:22.219211 kubelet[2744]: I0129 11:58:22.219139 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-sp9qb" podStartSLOduration=31.219121727 podStartE2EDuration="31.219121727s" podCreationTimestamp="2025-01-29 11:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:58:22.218677494 +0000 UTC m=+46.548324797" watchObservedRunningTime="2025-01-29 11:58:22.219121727 +0000 UTC m=+46.548769010" Jan 29 11:58:22.309919 kubelet[2744]: I0129 11:58:22.309741 2744 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:58:22.310678 kubelet[2744]: E0129 11:58:22.310596 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:58:22.479781 systemd-networkd[1242]: calicce6e605487: Gained IPv6LL Jan 29 11:58:22.774486 containerd[1579]: time="2025-01-29T11:58:22.774409635Z" level=info msg="StopPodSandbox for \"4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf\"" Jan 29 11:58:22.774486 containerd[1579]: time="2025-01-29T11:58:22.774465009Z" level=info msg="StopPodSandbox for \"7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428\"" Jan 29 11:58:22.799878 systemd-networkd[1242]: 
caliafb557aa3fc: Gained IPv6LL Jan 29 11:58:22.932918 containerd[1579]: 2025-01-29 11:58:22.885 [INFO][4673] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" Jan 29 11:58:22.932918 containerd[1579]: 2025-01-29 11:58:22.885 [INFO][4673] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" iface="eth0" netns="/var/run/netns/cni-f62c7b1b-2d51-a2a0-91bc-873fd97ecf40" Jan 29 11:58:22.932918 containerd[1579]: 2025-01-29 11:58:22.886 [INFO][4673] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" iface="eth0" netns="/var/run/netns/cni-f62c7b1b-2d51-a2a0-91bc-873fd97ecf40" Jan 29 11:58:22.932918 containerd[1579]: 2025-01-29 11:58:22.886 [INFO][4673] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" iface="eth0" netns="/var/run/netns/cni-f62c7b1b-2d51-a2a0-91bc-873fd97ecf40" Jan 29 11:58:22.932918 containerd[1579]: 2025-01-29 11:58:22.886 [INFO][4673] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" Jan 29 11:58:22.932918 containerd[1579]: 2025-01-29 11:58:22.886 [INFO][4673] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" Jan 29 11:58:22.932918 containerd[1579]: 2025-01-29 11:58:22.910 [INFO][4693] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" HandleID="k8s-pod-network.4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" Workload="localhost-k8s-coredns--7db6d8ff4d--29qqz-eth0" Jan 29 11:58:22.932918 containerd[1579]: 2025-01-29 11:58:22.910 [INFO][4693] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:22.932918 containerd[1579]: 2025-01-29 11:58:22.910 [INFO][4693] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:58:22.932918 containerd[1579]: 2025-01-29 11:58:22.921 [WARNING][4693] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" HandleID="k8s-pod-network.4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" Workload="localhost-k8s-coredns--7db6d8ff4d--29qqz-eth0" Jan 29 11:58:22.932918 containerd[1579]: 2025-01-29 11:58:22.921 [INFO][4693] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" HandleID="k8s-pod-network.4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" Workload="localhost-k8s-coredns--7db6d8ff4d--29qqz-eth0" Jan 29 11:58:22.932918 containerd[1579]: 2025-01-29 11:58:22.922 [INFO][4693] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:58:22.932918 containerd[1579]: 2025-01-29 11:58:22.928 [INFO][4673] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" Jan 29 11:58:22.934746 containerd[1579]: time="2025-01-29T11:58:22.934691567Z" level=info msg="TearDown network for sandbox \"4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf\" successfully" Jan 29 11:58:22.935219 containerd[1579]: time="2025-01-29T11:58:22.935187558Z" level=info msg="StopPodSandbox for \"4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf\" returns successfully" Jan 29 11:58:22.937051 kubelet[2744]: E0129 11:58:22.937019 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:58:22.938387 systemd[1]: run-netns-cni\x2df62c7b1b\x2d2d51\x2da2a0\x2d91bc\x2d873fd97ecf40.mount: Deactivated successfully. Jan 29 11:58:22.940110 containerd[1579]: time="2025-01-29T11:58:22.940074908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-29qqz,Uid:562f7dc1-fccc-4836-832d-33f596ec71b8,Namespace:kube-system,Attempt:1,}" Jan 29 11:58:22.945777 containerd[1579]: 2025-01-29 11:58:22.882 [INFO][4674] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" Jan 29 11:58:22.945777 containerd[1579]: 2025-01-29 11:58:22.882 [INFO][4674] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" iface="eth0" netns="/var/run/netns/cni-4eba5003-4449-eb9e-e0bd-cbd5b59569b7" Jan 29 11:58:22.945777 containerd[1579]: 2025-01-29 11:58:22.882 [INFO][4674] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" iface="eth0" netns="/var/run/netns/cni-4eba5003-4449-eb9e-e0bd-cbd5b59569b7" Jan 29 11:58:22.945777 containerd[1579]: 2025-01-29 11:58:22.883 [INFO][4674] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" iface="eth0" netns="/var/run/netns/cni-4eba5003-4449-eb9e-e0bd-cbd5b59569b7" Jan 29 11:58:22.945777 containerd[1579]: 2025-01-29 11:58:22.883 [INFO][4674] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" Jan 29 11:58:22.945777 containerd[1579]: 2025-01-29 11:58:22.883 [INFO][4674] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" Jan 29 11:58:22.945777 containerd[1579]: 2025-01-29 11:58:22.918 [INFO][4691] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" HandleID="k8s-pod-network.7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" Workload="localhost-k8s-calico--apiserver--77c5f74b87--wnvvw-eth0" Jan 29 11:58:22.945777 containerd[1579]: 2025-01-29 11:58:22.918 [INFO][4691] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:22.945777 containerd[1579]: 2025-01-29 11:58:22.922 [INFO][4691] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:58:22.945777 containerd[1579]: 2025-01-29 11:58:22.931 [WARNING][4691] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" HandleID="k8s-pod-network.7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" Workload="localhost-k8s-calico--apiserver--77c5f74b87--wnvvw-eth0" Jan 29 11:58:22.945777 containerd[1579]: 2025-01-29 11:58:22.931 [INFO][4691] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" HandleID="k8s-pod-network.7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" Workload="localhost-k8s-calico--apiserver--77c5f74b87--wnvvw-eth0" Jan 29 11:58:22.945777 containerd[1579]: 2025-01-29 11:58:22.933 [INFO][4691] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:58:22.945777 containerd[1579]: 2025-01-29 11:58:22.938 [INFO][4674] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" Jan 29 11:58:22.946216 containerd[1579]: time="2025-01-29T11:58:22.946021917Z" level=info msg="TearDown network for sandbox \"7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428\" successfully" Jan 29 11:58:22.946216 containerd[1579]: time="2025-01-29T11:58:22.946048978Z" level=info msg="StopPodSandbox for \"7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428\" returns successfully" Jan 29 11:58:22.947663 containerd[1579]: time="2025-01-29T11:58:22.946569635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c5f74b87-wnvvw,Uid:26202b2c-e1f1-4083-9026-183a5e92161f,Namespace:calico-apiserver,Attempt:1,}" Jan 29 11:58:22.949634 systemd[1]: run-netns-cni\x2d4eba5003\x2d4449\x2deb9e\x2de0bd\x2dcbd5b59569b7.mount: Deactivated successfully. Jan 29 11:58:23.094569 systemd-networkd[1242]: calif43a043a0f2: Link UP Jan 29 11:58:23.095542 systemd-networkd[1242]: calif43a043a0f2: Gained carrier Jan 29 11:58:23.113908 containerd[1579]: 2025-01-29 11:58:23.017 [INFO][4718] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77c5f74b87--wnvvw-eth0 calico-apiserver-77c5f74b87- calico-apiserver 26202b2c-e1f1-4083-9026-183a5e92161f 884 0 2025-01-29 11:57:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77c5f74b87 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-77c5f74b87-wnvvw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif43a043a0f2 [] []}} ContainerID="c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc" Namespace="calico-apiserver" Pod="calico-apiserver-77c5f74b87-wnvvw" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5f74b87--wnvvw-" Jan 29 11:58:23.113908 containerd[1579]: 2025-01-29 11:58:23.017 [INFO][4718] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc" Namespace="calico-apiserver" Pod="calico-apiserver-77c5f74b87-wnvvw" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5f74b87--wnvvw-eth0" Jan 29 11:58:23.113908 containerd[1579]: 2025-01-29 11:58:23.053 [INFO][4739] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc" 
HandleID="k8s-pod-network.c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc" Workload="localhost-k8s-calico--apiserver--77c5f74b87--wnvvw-eth0" Jan 29 11:58:23.113908 containerd[1579]: 2025-01-29 11:58:23.062 [INFO][4739] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc" HandleID="k8s-pod-network.c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc" Workload="localhost-k8s-calico--apiserver--77c5f74b87--wnvvw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000294fd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77c5f74b87-wnvvw", "timestamp":"2025-01-29 11:58:23.053670514 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:58:23.113908 containerd[1579]: 2025-01-29 11:58:23.062 [INFO][4739] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:23.113908 containerd[1579]: 2025-01-29 11:58:23.062 [INFO][4739] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:58:23.113908 containerd[1579]: 2025-01-29 11:58:23.062 [INFO][4739] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:58:23.113908 containerd[1579]: 2025-01-29 11:58:23.064 [INFO][4739] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc" host="localhost" Jan 29 11:58:23.113908 containerd[1579]: 2025-01-29 11:58:23.068 [INFO][4739] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:58:23.113908 containerd[1579]: 2025-01-29 11:58:23.071 [INFO][4739] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:58:23.113908 containerd[1579]: 2025-01-29 11:58:23.074 [INFO][4739] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:58:23.113908 containerd[1579]: 2025-01-29 11:58:23.076 [INFO][4739] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:58:23.113908 containerd[1579]: 2025-01-29 11:58:23.076 [INFO][4739] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc" host="localhost" Jan 29 11:58:23.113908 containerd[1579]: 2025-01-29 11:58:23.077 [INFO][4739] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc Jan 29 11:58:23.113908 containerd[1579]: 2025-01-29 11:58:23.081 [INFO][4739] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc" host="localhost" Jan 29 11:58:23.113908 containerd[1579]: 2025-01-29 11:58:23.086 [INFO][4739] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc" host="localhost" Jan 29 11:58:23.113908 containerd[1579]: 2025-01-29 11:58:23.086 [INFO][4739] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] 
handle="k8s-pod-network.c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc" host="localhost" Jan 29 11:58:23.113908 containerd[1579]: 2025-01-29 11:58:23.086 [INFO][4739] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:58:23.113908 containerd[1579]: 2025-01-29 11:58:23.086 [INFO][4739] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc" HandleID="k8s-pod-network.c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc" Workload="localhost-k8s-calico--apiserver--77c5f74b87--wnvvw-eth0" Jan 29 11:58:23.114470 containerd[1579]: 2025-01-29 11:58:23.089 [INFO][4718] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc" Namespace="calico-apiserver" Pod="calico-apiserver-77c5f74b87-wnvvw" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5f74b87--wnvvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c5f74b87--wnvvw-eth0", GenerateName:"calico-apiserver-77c5f74b87-", Namespace:"calico-apiserver", SelfLink:"", UID:"26202b2c-e1f1-4083-9026-183a5e92161f", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c5f74b87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77c5f74b87-wnvvw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif43a043a0f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:23.114470 containerd[1579]: 2025-01-29 11:58:23.089 [INFO][4718] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc" Namespace="calico-apiserver" Pod="calico-apiserver-77c5f74b87-wnvvw" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5f74b87--wnvvw-eth0" Jan 29 11:58:23.114470 containerd[1579]: 2025-01-29 11:58:23.090 [INFO][4718] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif43a043a0f2 ContainerID="c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc" Namespace="calico-apiserver" Pod="calico-apiserver-77c5f74b87-wnvvw" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5f74b87--wnvvw-eth0" Jan 29 11:58:23.114470 containerd[1579]: 2025-01-29 11:58:23.095 [INFO][4718] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc" Namespace="calico-apiserver" Pod="calico-apiserver-77c5f74b87-wnvvw" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5f74b87--wnvvw-eth0" Jan 29 11:58:23.114470 containerd[1579]: 2025-01-29 11:58:23.098 [INFO][4718] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc" Namespace="calico-apiserver" Pod="calico-apiserver-77c5f74b87-wnvvw" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5f74b87--wnvvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c5f74b87--wnvvw-eth0", GenerateName:"calico-apiserver-77c5f74b87-", Namespace:"calico-apiserver", SelfLink:"", UID:"26202b2c-e1f1-4083-9026-183a5e92161f", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c5f74b87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc", Pod:"calico-apiserver-77c5f74b87-wnvvw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif43a043a0f2", MAC:"e2:fb:6c:e0:0d:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:23.114470 containerd[1579]: 2025-01-29 11:58:23.109 [INFO][4718] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc" Namespace="calico-apiserver" Pod="calico-apiserver-77c5f74b87-wnvvw" WorkloadEndpoint="localhost-k8s-calico--apiserver--77c5f74b87--wnvvw-eth0" Jan 29 11:58:23.137346 systemd-networkd[1242]: cali8b8a96d4ff8: Link UP Jan 29 11:58:23.137986 systemd-networkd[1242]: cali8b8a96d4ff8: Gained carrier Jan 29 11:58:23.147536 containerd[1579]: time="2025-01-29T11:58:23.147454831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:58:23.147692 containerd[1579]: time="2025-01-29T11:58:23.147549418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:58:23.147692 containerd[1579]: time="2025-01-29T11:58:23.147597098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:58:23.147792 containerd[1579]: time="2025-01-29T11:58:23.147755285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:58:23.154618 containerd[1579]: 2025-01-29 11:58:23.019 [INFO][4711] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--29qqz-eth0 coredns-7db6d8ff4d- kube-system 562f7dc1-fccc-4836-832d-33f596ec71b8 885 0 2025-01-29 11:57:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-29qqz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8b8a96d4ff8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-29qqz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--29qqz-" Jan 29 11:58:23.154618 containerd[1579]: 2025-01-29 11:58:23.019 [INFO][4711] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-29qqz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--29qqz-eth0" Jan 29 11:58:23.154618 containerd[1579]: 2025-01-29 11:58:23.054 [INFO][4744] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa" HandleID="k8s-pod-network.c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa" Workload="localhost-k8s-coredns--7db6d8ff4d--29qqz-eth0" Jan 29 11:58:23.154618 containerd[1579]: 2025-01-29 11:58:23.063 [INFO][4744] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa" HandleID="k8s-pod-network.c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa" Workload="localhost-k8s-coredns--7db6d8ff4d--29qqz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000390820), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-29qqz", "timestamp":"2025-01-29 11:58:23.05399912 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:58:23.154618 containerd[1579]: 2025-01-29 11:58:23.063 [INFO][4744] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:23.154618 containerd[1579]: 2025-01-29 11:58:23.086 [INFO][4744] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
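Every add/teardown cycle in this section is containerd executing the CNI binary under the same contract: per the CNI spec, CNI_COMMAND, CNI_CONTAINERID, CNI_NETNS and CNI_IFNAME arrive in the environment, the network config arrives on stdin, and a JSON result goes to stdout. A do-nothing skeleton of that contract follows; the hard-coded address is purely illustrative, since a real ADD claims it from a block as shown earlier.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// A minimal CNI plugin skeleton showing the contract the log's
// ADD/DEL cycles follow. Not Calico; just the shape of the protocol.
func main() {
	switch os.Getenv("CNI_COMMAND") {
	case "ADD":
		result := map[string]any{
			"cniVersion": "0.4.0",
			"ips": []map[string]string{
				// Hard-coded for illustration only.
				{"version": "4", "address": "192.168.88.133/32"},
			},
		}
		json.NewEncoder(os.Stdout).Encode(result)
	case "DEL":
		// DEL must be idempotent: releasing an address that is
		// already gone is logged and ignored, as in the WARNINGs above.
	case "VERSION":
		fmt.Println(`{"cniVersion":"0.4.0","supportedVersions":["0.3.1","0.4.0"]}`)
	default:
		fmt.Fprintln(os.Stderr, "unknown CNI_COMMAND")
		os.Exit(1)
	}
}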
Jan 29 11:58:23.154618 containerd[1579]: 2025-01-29 11:58:23.086 [INFO][4744] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:58:23.154618 containerd[1579]: 2025-01-29 11:58:23.089 [INFO][4744] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa" host="localhost" Jan 29 11:58:23.154618 containerd[1579]: 2025-01-29 11:58:23.095 [INFO][4744] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:58:23.154618 containerd[1579]: 2025-01-29 11:58:23.101 [INFO][4744] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:58:23.154618 containerd[1579]: 2025-01-29 11:58:23.103 [INFO][4744] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:58:23.154618 containerd[1579]: 2025-01-29 11:58:23.105 [INFO][4744] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:58:23.154618 containerd[1579]: 2025-01-29 11:58:23.105 [INFO][4744] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa" host="localhost" Jan 29 11:58:23.154618 containerd[1579]: 2025-01-29 11:58:23.109 [INFO][4744] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa Jan 29 11:58:23.154618 containerd[1579]: 2025-01-29 11:58:23.115 [INFO][4744] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa" host="localhost" Jan 29 11:58:23.154618 containerd[1579]: 2025-01-29 11:58:23.126 [INFO][4744] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa" host="localhost" Jan 29 11:58:23.154618 containerd[1579]: 2025-01-29 11:58:23.126 [INFO][4744] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa" host="localhost" Jan 29 11:58:23.154618 containerd[1579]: 2025-01-29 11:58:23.126 [INFO][4744] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
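
The ipam/ipam.go entries above trace Calico's block-based allocation end to end: take the host-wide IPAM lock, look up the host's block affinities, confirm the affine block 192.168.88.128/26, claim one address from it, then release the lock (the entry that follows reports the result, 192.168.88.133/26). A minimal, self-contained Go sketch of that allocation order, using an in-memory map in place of Calico's real datastore — the type and helper names here are illustrative, not libcalico-go API:

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

// block models a Calico IPAM affinity block such as 192.168.88.128/26.
// Real Calico persists blocks in the datastore; this is an in-memory stand-in.
type block struct {
	mu        sync.Mutex        // stands in for the host-wide IPAM lock in the log
	cidr      *net.IPNet        // the affine block
	allocated map[string]string // IP -> handle, e.g. "k8s-pod-network.<sandbox-id>"
}

// autoAssign mirrors the logged order: lock, walk the block, claim the next
// free address for the handle, then release the lock on return.
func (b *block) autoAssign(handle string) (net.IP, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	ip := b.cidr.IP.Mask(b.cidr.Mask)
	for ; b.cidr.Contains(ip); ip = next(ip) {
		if _, used := b.allocated[ip.String()]; !used {
			b.allocated[ip.String()] = handle
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

// next returns the successor of an IP address.
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	b := &block{cidr: cidr, allocated: map[string]string{
		// addresses already handed out earlier in the log
		"192.168.88.128": "reserved", "192.168.88.129": "pod-a",
		"192.168.88.130": "pod-b", "192.168.88.131": "pod-c",
		"192.168.88.132": "calico-apiserver-77c5f74b87-wnvvw",
	}}
	ip, _ := b.autoAssign("k8s-pod-network.<sandbox-id>")
	fmt.Println(ip) // 192.168.88.133, matching the address claimed above
}
```

Serializing every claim behind one lock is why each assignment in this log shows up as a matched acquire/release pair even while several pods are being networked at once.
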
Jan 29 11:58:23.154618 containerd[1579]: 2025-01-29 11:58:23.126 [INFO][4744] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa" HandleID="k8s-pod-network.c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa" Workload="localhost-k8s-coredns--7db6d8ff4d--29qqz-eth0" Jan 29 11:58:23.155477 containerd[1579]: 2025-01-29 11:58:23.130 [INFO][4711] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-29qqz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--29qqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--29qqz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"562f7dc1-fccc-4836-832d-33f596ec71b8", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-29qqz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8b8a96d4ff8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:23.155477 containerd[1579]: 2025-01-29 11:58:23.130 [INFO][4711] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-29qqz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--29qqz-eth0" Jan 29 11:58:23.155477 containerd[1579]: 2025-01-29 11:58:23.130 [INFO][4711] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b8a96d4ff8 ContainerID="c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-29qqz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--29qqz-eth0" Jan 29 11:58:23.155477 containerd[1579]: 2025-01-29 11:58:23.139 [INFO][4711] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-29qqz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--29qqz-eth0" Jan 29 11:58:23.155477 containerd[1579]: 2025-01-29 11:58:23.139 
[INFO][4711] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-29qqz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--29qqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--29qqz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"562f7dc1-fccc-4836-832d-33f596ec71b8", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa", Pod:"coredns-7db6d8ff4d-29qqz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8b8a96d4ff8", MAC:"66:e5:31:0a:fe:d3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:23.155477 containerd[1579]: 2025-01-29 11:58:23.149 [INFO][4711] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-29qqz" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--29qqz-eth0" Jan 29 11:58:23.186489 containerd[1579]: time="2025-01-29T11:58:23.186088019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:58:23.186489 containerd[1579]: time="2025-01-29T11:58:23.186165364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:58:23.186489 containerd[1579]: time="2025-01-29T11:58:23.186180623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:58:23.186489 containerd[1579]: time="2025-01-29T11:58:23.186274769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:58:23.187881 systemd-resolved[1455]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:58:23.209009 kubelet[2744]: E0129 11:58:23.207992 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:58:23.210131 kubelet[2744]: E0129 11:58:23.209991 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:58:23.227754 systemd-resolved[1455]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:58:23.238277 containerd[1579]: time="2025-01-29T11:58:23.238231456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c5f74b87-wnvvw,Uid:26202b2c-e1f1-4083-9026-183a5e92161f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc\"" Jan 29 11:58:23.246837 systemd-networkd[1242]: cali47621202af8: Gained IPv6LL Jan 29 11:58:23.268119 containerd[1579]: time="2025-01-29T11:58:23.268063971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-29qqz,Uid:562f7dc1-fccc-4836-832d-33f596ec71b8,Namespace:kube-system,Attempt:1,} returns sandbox id \"c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa\"" Jan 29 11:58:23.268880 kubelet[2744]: E0129 11:58:23.268848 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:58:23.271248 containerd[1579]: time="2025-01-29T11:58:23.270784505Z" level=info msg="CreateContainer within sandbox \"c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:58:23.509810 containerd[1579]: time="2025-01-29T11:58:23.509758674Z" level=info msg="CreateContainer within sandbox \"c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7e47db3251ee17a572c05b9da2f71fb74f8dc75a50fb40cf81a384f4ec140f1b\"" Jan 29 11:58:23.510531 containerd[1579]: time="2025-01-29T11:58:23.510505134Z" level=info msg="StartContainer for \"7e47db3251ee17a572c05b9da2f71fb74f8dc75a50fb40cf81a384f4ec140f1b\"" Jan 29 11:58:23.535972 containerd[1579]: time="2025-01-29T11:58:23.535905894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:23.537202 containerd[1579]: time="2025-01-29T11:58:23.536922501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 29 11:58:23.538690 containerd[1579]: time="2025-01-29T11:58:23.538656916Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:23.541324 containerd[1579]: time="2025-01-29T11:58:23.541202011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 
11:58:23.542235 containerd[1579]: time="2025-01-29T11:58:23.541743638Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.175281843s" Jan 29 11:58:23.542235 containerd[1579]: time="2025-01-29T11:58:23.541780106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 11:58:23.544812 containerd[1579]: time="2025-01-29T11:58:23.544771359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 11:58:23.546640 containerd[1579]: time="2025-01-29T11:58:23.546514168Z" level=info msg="CreateContainer within sandbox \"1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:58:23.563886 containerd[1579]: time="2025-01-29T11:58:23.563837292Z" level=info msg="CreateContainer within sandbox \"1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4dd01575e83f8382c3dd87cd3ffa4632315560840e4b2ee0078f5370e2ce30d1\"" Jan 29 11:58:23.564848 containerd[1579]: time="2025-01-29T11:58:23.564490839Z" level=info msg="StartContainer for \"4dd01575e83f8382c3dd87cd3ffa4632315560840e4b2ee0078f5370e2ce30d1\"" Jan 29 11:58:23.579418 containerd[1579]: time="2025-01-29T11:58:23.579281881Z" level=info msg="StartContainer for \"7e47db3251ee17a572c05b9da2f71fb74f8dc75a50fb40cf81a384f4ec140f1b\" returns successfully" Jan 29 11:58:23.706178 containerd[1579]: time="2025-01-29T11:58:23.706090302Z" level=info msg="StartContainer for \"4dd01575e83f8382c3dd87cd3ffa4632315560840e4b2ee0078f5370e2ce30d1\" returns successfully" Jan 29 11:58:24.006877 systemd[1]: Started sshd@10-10.0.0.92:22-10.0.0.1:34690.service - OpenSSH per-connection server daemon (10.0.0.1:34690). Jan 29 11:58:24.075933 sshd[4947]: Accepted publickey for core from 10.0.0.1 port 34690 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:58:24.077923 sshd[4947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:58:24.091651 systemd-logind[1565]: New session 11 of user core. Jan 29 11:58:24.098918 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:58:24.218712 kubelet[2744]: E0129 11:58:24.217631 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:58:24.218712 kubelet[2744]: E0129 11:58:24.218213 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:58:24.245586 sshd[4947]: pam_unix(sshd:session): session closed for user core Jan 29 11:58:24.254986 systemd[1]: Started sshd@11-10.0.0.92:22-10.0.0.1:34698.service - OpenSSH per-connection server daemon (10.0.0.1:34698). Jan 29 11:58:24.255645 systemd[1]: sshd@10-10.0.0.92:22-10.0.0.1:34690.service: Deactivated successfully. Jan 29 11:58:24.260036 systemd[1]: session-11.scope: Deactivated successfully. 
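
The PullImage / CreateContainer / StartContainer entries in this stretch are containerd's CRI service working through the standard image-and-task lifecycle. For orientation, the same lifecycle driven through containerd's public Go client looks roughly like this — a sketch only: the CRI plugin goes through internal interfaces, and the socket path and container ID below are assumptions (the image ref is the one pulled above):

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Kubernetes-managed containers live in containerd's "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Rough equivalent of the logged PullImage step.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.29.1",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Rough equivalent of CreateContainer: metadata, a snapshot, an OCI spec.
	container, err := client.NewContainer(ctx, "calico-apiserver-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("calico-apiserver-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// Rough equivalent of StartContainer: create a task, then start it.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Printf("started %s", container.ID())
}
```
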
Jan 29 11:58:24.265924 systemd-logind[1565]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:58:24.269530 systemd-logind[1565]: Removed session 11. Jan 29 11:58:24.284327 kubelet[2744]: I0129 11:58:24.284250 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77c5f74b87-vk8s6" podStartSLOduration=26.105219616 podStartE2EDuration="28.284225597s" podCreationTimestamp="2025-01-29 11:57:56 +0000 UTC" firstStartedPulling="2025-01-29 11:58:21.365456287 +0000 UTC m=+45.695103570" lastFinishedPulling="2025-01-29 11:58:23.544462268 +0000 UTC m=+47.874109551" observedRunningTime="2025-01-29 11:58:24.264301545 +0000 UTC m=+48.593948838" watchObservedRunningTime="2025-01-29 11:58:24.284225597 +0000 UTC m=+48.613872880" Jan 29 11:58:24.284755 kubelet[2744]: I0129 11:58:24.284720 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-29qqz" podStartSLOduration=33.28471229 podStartE2EDuration="33.28471229s" podCreationTimestamp="2025-01-29 11:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:58:24.284092026 +0000 UTC m=+48.613739309" watchObservedRunningTime="2025-01-29 11:58:24.28471229 +0000 UTC m=+48.614359573" Jan 29 11:58:24.296653 sshd[4961]: Accepted publickey for core from 10.0.0.1 port 34698 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:58:24.298777 sshd[4961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:58:24.306789 systemd-logind[1565]: New session 12 of user core. Jan 29 11:58:24.313084 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:58:24.463041 systemd-networkd[1242]: cali8b8a96d4ff8: Gained IPv6LL Jan 29 11:58:24.488317 sshd[4961]: pam_unix(sshd:session): session closed for user core Jan 29 11:58:24.497886 systemd[1]: Started sshd@12-10.0.0.92:22-10.0.0.1:34706.service - OpenSSH per-connection server daemon (10.0.0.1:34706). Jan 29 11:58:24.498582 systemd[1]: sshd@11-10.0.0.92:22-10.0.0.1:34698.service: Deactivated successfully. Jan 29 11:58:24.505738 systemd-logind[1565]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:58:24.506265 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:58:24.507635 systemd-logind[1565]: Removed session 12. Jan 29 11:58:24.620479 sshd[4979]: Accepted publickey for core from 10.0.0.1 port 34706 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:58:24.622534 sshd[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:58:24.628192 systemd-logind[1565]: New session 13 of user core. Jan 29 11:58:24.637111 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:58:24.774682 containerd[1579]: time="2025-01-29T11:58:24.774067780Z" level=info msg="StopPodSandbox for \"93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a\"" Jan 29 11:58:24.796981 sshd[4979]: pam_unix(sshd:session): session closed for user core Jan 29 11:58:24.801394 systemd[1]: sshd@12-10.0.0.92:22-10.0.0.1:34706.service: Deactivated successfully. Jan 29 11:58:24.806156 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:58:24.807790 systemd-logind[1565]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:58:24.808860 systemd-logind[1565]: Removed session 13. 
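
The pod_startup_latency_tracker entries are directly checkable from their own timestamps: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same window minus the image-pull interval (lastFinishedPulling minus firstStartedPulling). For calico-apiserver-77c5f74b87-vk8s6 above: 11:58:24.284225597 − 11:57:56 = 28.284225597s end to end, less a 2.179005981s pull window, gives exactly the reported 26.105219616s. The same arithmetic in Go, using the timestamps copied from that entry:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-01-29 11:57:56 +0000 UTC")
	firstPull := mustParse("2025-01-29 11:58:21.365456287 +0000 UTC")
	lastPull := mustParse("2025-01-29 11:58:23.544462268 +0000 UTC")
	running := mustParse("2025-01-29 11:58:24.284225597 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration: 28.284225597s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: 26.105219616s
	fmt.Println(e2e, slo)
}
```

This also explains the coredns-7db6d8ff4d-29qqz entry just above, where both pull timestamps are the zero value "0001-01-01 00:00:00": no pull happened, so its SLO duration equals its E2E duration (33.28471229s).
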
Jan 29 11:58:24.892960 containerd[1579]: 2025-01-29 11:58:24.855 [INFO][5011] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" Jan 29 11:58:24.892960 containerd[1579]: 2025-01-29 11:58:24.856 [INFO][5011] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" iface="eth0" netns="/var/run/netns/cni-21b508d5-4e5e-ccd6-da9a-04f2269885cd" Jan 29 11:58:24.892960 containerd[1579]: 2025-01-29 11:58:24.856 [INFO][5011] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" iface="eth0" netns="/var/run/netns/cni-21b508d5-4e5e-ccd6-da9a-04f2269885cd" Jan 29 11:58:24.892960 containerd[1579]: 2025-01-29 11:58:24.857 [INFO][5011] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" iface="eth0" netns="/var/run/netns/cni-21b508d5-4e5e-ccd6-da9a-04f2269885cd" Jan 29 11:58:24.892960 containerd[1579]: 2025-01-29 11:58:24.857 [INFO][5011] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" Jan 29 11:58:24.892960 containerd[1579]: 2025-01-29 11:58:24.857 [INFO][5011] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" Jan 29 11:58:24.892960 containerd[1579]: 2025-01-29 11:58:24.879 [INFO][5022] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" HandleID="k8s-pod-network.93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" Workload="localhost-k8s-csi--node--driver--dmc9g-eth0" Jan 29 11:58:24.892960 containerd[1579]: 2025-01-29 11:58:24.879 [INFO][5022] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:24.892960 containerd[1579]: 2025-01-29 11:58:24.879 [INFO][5022] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:58:24.892960 containerd[1579]: 2025-01-29 11:58:24.885 [WARNING][5022] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" HandleID="k8s-pod-network.93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" Workload="localhost-k8s-csi--node--driver--dmc9g-eth0" Jan 29 11:58:24.892960 containerd[1579]: 2025-01-29 11:58:24.885 [INFO][5022] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" HandleID="k8s-pod-network.93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" Workload="localhost-k8s-csi--node--driver--dmc9g-eth0" Jan 29 11:58:24.892960 containerd[1579]: 2025-01-29 11:58:24.887 [INFO][5022] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:58:24.892960 containerd[1579]: 2025-01-29 11:58:24.890 [INFO][5011] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" Jan 29 11:58:24.893987 containerd[1579]: time="2025-01-29T11:58:24.893088810Z" level=info msg="TearDown network for sandbox \"93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a\" successfully" Jan 29 11:58:24.893987 containerd[1579]: time="2025-01-29T11:58:24.893120690Z" level=info msg="StopPodSandbox for \"93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a\" returns successfully" Jan 29 11:58:24.894378 containerd[1579]: time="2025-01-29T11:58:24.894339917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dmc9g,Uid:4b8569b7-17f3-41f5-af84-56efb8c2c37a,Namespace:calico-system,Attempt:1,}" Jan 29 11:58:24.897420 systemd[1]: run-netns-cni\x2d21b508d5\x2d4e5e\x2dccd6\x2dda9a\x2d04f2269885cd.mount: Deactivated successfully. Jan 29 11:58:25.027461 systemd-networkd[1242]: caliaf92ea63854: Link UP Jan 29 11:58:25.028945 systemd-networkd[1242]: caliaf92ea63854: Gained carrier Jan 29 11:58:25.047453 containerd[1579]: 2025-01-29 11:58:24.952 [INFO][5036] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--dmc9g-eth0 csi-node-driver- calico-system 4b8569b7-17f3-41f5-af84-56efb8c2c37a 938 0 2025-01-29 11:57:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-dmc9g eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliaf92ea63854 [] []}} ContainerID="6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30" Namespace="calico-system" Pod="csi-node-driver-dmc9g" WorkloadEndpoint="localhost-k8s-csi--node--driver--dmc9g-" Jan 29 11:58:25.047453 containerd[1579]: 2025-01-29 11:58:24.953 [INFO][5036] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30" Namespace="calico-system" Pod="csi-node-driver-dmc9g" WorkloadEndpoint="localhost-k8s-csi--node--driver--dmc9g-eth0" Jan 29 11:58:25.047453 containerd[1579]: 2025-01-29 11:58:24.980 [INFO][5044] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30" HandleID="k8s-pod-network.6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30" Workload="localhost-k8s-csi--node--driver--dmc9g-eth0" Jan 29 11:58:25.047453 containerd[1579]: 2025-01-29 11:58:24.989 [INFO][5044] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30" HandleID="k8s-pod-network.6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30" Workload="localhost-k8s-csi--node--driver--dmc9g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004e2f20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-dmc9g", "timestamp":"2025-01-29 11:58:24.980243392 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:58:25.047453 containerd[1579]: 2025-01-29 11:58:24.989 
[INFO][5044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:25.047453 containerd[1579]: 2025-01-29 11:58:24.989 [INFO][5044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:58:25.047453 containerd[1579]: 2025-01-29 11:58:24.989 [INFO][5044] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:58:25.047453 containerd[1579]: 2025-01-29 11:58:24.992 [INFO][5044] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30" host="localhost" Jan 29 11:58:25.047453 containerd[1579]: 2025-01-29 11:58:24.995 [INFO][5044] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:58:25.047453 containerd[1579]: 2025-01-29 11:58:25.000 [INFO][5044] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:58:25.047453 containerd[1579]: 2025-01-29 11:58:25.001 [INFO][5044] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:58:25.047453 containerd[1579]: 2025-01-29 11:58:25.003 [INFO][5044] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:58:25.047453 containerd[1579]: 2025-01-29 11:58:25.003 [INFO][5044] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30" host="localhost" Jan 29 11:58:25.047453 containerd[1579]: 2025-01-29 11:58:25.005 [INFO][5044] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30 Jan 29 11:58:25.047453 containerd[1579]: 2025-01-29 11:58:25.009 [INFO][5044] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30" host="localhost" Jan 29 11:58:25.047453 containerd[1579]: 2025-01-29 11:58:25.017 [INFO][5044] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30" host="localhost" Jan 29 11:58:25.047453 containerd[1579]: 2025-01-29 11:58:25.017 [INFO][5044] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30" host="localhost" Jan 29 11:58:25.047453 containerd[1579]: 2025-01-29 11:58:25.017 [INFO][5044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
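
The StopPodSandbox teardown a few entries earlier (sandbox 93da5fcd7c1f…) runs the mirror-image IPAM path: under the same host-wide lock, addresses are released by HandleID, and a handle with no recorded allocations just produces the "Asked to release address but it doesn't exist. Ignoring" WARNING rather than an error, after which release-by-workloadID is tried. A standalone sketch of that release-by-handle behavior — types and the truncated IDs are illustrative, not libcalico-go API:

```go
package main

import "fmt"

// allocations maps IP -> IPAM handle, standing in for a Calico block's
// per-address attribute records (e.g. "k8s-pod-network.<sandbox-id>").
type allocations map[string]string

// releaseByHandle mirrors ipam/ipam_plugin.go's "Releasing address using
// handleID" path: free every address recorded under the handle. An unknown
// handle frees nothing, matching the WARNING-and-ignore behavior above.
func (a allocations) releaseByHandle(handle string) int {
	freed := 0
	for ip, h := range a {
		if h == handle {
			delete(a, ip)
			freed++
		}
	}
	if freed == 0 {
		fmt.Println("Asked to release address but it doesn't exist. Ignoring")
	}
	return freed
}

func main() {
	a := allocations{"192.168.88.134": "k8s-pod-network.6116e907f46d..."}
	a.releaseByHandle("k8s-pod-network.93da5fcd7c1f...") // unknown handle: 0 freed
	a.releaseByHandle("k8s-pod-network.6116e907f46d...") // frees .134
}
```

Releasing the old sandbox's handle before the csi-node-driver pod is re-networked is what lets the block hand 192.168.88.134/26 straight back out in the assignment sequence above.
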
Jan 29 11:58:25.047453 containerd[1579]: 2025-01-29 11:58:25.017 [INFO][5044] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30" HandleID="k8s-pod-network.6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30" Workload="localhost-k8s-csi--node--driver--dmc9g-eth0" Jan 29 11:58:25.048381 containerd[1579]: 2025-01-29 11:58:25.023 [INFO][5036] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30" Namespace="calico-system" Pod="csi-node-driver-dmc9g" WorkloadEndpoint="localhost-k8s-csi--node--driver--dmc9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dmc9g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4b8569b7-17f3-41f5-af84-56efb8c2c37a", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-dmc9g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaf92ea63854", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:25.048381 containerd[1579]: 2025-01-29 11:58:25.024 [INFO][5036] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30" Namespace="calico-system" Pod="csi-node-driver-dmc9g" WorkloadEndpoint="localhost-k8s-csi--node--driver--dmc9g-eth0" Jan 29 11:58:25.048381 containerd[1579]: 2025-01-29 11:58:25.024 [INFO][5036] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaf92ea63854 ContainerID="6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30" Namespace="calico-system" Pod="csi-node-driver-dmc9g" WorkloadEndpoint="localhost-k8s-csi--node--driver--dmc9g-eth0" Jan 29 11:58:25.048381 containerd[1579]: 2025-01-29 11:58:25.030 [INFO][5036] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30" Namespace="calico-system" Pod="csi-node-driver-dmc9g" WorkloadEndpoint="localhost-k8s-csi--node--driver--dmc9g-eth0" Jan 29 11:58:25.048381 containerd[1579]: 2025-01-29 11:58:25.031 [INFO][5036] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30" Namespace="calico-system" Pod="csi-node-driver-dmc9g" WorkloadEndpoint="localhost-k8s-csi--node--driver--dmc9g-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dmc9g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4b8569b7-17f3-41f5-af84-56efb8c2c37a", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30", Pod:"csi-node-driver-dmc9g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaf92ea63854", MAC:"7a:e2:e2:2d:8f:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:25.048381 containerd[1579]: 2025-01-29 11:58:25.044 [INFO][5036] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30" Namespace="calico-system" Pod="csi-node-driver-dmc9g" WorkloadEndpoint="localhost-k8s-csi--node--driver--dmc9g-eth0" Jan 29 11:58:25.078207 containerd[1579]: time="2025-01-29T11:58:25.077237639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:58:25.078207 containerd[1579]: time="2025-01-29T11:58:25.077882398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:58:25.078207 containerd[1579]: time="2025-01-29T11:58:25.077912204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:58:25.078207 containerd[1579]: time="2025-01-29T11:58:25.078084828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:58:25.107734 systemd-resolved[1455]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:58:25.122020 containerd[1579]: time="2025-01-29T11:58:25.121979751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dmc9g,Uid:4b8569b7-17f3-41f5-af84-56efb8c2c37a,Namespace:calico-system,Attempt:1,} returns sandbox id \"6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30\"" Jan 29 11:58:25.167590 systemd-networkd[1242]: calif43a043a0f2: Gained IPv6LL Jan 29 11:58:25.222584 kubelet[2744]: I0129 11:58:25.222490 2744 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:58:25.223221 kubelet[2744]: E0129 11:58:25.223200 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:58:26.131679 containerd[1579]: time="2025-01-29T11:58:26.131619855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:26.133776 containerd[1579]: time="2025-01-29T11:58:26.133735184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 29 11:58:26.135643 containerd[1579]: time="2025-01-29T11:58:26.135618988Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:26.138376 containerd[1579]: time="2025-01-29T11:58:26.138317972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:26.138957 containerd[1579]: time="2025-01-29T11:58:26.138916003Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.594104339s" Jan 29 11:58:26.138957 containerd[1579]: time="2025-01-29T11:58:26.138953824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 29 11:58:26.140250 containerd[1579]: time="2025-01-29T11:58:26.139973938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:58:26.150993 containerd[1579]: time="2025-01-29T11:58:26.150949569Z" level=info msg="CreateContainer within sandbox \"8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 11:58:26.171055 containerd[1579]: time="2025-01-29T11:58:26.170975470Z" level=info msg="CreateContainer within sandbox \"8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"817bc7f8ca272e294035aa9f50c4d2c65bb39d37dfe298d1a5e7e0dc28e0e11f\"" Jan 29 11:58:26.171637 containerd[1579]: time="2025-01-29T11:58:26.171590473Z" 
level=info msg="StartContainer for \"817bc7f8ca272e294035aa9f50c4d2c65bb39d37dfe298d1a5e7e0dc28e0e11f\"" Jan 29 11:58:26.227198 kubelet[2744]: E0129 11:58:26.227161 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:58:26.435195 containerd[1579]: time="2025-01-29T11:58:26.435048229Z" level=info msg="StartContainer for \"817bc7f8ca272e294035aa9f50c4d2c65bb39d37dfe298d1a5e7e0dc28e0e11f\" returns successfully" Jan 29 11:58:26.510891 systemd-networkd[1242]: caliaf92ea63854: Gained IPv6LL Jan 29 11:58:26.648589 containerd[1579]: time="2025-01-29T11:58:26.647633786Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:26.648804 containerd[1579]: time="2025-01-29T11:58:26.648719894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 11:58:26.650934 containerd[1579]: time="2025-01-29T11:58:26.650906396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 510.905207ms" Jan 29 11:58:26.651008 containerd[1579]: time="2025-01-29T11:58:26.650937394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 11:58:26.651950 containerd[1579]: time="2025-01-29T11:58:26.651915229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 11:58:26.654463 containerd[1579]: time="2025-01-29T11:58:26.654351439Z" level=info msg="CreateContainer within sandbox \"c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:58:26.686022 containerd[1579]: time="2025-01-29T11:58:26.685842598Z" level=info msg="CreateContainer within sandbox \"c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"00dbbfd60bfdec24775c36eb917c5c01a65a73dd2d98ec05b00cad0c28731308\"" Jan 29 11:58:26.686490 containerd[1579]: time="2025-01-29T11:58:26.686446141Z" level=info msg="StartContainer for \"00dbbfd60bfdec24775c36eb917c5c01a65a73dd2d98ec05b00cad0c28731308\"" Jan 29 11:58:26.779671 containerd[1579]: time="2025-01-29T11:58:26.779618280Z" level=info msg="StartContainer for \"00dbbfd60bfdec24775c36eb917c5c01a65a73dd2d98ec05b00cad0c28731308\" returns successfully" Jan 29 11:58:27.253398 kubelet[2744]: I0129 11:58:27.252337 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7c7dbf75df-clqxh" podStartSLOduration=25.575350924 podStartE2EDuration="30.252318824s" podCreationTimestamp="2025-01-29 11:57:57 +0000 UTC" firstStartedPulling="2025-01-29 11:58:21.462834536 +0000 UTC m=+45.792481819" lastFinishedPulling="2025-01-29 11:58:26.139802426 +0000 UTC m=+50.469449719" observedRunningTime="2025-01-29 11:58:27.25191202 +0000 UTC m=+51.581559303" watchObservedRunningTime="2025-01-29 11:58:27.252318824 +0000 UTC m=+51.581966107" Jan 29 11:58:27.310761 kubelet[2744]: 
I0129 11:58:27.310698 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77c5f74b87-wnvvw" podStartSLOduration=27.901756709 podStartE2EDuration="31.310676974s" podCreationTimestamp="2025-01-29 11:57:56 +0000 UTC" firstStartedPulling="2025-01-29 11:58:23.24278492 +0000 UTC m=+47.572432203" lastFinishedPulling="2025-01-29 11:58:26.651705175 +0000 UTC m=+50.981352468" observedRunningTime="2025-01-29 11:58:27.274712043 +0000 UTC m=+51.604359356" watchObservedRunningTime="2025-01-29 11:58:27.310676974 +0000 UTC m=+51.640324257" Jan 29 11:58:28.120650 containerd[1579]: time="2025-01-29T11:58:28.120541368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:28.121434 containerd[1579]: time="2025-01-29T11:58:28.121381505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 11:58:28.122736 containerd[1579]: time="2025-01-29T11:58:28.122690149Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:28.124973 containerd[1579]: time="2025-01-29T11:58:28.124928318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:28.125546 containerd[1579]: time="2025-01-29T11:58:28.125386727Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.473438937s" Jan 29 11:58:28.125546 containerd[1579]: time="2025-01-29T11:58:28.125419088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 11:58:28.127621 containerd[1579]: time="2025-01-29T11:58:28.127578639Z" level=info msg="CreateContainer within sandbox \"6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 11:58:28.145671 containerd[1579]: time="2025-01-29T11:58:28.145620025Z" level=info msg="CreateContainer within sandbox \"6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1a0aca4afc0dce0452f2a613a7dcbd825a40dcb37a2020b8c2a2807025b74451\"" Jan 29 11:58:28.146228 containerd[1579]: time="2025-01-29T11:58:28.146194282Z" level=info msg="StartContainer for \"1a0aca4afc0dce0452f2a613a7dcbd825a40dcb37a2020b8c2a2807025b74451\"" Jan 29 11:58:28.212256 containerd[1579]: time="2025-01-29T11:58:28.212219462Z" level=info msg="StartContainer for \"1a0aca4afc0dce0452f2a613a7dcbd825a40dcb37a2020b8c2a2807025b74451\" returns successfully" Jan 29 11:58:28.213969 containerd[1579]: time="2025-01-29T11:58:28.213934017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 11:58:28.255498 kubelet[2744]: I0129 11:58:28.255433 2744 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:58:29.565351 containerd[1579]: 
time="2025-01-29T11:58:29.565295344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:29.566112 containerd[1579]: time="2025-01-29T11:58:29.566079075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 11:58:29.567234 containerd[1579]: time="2025-01-29T11:58:29.567205398Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:29.569403 containerd[1579]: time="2025-01-29T11:58:29.569372813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:58:29.570020 containerd[1579]: time="2025-01-29T11:58:29.569999128Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.356019865s" Jan 29 11:58:29.570080 containerd[1579]: time="2025-01-29T11:58:29.570024856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 11:58:29.571716 containerd[1579]: time="2025-01-29T11:58:29.571685892Z" level=info msg="CreateContainer within sandbox \"6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 11:58:29.586747 containerd[1579]: time="2025-01-29T11:58:29.586698815Z" level=info msg="CreateContainer within sandbox \"6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"def5a01b296e5eade783327b3f382c5ddc15947d7b4df38703d970eabc7daaf4\"" Jan 29 11:58:29.587734 containerd[1579]: time="2025-01-29T11:58:29.587370726Z" level=info msg="StartContainer for \"def5a01b296e5eade783327b3f382c5ddc15947d7b4df38703d970eabc7daaf4\"" Jan 29 11:58:29.673501 containerd[1579]: time="2025-01-29T11:58:29.673457183Z" level=info msg="StartContainer for \"def5a01b296e5eade783327b3f382c5ddc15947d7b4df38703d970eabc7daaf4\" returns successfully" Jan 29 11:58:29.808004 systemd[1]: Started sshd@13-10.0.0.92:22-10.0.0.1:34712.service - OpenSSH per-connection server daemon (10.0.0.1:34712). Jan 29 11:58:29.850096 sshd[5296]: Accepted publickey for core from 10.0.0.1 port 34712 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:58:29.852096 sshd[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:58:29.856960 systemd-logind[1565]: New session 14 of user core. Jan 29 11:58:29.868132 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 11:58:29.995229 sshd[5296]: pam_unix(sshd:session): session closed for user core Jan 29 11:58:29.999700 systemd[1]: sshd@13-10.0.0.92:22-10.0.0.1:34712.service: Deactivated successfully. Jan 29 11:58:30.002755 systemd-logind[1565]: Session 14 logged out. 
Waiting for processes to exit. Jan 29 11:58:30.002839 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:58:30.004178 systemd-logind[1565]: Removed session 14. Jan 29 11:58:30.274030 kubelet[2744]: I0129 11:58:30.273044 2744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dmc9g" podStartSLOduration=28.825989468 podStartE2EDuration="33.273028586s" podCreationTimestamp="2025-01-29 11:57:57 +0000 UTC" firstStartedPulling="2025-01-29 11:58:25.123654944 +0000 UTC m=+49.453302227" lastFinishedPulling="2025-01-29 11:58:29.570694062 +0000 UTC m=+53.900341345" observedRunningTime="2025-01-29 11:58:30.272867434 +0000 UTC m=+54.602514717" watchObservedRunningTime="2025-01-29 11:58:30.273028586 +0000 UTC m=+54.602675869" Jan 29 11:58:30.481724 kubelet[2744]: I0129 11:58:30.481691 2744 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 11:58:30.481724 kubelet[2744]: I0129 11:58:30.481725 2744 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 11:58:35.005925 systemd[1]: Started sshd@14-10.0.0.92:22-10.0.0.1:44712.service - OpenSSH per-connection server daemon (10.0.0.1:44712). Jan 29 11:58:35.040041 sshd[5315]: Accepted publickey for core from 10.0.0.1 port 44712 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 11:58:35.042101 sshd[5315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:58:35.046231 systemd-logind[1565]: New session 15 of user core. Jan 29 11:58:35.056022 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 11:58:35.170068 sshd[5315]: pam_unix(sshd:session): session closed for user core Jan 29 11:58:35.174275 systemd[1]: sshd@14-10.0.0.92:22-10.0.0.1:44712.service: Deactivated successfully. Jan 29 11:58:35.176496 systemd-logind[1565]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:58:35.176594 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:58:35.177938 systemd-logind[1565]: Removed session 15. Jan 29 11:58:35.765192 containerd[1579]: time="2025-01-29T11:58:35.765150642Z" level=info msg="StopPodSandbox for \"7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428\"" Jan 29 11:58:35.830850 containerd[1579]: 2025-01-29 11:58:35.801 [WARNING][5352] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c5f74b87--wnvvw-eth0", GenerateName:"calico-apiserver-77c5f74b87-", Namespace:"calico-apiserver", SelfLink:"", UID:"26202b2c-e1f1-4083-9026-183a5e92161f", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c5f74b87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc", Pod:"calico-apiserver-77c5f74b87-wnvvw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif43a043a0f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:35.830850 containerd[1579]: 2025-01-29 11:58:35.801 [INFO][5352] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" Jan 29 11:58:35.830850 containerd[1579]: 2025-01-29 11:58:35.801 [INFO][5352] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" iface="eth0" netns="" Jan 29 11:58:35.830850 containerd[1579]: 2025-01-29 11:58:35.801 [INFO][5352] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" Jan 29 11:58:35.830850 containerd[1579]: 2025-01-29 11:58:35.801 [INFO][5352] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" Jan 29 11:58:35.830850 containerd[1579]: 2025-01-29 11:58:35.820 [INFO][5363] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" HandleID="k8s-pod-network.7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" Workload="localhost-k8s-calico--apiserver--77c5f74b87--wnvvw-eth0" Jan 29 11:58:35.830850 containerd[1579]: 2025-01-29 11:58:35.820 [INFO][5363] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:35.830850 containerd[1579]: 2025-01-29 11:58:35.820 [INFO][5363] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:58:35.830850 containerd[1579]: 2025-01-29 11:58:35.825 [WARNING][5363] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" HandleID="k8s-pod-network.7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" Workload="localhost-k8s-calico--apiserver--77c5f74b87--wnvvw-eth0" Jan 29 11:58:35.830850 containerd[1579]: 2025-01-29 11:58:35.825 [INFO][5363] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" HandleID="k8s-pod-network.7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" Workload="localhost-k8s-calico--apiserver--77c5f74b87--wnvvw-eth0" Jan 29 11:58:35.830850 containerd[1579]: 2025-01-29 11:58:35.826 [INFO][5363] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:58:35.830850 containerd[1579]: 2025-01-29 11:58:35.828 [INFO][5352] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" Jan 29 11:58:35.831494 containerd[1579]: time="2025-01-29T11:58:35.830885007Z" level=info msg="TearDown network for sandbox \"7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428\" successfully" Jan 29 11:58:35.831494 containerd[1579]: time="2025-01-29T11:58:35.830911098Z" level=info msg="StopPodSandbox for \"7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428\" returns successfully" Jan 29 11:58:35.831494 containerd[1579]: time="2025-01-29T11:58:35.831486674Z" level=info msg="RemovePodSandbox for \"7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428\"" Jan 29 11:58:35.833875 containerd[1579]: time="2025-01-29T11:58:35.833828976Z" level=info msg="Forcibly stopping sandbox \"7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428\"" Jan 29 11:58:35.894789 containerd[1579]: 2025-01-29 11:58:35.865 [WARNING][5386] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c5f74b87--wnvvw-eth0", GenerateName:"calico-apiserver-77c5f74b87-", Namespace:"calico-apiserver", SelfLink:"", UID:"26202b2c-e1f1-4083-9026-183a5e92161f", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c5f74b87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c46f949586976ec9df49979c1dd8b296b80b59777dc7c4cc96a71c698a7774cc", Pod:"calico-apiserver-77c5f74b87-wnvvw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif43a043a0f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:35.894789 containerd[1579]: 2025-01-29 11:58:35.865 [INFO][5386] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" Jan 29 11:58:35.894789 containerd[1579]: 2025-01-29 11:58:35.865 [INFO][5386] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" iface="eth0" netns="" Jan 29 11:58:35.894789 containerd[1579]: 2025-01-29 11:58:35.865 [INFO][5386] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" Jan 29 11:58:35.894789 containerd[1579]: 2025-01-29 11:58:35.865 [INFO][5386] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" Jan 29 11:58:35.894789 containerd[1579]: 2025-01-29 11:58:35.884 [INFO][5393] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" HandleID="k8s-pod-network.7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" Workload="localhost-k8s-calico--apiserver--77c5f74b87--wnvvw-eth0" Jan 29 11:58:35.894789 containerd[1579]: 2025-01-29 11:58:35.884 [INFO][5393] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:35.894789 containerd[1579]: 2025-01-29 11:58:35.884 [INFO][5393] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:58:35.894789 containerd[1579]: 2025-01-29 11:58:35.889 [WARNING][5393] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" HandleID="k8s-pod-network.7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" Workload="localhost-k8s-calico--apiserver--77c5f74b87--wnvvw-eth0" Jan 29 11:58:35.894789 containerd[1579]: 2025-01-29 11:58:35.889 [INFO][5393] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" HandleID="k8s-pod-network.7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" Workload="localhost-k8s-calico--apiserver--77c5f74b87--wnvvw-eth0" Jan 29 11:58:35.894789 containerd[1579]: 2025-01-29 11:58:35.890 [INFO][5393] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:58:35.894789 containerd[1579]: 2025-01-29 11:58:35.892 [INFO][5386] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428" Jan 29 11:58:35.895212 containerd[1579]: time="2025-01-29T11:58:35.894852767Z" level=info msg="TearDown network for sandbox \"7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428\" successfully" Jan 29 11:58:35.903208 containerd[1579]: time="2025-01-29T11:58:35.903171838Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:58:35.903274 containerd[1579]: time="2025-01-29T11:58:35.903239027Z" level=info msg="RemovePodSandbox \"7b99f37d7bd97963db625eea920b5996bb495ba7e2595ca425db2737cc6a0428\" returns successfully" Jan 29 11:58:35.903845 containerd[1579]: time="2025-01-29T11:58:35.903821106Z" level=info msg="StopPodSandbox for \"e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46\"" Jan 29 11:58:35.964863 containerd[1579]: 2025-01-29 11:58:35.935 [WARNING][5415] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c5f74b87--vk8s6-eth0", GenerateName:"calico-apiserver-77c5f74b87-", Namespace:"calico-apiserver", SelfLink:"", UID:"442c7a1a-1bf4-4799-9255-bae8a191ac48", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c5f74b87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22", Pod:"calico-apiserver-77c5f74b87-vk8s6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliafb557aa3fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:35.964863 containerd[1579]: 2025-01-29 11:58:35.935 [INFO][5415] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" Jan 29 11:58:35.964863 containerd[1579]: 2025-01-29 11:58:35.935 [INFO][5415] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" iface="eth0" netns="" Jan 29 11:58:35.964863 containerd[1579]: 2025-01-29 11:58:35.935 [INFO][5415] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" Jan 29 11:58:35.964863 containerd[1579]: 2025-01-29 11:58:35.935 [INFO][5415] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" Jan 29 11:58:35.964863 containerd[1579]: 2025-01-29 11:58:35.954 [INFO][5422] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" HandleID="k8s-pod-network.e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" Workload="localhost-k8s-calico--apiserver--77c5f74b87--vk8s6-eth0" Jan 29 11:58:35.964863 containerd[1579]: 2025-01-29 11:58:35.955 [INFO][5422] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:35.964863 containerd[1579]: 2025-01-29 11:58:35.955 [INFO][5422] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:58:35.964863 containerd[1579]: 2025-01-29 11:58:35.959 [WARNING][5422] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" HandleID="k8s-pod-network.e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" Workload="localhost-k8s-calico--apiserver--77c5f74b87--vk8s6-eth0" Jan 29 11:58:35.964863 containerd[1579]: 2025-01-29 11:58:35.959 [INFO][5422] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" HandleID="k8s-pod-network.e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" Workload="localhost-k8s-calico--apiserver--77c5f74b87--vk8s6-eth0" Jan 29 11:58:35.964863 containerd[1579]: 2025-01-29 11:58:35.960 [INFO][5422] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:58:35.964863 containerd[1579]: 2025-01-29 11:58:35.962 [INFO][5415] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" Jan 29 11:58:35.965290 containerd[1579]: time="2025-01-29T11:58:35.964919541Z" level=info msg="TearDown network for sandbox \"e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46\" successfully" Jan 29 11:58:35.965290 containerd[1579]: time="2025-01-29T11:58:35.964951852Z" level=info msg="StopPodSandbox for \"e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46\" returns successfully" Jan 29 11:58:35.965487 containerd[1579]: time="2025-01-29T11:58:35.965461612Z" level=info msg="RemovePodSandbox for \"e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46\"" Jan 29 11:58:35.965522 containerd[1579]: time="2025-01-29T11:58:35.965496690Z" level=info msg="Forcibly stopping sandbox \"e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46\"" Jan 29 11:58:36.024284 containerd[1579]: 2025-01-29 11:58:35.996 [WARNING][5444] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77c5f74b87--vk8s6-eth0", GenerateName:"calico-apiserver-77c5f74b87-", Namespace:"calico-apiserver", SelfLink:"", UID:"442c7a1a-1bf4-4799-9255-bae8a191ac48", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c5f74b87", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1ab1a45b35459ad0496983d9624b19e49c1dc228f814e59372f699f06ea0ea22", Pod:"calico-apiserver-77c5f74b87-vk8s6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliafb557aa3fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:36.024284 containerd[1579]: 2025-01-29 11:58:35.996 [INFO][5444] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" Jan 29 11:58:36.024284 containerd[1579]: 2025-01-29 11:58:35.996 [INFO][5444] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" iface="eth0" netns="" Jan 29 11:58:36.024284 containerd[1579]: 2025-01-29 11:58:35.996 [INFO][5444] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" Jan 29 11:58:36.024284 containerd[1579]: 2025-01-29 11:58:35.996 [INFO][5444] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" Jan 29 11:58:36.024284 containerd[1579]: 2025-01-29 11:58:36.014 [INFO][5451] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" HandleID="k8s-pod-network.e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" Workload="localhost-k8s-calico--apiserver--77c5f74b87--vk8s6-eth0" Jan 29 11:58:36.024284 containerd[1579]: 2025-01-29 11:58:36.014 [INFO][5451] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:36.024284 containerd[1579]: 2025-01-29 11:58:36.014 [INFO][5451] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:58:36.024284 containerd[1579]: 2025-01-29 11:58:36.018 [WARNING][5451] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" HandleID="k8s-pod-network.e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" Workload="localhost-k8s-calico--apiserver--77c5f74b87--vk8s6-eth0" Jan 29 11:58:36.024284 containerd[1579]: 2025-01-29 11:58:36.018 [INFO][5451] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" HandleID="k8s-pod-network.e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" Workload="localhost-k8s-calico--apiserver--77c5f74b87--vk8s6-eth0" Jan 29 11:58:36.024284 containerd[1579]: 2025-01-29 11:58:36.020 [INFO][5451] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:58:36.024284 containerd[1579]: 2025-01-29 11:58:36.022 [INFO][5444] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46" Jan 29 11:58:36.026302 containerd[1579]: time="2025-01-29T11:58:36.024764344Z" level=info msg="TearDown network for sandbox \"e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46\" successfully" Jan 29 11:58:36.028588 containerd[1579]: time="2025-01-29T11:58:36.028561649Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:58:36.028701 containerd[1579]: time="2025-01-29T11:58:36.028648967Z" level=info msg="RemovePodSandbox \"e6788c95e6d525224ebddaf02653c6eac3ea869b68828a60e1876991d274cf46\" returns successfully" Jan 29 11:58:36.029118 containerd[1579]: time="2025-01-29T11:58:36.029082159Z" level=info msg="StopPodSandbox for \"dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816\"" Jan 29 11:58:36.086586 containerd[1579]: 2025-01-29 11:58:36.057 [WARNING][5473] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c7dbf75df--clqxh-eth0", GenerateName:"calico-kube-controllers-7c7dbf75df-", Namespace:"calico-system", SelfLink:"", UID:"eaeb6ba2-4292-4dd6-9e36-a5452f96f08f", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c7dbf75df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b", Pod:"calico-kube-controllers-7c7dbf75df-clqxh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicce6e605487", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:36.086586 containerd[1579]: 2025-01-29 11:58:36.058 [INFO][5473] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" Jan 29 11:58:36.086586 containerd[1579]: 2025-01-29 11:58:36.058 [INFO][5473] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" iface="eth0" netns="" Jan 29 11:58:36.086586 containerd[1579]: 2025-01-29 11:58:36.058 [INFO][5473] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" Jan 29 11:58:36.086586 containerd[1579]: 2025-01-29 11:58:36.058 [INFO][5473] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" Jan 29 11:58:36.086586 containerd[1579]: 2025-01-29 11:58:36.076 [INFO][5480] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" HandleID="k8s-pod-network.dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" Workload="localhost-k8s-calico--kube--controllers--7c7dbf75df--clqxh-eth0" Jan 29 11:58:36.086586 containerd[1579]: 2025-01-29 11:58:36.076 [INFO][5480] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:36.086586 containerd[1579]: 2025-01-29 11:58:36.076 [INFO][5480] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:58:36.086586 containerd[1579]: 2025-01-29 11:58:36.081 [WARNING][5480] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" HandleID="k8s-pod-network.dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" Workload="localhost-k8s-calico--kube--controllers--7c7dbf75df--clqxh-eth0" Jan 29 11:58:36.086586 containerd[1579]: 2025-01-29 11:58:36.081 [INFO][5480] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" HandleID="k8s-pod-network.dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" Workload="localhost-k8s-calico--kube--controllers--7c7dbf75df--clqxh-eth0" Jan 29 11:58:36.086586 containerd[1579]: 2025-01-29 11:58:36.082 [INFO][5480] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:58:36.086586 containerd[1579]: 2025-01-29 11:58:36.084 [INFO][5473] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" Jan 29 11:58:36.086994 containerd[1579]: time="2025-01-29T11:58:36.086648660Z" level=info msg="TearDown network for sandbox \"dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816\" successfully" Jan 29 11:58:36.086994 containerd[1579]: time="2025-01-29T11:58:36.086676954Z" level=info msg="StopPodSandbox for \"dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816\" returns successfully" Jan 29 11:58:36.087192 containerd[1579]: time="2025-01-29T11:58:36.087170432Z" level=info msg="RemovePodSandbox for \"dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816\"" Jan 29 11:58:36.087247 containerd[1579]: time="2025-01-29T11:58:36.087197093Z" level=info msg="Forcibly stopping sandbox \"dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816\"" Jan 29 11:58:36.143884 containerd[1579]: 2025-01-29 11:58:36.116 [WARNING][5502] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c7dbf75df--clqxh-eth0", GenerateName:"calico-kube-controllers-7c7dbf75df-", Namespace:"calico-system", SelfLink:"", UID:"eaeb6ba2-4292-4dd6-9e36-a5452f96f08f", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c7dbf75df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8ba7fdf367cda23eda0a7c26a739f08efbc51395c9653f7941dc2f246ceb1e8b", Pod:"calico-kube-controllers-7c7dbf75df-clqxh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicce6e605487", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:36.143884 containerd[1579]: 2025-01-29 11:58:36.116 [INFO][5502] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" Jan 29 11:58:36.143884 containerd[1579]: 2025-01-29 11:58:36.116 [INFO][5502] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" iface="eth0" netns="" Jan 29 11:58:36.143884 containerd[1579]: 2025-01-29 11:58:36.117 [INFO][5502] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" Jan 29 11:58:36.143884 containerd[1579]: 2025-01-29 11:58:36.117 [INFO][5502] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" Jan 29 11:58:36.143884 containerd[1579]: 2025-01-29 11:58:36.134 [INFO][5509] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" HandleID="k8s-pod-network.dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" Workload="localhost-k8s-calico--kube--controllers--7c7dbf75df--clqxh-eth0" Jan 29 11:58:36.143884 containerd[1579]: 2025-01-29 11:58:36.134 [INFO][5509] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:36.143884 containerd[1579]: 2025-01-29 11:58:36.134 [INFO][5509] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:58:36.143884 containerd[1579]: 2025-01-29 11:58:36.138 [WARNING][5509] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" HandleID="k8s-pod-network.dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" Workload="localhost-k8s-calico--kube--controllers--7c7dbf75df--clqxh-eth0" Jan 29 11:58:36.143884 containerd[1579]: 2025-01-29 11:58:36.138 [INFO][5509] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" HandleID="k8s-pod-network.dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" Workload="localhost-k8s-calico--kube--controllers--7c7dbf75df--clqxh-eth0" Jan 29 11:58:36.143884 containerd[1579]: 2025-01-29 11:58:36.139 [INFO][5509] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:58:36.143884 containerd[1579]: 2025-01-29 11:58:36.141 [INFO][5502] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816" Jan 29 11:58:36.144305 containerd[1579]: time="2025-01-29T11:58:36.143925314Z" level=info msg="TearDown network for sandbox \"dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816\" successfully" Jan 29 11:58:36.153680 containerd[1579]: time="2025-01-29T11:58:36.153638122Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:58:36.153745 containerd[1579]: time="2025-01-29T11:58:36.153687957Z" level=info msg="RemovePodSandbox \"dc8c78eeea1f8b5fba1f2c7d1c423cf55d290e31de5c01c97f33dc1f8c35d816\" returns successfully" Jan 29 11:58:36.154165 containerd[1579]: time="2025-01-29T11:58:36.154134565Z" level=info msg="StopPodSandbox for \"93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a\"" Jan 29 11:58:36.210855 containerd[1579]: 2025-01-29 11:58:36.183 [WARNING][5531] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dmc9g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4b8569b7-17f3-41f5-af84-56efb8c2c37a", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30", Pod:"csi-node-driver-dmc9g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaf92ea63854", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:36.210855 containerd[1579]: 2025-01-29 11:58:36.183 [INFO][5531] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" Jan 29 11:58:36.210855 containerd[1579]: 2025-01-29 11:58:36.183 [INFO][5531] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" iface="eth0" netns="" Jan 29 11:58:36.210855 containerd[1579]: 2025-01-29 11:58:36.183 [INFO][5531] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" Jan 29 11:58:36.210855 containerd[1579]: 2025-01-29 11:58:36.183 [INFO][5531] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" Jan 29 11:58:36.210855 containerd[1579]: 2025-01-29 11:58:36.201 [INFO][5538] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" HandleID="k8s-pod-network.93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" Workload="localhost-k8s-csi--node--driver--dmc9g-eth0" Jan 29 11:58:36.210855 containerd[1579]: 2025-01-29 11:58:36.201 [INFO][5538] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:36.210855 containerd[1579]: 2025-01-29 11:58:36.201 [INFO][5538] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:58:36.210855 containerd[1579]: 2025-01-29 11:58:36.205 [WARNING][5538] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" HandleID="k8s-pod-network.93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" Workload="localhost-k8s-csi--node--driver--dmc9g-eth0" Jan 29 11:58:36.210855 containerd[1579]: 2025-01-29 11:58:36.205 [INFO][5538] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" HandleID="k8s-pod-network.93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" Workload="localhost-k8s-csi--node--driver--dmc9g-eth0" Jan 29 11:58:36.210855 containerd[1579]: 2025-01-29 11:58:36.206 [INFO][5538] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:58:36.210855 containerd[1579]: 2025-01-29 11:58:36.208 [INFO][5531] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" Jan 29 11:58:36.211387 containerd[1579]: time="2025-01-29T11:58:36.210902702Z" level=info msg="TearDown network for sandbox \"93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a\" successfully" Jan 29 11:58:36.211387 containerd[1579]: time="2025-01-29T11:58:36.210934083Z" level=info msg="StopPodSandbox for \"93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a\" returns successfully" Jan 29 11:58:36.211462 containerd[1579]: time="2025-01-29T11:58:36.211422220Z" level=info msg="RemovePodSandbox for \"93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a\"" Jan 29 11:58:36.211462 containerd[1579]: time="2025-01-29T11:58:36.211452068Z" level=info msg="Forcibly stopping sandbox \"93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a\"" Jan 29 11:58:36.277860 containerd[1579]: 2025-01-29 11:58:36.248 [WARNING][5560] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dmc9g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4b8569b7-17f3-41f5-af84-56efb8c2c37a", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6116e907f46d4bdb9d41ac593ac31d6be8bca46896a2ed166948b6e5ff795b30", Pod:"csi-node-driver-dmc9g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaf92ea63854", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:36.277860 containerd[1579]: 2025-01-29 11:58:36.248 [INFO][5560] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" Jan 29 11:58:36.277860 containerd[1579]: 2025-01-29 11:58:36.248 [INFO][5560] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" iface="eth0" netns="" Jan 29 11:58:36.277860 containerd[1579]: 2025-01-29 11:58:36.248 [INFO][5560] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" Jan 29 11:58:36.277860 containerd[1579]: 2025-01-29 11:58:36.248 [INFO][5560] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" Jan 29 11:58:36.277860 containerd[1579]: 2025-01-29 11:58:36.266 [INFO][5567] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" HandleID="k8s-pod-network.93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" Workload="localhost-k8s-csi--node--driver--dmc9g-eth0" Jan 29 11:58:36.277860 containerd[1579]: 2025-01-29 11:58:36.266 [INFO][5567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:36.277860 containerd[1579]: 2025-01-29 11:58:36.266 [INFO][5567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:58:36.277860 containerd[1579]: 2025-01-29 11:58:36.272 [WARNING][5567] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" HandleID="k8s-pod-network.93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" Workload="localhost-k8s-csi--node--driver--dmc9g-eth0" Jan 29 11:58:36.277860 containerd[1579]: 2025-01-29 11:58:36.272 [INFO][5567] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" HandleID="k8s-pod-network.93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" Workload="localhost-k8s-csi--node--driver--dmc9g-eth0" Jan 29 11:58:36.277860 containerd[1579]: 2025-01-29 11:58:36.273 [INFO][5567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:58:36.277860 containerd[1579]: 2025-01-29 11:58:36.275 [INFO][5560] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a" Jan 29 11:58:36.277860 containerd[1579]: time="2025-01-29T11:58:36.277811709Z" level=info msg="TearDown network for sandbox \"93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a\" successfully" Jan 29 11:58:36.281631 containerd[1579]: time="2025-01-29T11:58:36.281597602Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:58:36.281680 containerd[1579]: time="2025-01-29T11:58:36.281645093Z" level=info msg="RemovePodSandbox \"93da5fcd7c1f6e5632654c650711ee4ef58a05a511aeeff4f9272ef0a7d3eb4a\" returns successfully" Jan 29 11:58:36.282222 containerd[1579]: time="2025-01-29T11:58:36.281953615Z" level=info msg="StopPodSandbox for \"4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf\"" Jan 29 11:58:36.338735 containerd[1579]: 2025-01-29 11:58:36.311 [WARNING][5589] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--29qqz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"562f7dc1-fccc-4836-832d-33f596ec71b8", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa", Pod:"coredns-7db6d8ff4d-29qqz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8b8a96d4ff8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:36.338735 containerd[1579]: 2025-01-29 11:58:36.311 [INFO][5589] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" Jan 29 11:58:36.338735 containerd[1579]: 2025-01-29 11:58:36.311 [INFO][5589] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" iface="eth0" netns="" Jan 29 11:58:36.338735 containerd[1579]: 2025-01-29 11:58:36.311 [INFO][5589] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" Jan 29 11:58:36.338735 containerd[1579]: 2025-01-29 11:58:36.311 [INFO][5589] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" Jan 29 11:58:36.338735 containerd[1579]: 2025-01-29 11:58:36.328 [INFO][5597] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" HandleID="k8s-pod-network.4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" Workload="localhost-k8s-coredns--7db6d8ff4d--29qqz-eth0" Jan 29 11:58:36.338735 containerd[1579]: 2025-01-29 11:58:36.328 [INFO][5597] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:36.338735 containerd[1579]: 2025-01-29 11:58:36.328 [INFO][5597] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:58:36.338735 containerd[1579]: 2025-01-29 11:58:36.333 [WARNING][5597] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" HandleID="k8s-pod-network.4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" Workload="localhost-k8s-coredns--7db6d8ff4d--29qqz-eth0" Jan 29 11:58:36.338735 containerd[1579]: 2025-01-29 11:58:36.333 [INFO][5597] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" HandleID="k8s-pod-network.4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" Workload="localhost-k8s-coredns--7db6d8ff4d--29qqz-eth0" Jan 29 11:58:36.338735 containerd[1579]: 2025-01-29 11:58:36.334 [INFO][5597] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:58:36.338735 containerd[1579]: 2025-01-29 11:58:36.336 [INFO][5589] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" Jan 29 11:58:36.339183 containerd[1579]: time="2025-01-29T11:58:36.338785345Z" level=info msg="TearDown network for sandbox \"4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf\" successfully" Jan 29 11:58:36.339183 containerd[1579]: time="2025-01-29T11:58:36.338812226Z" level=info msg="StopPodSandbox for \"4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf\" returns successfully" Jan 29 11:58:36.339331 containerd[1579]: time="2025-01-29T11:58:36.339302298Z" level=info msg="RemovePodSandbox for \"4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf\"" Jan 29 11:58:36.339365 containerd[1579]: time="2025-01-29T11:58:36.339335241Z" level=info msg="Forcibly stopping sandbox \"4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf\"" Jan 29 11:58:36.394193 containerd[1579]: 2025-01-29 11:58:36.367 [WARNING][5620] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--29qqz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"562f7dc1-fccc-4836-832d-33f596ec71b8", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c5df0942461fdda2f3dc370a91340719580e5c35329499f5ee2acccb39aaf2aa", Pod:"coredns-7db6d8ff4d-29qqz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8b8a96d4ff8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:36.394193 containerd[1579]: 2025-01-29 11:58:36.367 [INFO][5620] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" Jan 29 11:58:36.394193 containerd[1579]: 2025-01-29 11:58:36.367 [INFO][5620] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" iface="eth0" netns="" Jan 29 11:58:36.394193 containerd[1579]: 2025-01-29 11:58:36.367 [INFO][5620] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" Jan 29 11:58:36.394193 containerd[1579]: 2025-01-29 11:58:36.367 [INFO][5620] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" Jan 29 11:58:36.394193 containerd[1579]: 2025-01-29 11:58:36.384 [INFO][5628] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" HandleID="k8s-pod-network.4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" Workload="localhost-k8s-coredns--7db6d8ff4d--29qqz-eth0" Jan 29 11:58:36.394193 containerd[1579]: 2025-01-29 11:58:36.384 [INFO][5628] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:36.394193 containerd[1579]: 2025-01-29 11:58:36.384 [INFO][5628] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:58:36.394193 containerd[1579]: 2025-01-29 11:58:36.388 [WARNING][5628] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" HandleID="k8s-pod-network.4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" Workload="localhost-k8s-coredns--7db6d8ff4d--29qqz-eth0" Jan 29 11:58:36.394193 containerd[1579]: 2025-01-29 11:58:36.388 [INFO][5628] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" HandleID="k8s-pod-network.4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" Workload="localhost-k8s-coredns--7db6d8ff4d--29qqz-eth0" Jan 29 11:58:36.394193 containerd[1579]: 2025-01-29 11:58:36.390 [INFO][5628] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:58:36.394193 containerd[1579]: 2025-01-29 11:58:36.392 [INFO][5620] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf" Jan 29 11:58:36.394594 containerd[1579]: time="2025-01-29T11:58:36.394238066Z" level=info msg="TearDown network for sandbox \"4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf\" successfully" Jan 29 11:58:36.398005 containerd[1579]: time="2025-01-29T11:58:36.397955285Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:58:36.398042 containerd[1579]: time="2025-01-29T11:58:36.398009440Z" level=info msg="RemovePodSandbox \"4fe8cd6c531ece4b2fd9e885408a0115323c8fa7de49d69ff773e0faee75a2cf\" returns successfully" Jan 29 11:58:36.398447 containerd[1579]: time="2025-01-29T11:58:36.398425228Z" level=info msg="StopPodSandbox for \"279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd\"" Jan 29 11:58:36.459061 containerd[1579]: 2025-01-29 11:58:36.429 [WARNING][5651] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2baaae63-eb27-4f62-99d9-91a996a907b5", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5", Pod:"coredns-7db6d8ff4d-sp9qb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali47621202af8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:36.459061 containerd[1579]: 2025-01-29 11:58:36.429 [INFO][5651] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" Jan 29 11:58:36.459061 containerd[1579]: 2025-01-29 11:58:36.429 [INFO][5651] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" iface="eth0" netns="" Jan 29 11:58:36.459061 containerd[1579]: 2025-01-29 11:58:36.429 [INFO][5651] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" Jan 29 11:58:36.459061 containerd[1579]: 2025-01-29 11:58:36.429 [INFO][5651] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" Jan 29 11:58:36.459061 containerd[1579]: 2025-01-29 11:58:36.448 [INFO][5659] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" HandleID="k8s-pod-network.279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" Workload="localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0" Jan 29 11:58:36.459061 containerd[1579]: 2025-01-29 11:58:36.448 [INFO][5659] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:36.459061 containerd[1579]: 2025-01-29 11:58:36.448 [INFO][5659] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:58:36.459061 containerd[1579]: 2025-01-29 11:58:36.453 [WARNING][5659] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" HandleID="k8s-pod-network.279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" Workload="localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0" Jan 29 11:58:36.459061 containerd[1579]: 2025-01-29 11:58:36.453 [INFO][5659] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" HandleID="k8s-pod-network.279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" Workload="localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0" Jan 29 11:58:36.459061 containerd[1579]: 2025-01-29 11:58:36.454 [INFO][5659] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:58:36.459061 containerd[1579]: 2025-01-29 11:58:36.456 [INFO][5651] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" Jan 29 11:58:36.459595 containerd[1579]: time="2025-01-29T11:58:36.459102405Z" level=info msg="TearDown network for sandbox \"279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd\" successfully" Jan 29 11:58:36.459595 containerd[1579]: time="2025-01-29T11:58:36.459133734Z" level=info msg="StopPodSandbox for \"279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd\" returns successfully" Jan 29 11:58:36.459595 containerd[1579]: time="2025-01-29T11:58:36.459595070Z" level=info msg="RemovePodSandbox for \"279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd\"" Jan 29 11:58:36.459595 containerd[1579]: time="2025-01-29T11:58:36.459655827Z" level=info msg="Forcibly stopping sandbox \"279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd\"" Jan 29 11:58:36.518451 containerd[1579]: 2025-01-29 11:58:36.490 [WARNING][5681] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2baaae63-eb27-4f62-99d9-91a996a907b5", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"83fd9f4deb7303f611b0d83bd9c2a208b36fd8ed0322365db2070bfbda0926c5", Pod:"coredns-7db6d8ff4d-sp9qb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali47621202af8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:58:36.518451 containerd[1579]: 2025-01-29 11:58:36.490 [INFO][5681] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" Jan 29 11:58:36.518451 containerd[1579]: 2025-01-29 11:58:36.490 [INFO][5681] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" iface="eth0" netns="" Jan 29 11:58:36.518451 containerd[1579]: 2025-01-29 11:58:36.490 [INFO][5681] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" Jan 29 11:58:36.518451 containerd[1579]: 2025-01-29 11:58:36.490 [INFO][5681] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" Jan 29 11:58:36.518451 containerd[1579]: 2025-01-29 11:58:36.508 [INFO][5689] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" HandleID="k8s-pod-network.279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" Workload="localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0" Jan 29 11:58:36.518451 containerd[1579]: 2025-01-29 11:58:36.508 [INFO][5689] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:58:36.518451 containerd[1579]: 2025-01-29 11:58:36.508 [INFO][5689] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:58:36.518451 containerd[1579]: 2025-01-29 11:58:36.513 [WARNING][5689] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" HandleID="k8s-pod-network.279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" Workload="localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0"
Jan 29 11:58:36.518451 containerd[1579]: 2025-01-29 11:58:36.513 [INFO][5689] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" HandleID="k8s-pod-network.279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd" Workload="localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0"
Jan 29 11:58:36.518451 containerd[1579]: 2025-01-29 11:58:36.514 [INFO][5689] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 11:58:36.518451 containerd[1579]: 2025-01-29 11:58:36.516 [INFO][5681] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd"
Jan 29 11:58:36.519317 containerd[1579]: time="2025-01-29T11:58:36.518512827Z" level=info msg="TearDown network for sandbox \"279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd\" successfully"
Jan 29 11:58:36.522977 containerd[1579]: time="2025-01-29T11:58:36.522949039Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:58:36.523037 containerd[1579]: time="2025-01-29T11:58:36.522998504Z" level=info msg="RemovePodSandbox \"279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd\" returns successfully"
Jan 29 11:58:40.189997 systemd[1]: Started sshd@15-10.0.0.92:22-10.0.0.1:44716.service - OpenSSH per-connection server daemon (10.0.0.1:44716).
Jan 29 11:58:40.228816 sshd[5718]: Accepted publickey for core from 10.0.0.1 port 44716 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0
Jan 29 11:58:40.230838 sshd[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:58:40.235667 systemd-logind[1565]: New session 16 of user core.
Jan 29 11:58:40.241878 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 29 11:58:40.376130 sshd[5718]: pam_unix(sshd:session): session closed for user core
Jan 29 11:58:40.382532 systemd[1]: sshd@15-10.0.0.92:22-10.0.0.1:44716.service: Deactivated successfully.
Jan 29 11:58:40.385786 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 11:58:40.386470 systemd-logind[1565]: Session 16 logged out. Waiting for processes to exit.
Jan 29 11:58:40.387358 systemd-logind[1565]: Removed session 16.
Jan 29 11:58:45.386032 systemd[1]: Started sshd@16-10.0.0.92:22-10.0.0.1:36412.service - OpenSSH per-connection server daemon (10.0.0.1:36412).
Jan 29 11:58:45.420222 sshd[5735]: Accepted publickey for core from 10.0.0.1 port 36412 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0
Jan 29 11:58:45.422061 sshd[5735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:58:45.426572 systemd-logind[1565]: New session 17 of user core.
Jan 29 11:58:45.437983 systemd[1]: Started session-17.scope - Session 17 of User core.
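
The entries above trace Calico's CNI teardown for the coredns sandbox: the plugin acquires a host-wide IPAM lock, tries to release the allocation by its handle ID, falls back to releasing by workload ID when the handle is already gone, and treats a missing allocation as ignorable, which is what lets the forced StopPodSandbox/RemovePodSandbox retry succeed anyway. A minimal Go sketch of that lock-and-fallback pattern (an illustrative in-memory store with made-up names, not Calico's actual API):

package main

import (
	"errors"
	"fmt"
	"sync"
)

var errNotFound = errors.New("address not found")

// ipamStore is a stand-in allocation table keyed by handle or workload ID.
type ipamStore struct {
	mu    sync.Mutex // stands in for the host-wide IPAM lock
	byKey map[string]string
}

func (s *ipamStore) release(key string) error {
	if _, ok := s.byKey[key]; !ok {
		return errNotFound
	}
	delete(s.byKey, key)
	return nil
}

// releaseAddress mirrors the logged order: acquire the lock, release by
// handleID, fall back to workloadID, and ignore "doesn't exist" errors.
func releaseAddress(s *ipamStore, handleID, workloadID string) {
	s.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer s.mu.Unlock() // "Released host-wide IPAM lock."
	if err := s.release(handleID); errors.Is(err, errNotFound) {
		fmt.Printf("asked to release %q but it doesn't exist, ignoring\n", handleID)
		// second attempt keyed by workload, as the ipam_plugin.go 440 line reports
		if err := s.release(workloadID); errors.Is(err, errNotFound) {
			fmt.Printf("no allocation for workload %q either; teardown stays idempotent\n", workloadID)
		}
	}
}

func main() {
	s := &ipamStore{byKey: map[string]string{}}
	releaseAddress(s,
		"k8s-pod-network.279ce896a3e6dac9b8650eb47f4781a628601ee1c9d818a6a8239dc79a59a1fd",
		"localhost-k8s-coredns--7db6d8ff4d--sp9qb-eth0")
}
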
Jan 29 11:58:45.545930 sshd[5735]: pam_unix(sshd:session): session closed for user core
Jan 29 11:58:45.551816 systemd[1]: Started sshd@17-10.0.0.92:22-10.0.0.1:36420.service - OpenSSH per-connection server daemon (10.0.0.1:36420).
Jan 29 11:58:45.552451 systemd[1]: sshd@16-10.0.0.92:22-10.0.0.1:36412.service: Deactivated successfully.
Jan 29 11:58:45.555075 systemd-logind[1565]: Session 17 logged out. Waiting for processes to exit.
Jan 29 11:58:45.556197 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 11:58:45.557646 systemd-logind[1565]: Removed session 17.
Jan 29 11:58:45.584878 sshd[5747]: Accepted publickey for core from 10.0.0.1 port 36420 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0
Jan 29 11:58:45.586724 sshd[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:58:45.590688 systemd-logind[1565]: New session 18 of user core.
Jan 29 11:58:45.599893 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 11:58:45.868561 sshd[5747]: pam_unix(sshd:session): session closed for user core
Jan 29 11:58:45.875881 systemd[1]: Started sshd@18-10.0.0.92:22-10.0.0.1:36430.service - OpenSSH per-connection server daemon (10.0.0.1:36430).
Jan 29 11:58:45.876557 systemd[1]: sshd@17-10.0.0.92:22-10.0.0.1:36420.service: Deactivated successfully.
Jan 29 11:58:45.878674 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 11:58:45.880401 systemd-logind[1565]: Session 18 logged out. Waiting for processes to exit.
Jan 29 11:58:45.881761 systemd-logind[1565]: Removed session 18.
Jan 29 11:58:45.910835 sshd[5760]: Accepted publickey for core from 10.0.0.1 port 36430 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0
Jan 29 11:58:45.912444 sshd[5760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:58:45.916802 systemd-logind[1565]: New session 19 of user core.
Jan 29 11:58:45.926878 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 11:58:47.415128 sshd[5760]: pam_unix(sshd:session): session closed for user core
Jan 29 11:58:47.428987 systemd[1]: Started sshd@19-10.0.0.92:22-10.0.0.1:36432.service - OpenSSH per-connection server daemon (10.0.0.1:36432).
Jan 29 11:58:47.429499 systemd[1]: sshd@18-10.0.0.92:22-10.0.0.1:36430.service: Deactivated successfully.
Jan 29 11:58:47.433228 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 11:58:47.436727 systemd-logind[1565]: Session 19 logged out. Waiting for processes to exit.
Jan 29 11:58:47.440732 systemd-logind[1565]: Removed session 19.
Jan 29 11:58:47.472907 sshd[5782]: Accepted publickey for core from 10.0.0.1 port 36432 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0
Jan 29 11:58:47.474686 sshd[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:58:47.479166 systemd-logind[1565]: New session 20 of user core.
Jan 29 11:58:47.485871 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 11:58:47.491259 kubelet[2744]: I0129 11:58:47.491212 2744 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 11:58:47.716921 sshd[5782]: pam_unix(sshd:session): session closed for user core
Jan 29 11:58:47.730088 systemd[1]: Started sshd@20-10.0.0.92:22-10.0.0.1:36436.service - OpenSSH per-connection server daemon (10.0.0.1:36436).
Jan 29 11:58:47.730983 systemd[1]: sshd@19-10.0.0.92:22-10.0.0.1:36432.service: Deactivated successfully.
Jan 29 11:58:47.733558 systemd[1]: session-20.scope: Deactivated successfully.
Jan 29 11:58:47.735395 systemd-logind[1565]: Session 20 logged out. Waiting for processes to exit.
Jan 29 11:58:47.736378 systemd-logind[1565]: Removed session 20.
Jan 29 11:58:47.762294 sshd[5797]: Accepted publickey for core from 10.0.0.1 port 36436 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0
Jan 29 11:58:47.764019 sshd[5797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:58:47.768176 systemd-logind[1565]: New session 21 of user core.
Jan 29 11:58:47.777899 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 29 11:58:47.890058 sshd[5797]: pam_unix(sshd:session): session closed for user core
Jan 29 11:58:47.893876 systemd[1]: sshd@20-10.0.0.92:22-10.0.0.1:36436.service: Deactivated successfully.
Jan 29 11:58:47.896416 systemd[1]: session-21.scope: Deactivated successfully.
Jan 29 11:58:47.897386 systemd-logind[1565]: Session 21 logged out. Waiting for processes to exit.
Jan 29 11:58:47.898449 systemd-logind[1565]: Removed session 21.
Jan 29 11:58:48.565815 kubelet[2744]: I0129 11:58:48.565756 2744 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 11:58:52.909977 systemd[1]: Started sshd@21-10.0.0.92:22-10.0.0.1:46768.service - OpenSSH per-connection server daemon (10.0.0.1:46768).
Jan 29 11:58:52.944706 sshd[5845]: Accepted publickey for core from 10.0.0.1 port 46768 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0
Jan 29 11:58:52.946467 sshd[5845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:58:52.951375 systemd-logind[1565]: New session 22 of user core.
Jan 29 11:58:52.956004 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 29 11:58:53.074907 sshd[5845]: pam_unix(sshd:session): session closed for user core
Jan 29 11:58:53.078834 systemd[1]: sshd@21-10.0.0.92:22-10.0.0.1:46768.service: Deactivated successfully.
Jan 29 11:58:53.081411 systemd-logind[1565]: Session 22 logged out. Waiting for processes to exit.
Jan 29 11:58:53.082413 systemd[1]: session-22.scope: Deactivated successfully.
Jan 29 11:58:53.083826 systemd-logind[1565]: Removed session 22.
Jan 29 11:58:58.084900 systemd[1]: Started sshd@22-10.0.0.92:22-10.0.0.1:46770.service - OpenSSH per-connection server daemon (10.0.0.1:46770).
Jan 29 11:58:58.116035 sshd[5886]: Accepted publickey for core from 10.0.0.1 port 46770 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0
Jan 29 11:58:58.117547 sshd[5886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:58:58.121395 systemd-logind[1565]: New session 23 of user core.
Jan 29 11:58:58.132882 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 29 11:58:58.241265 sshd[5886]: pam_unix(sshd:session): session closed for user core
Jan 29 11:58:58.245715 systemd[1]: sshd@22-10.0.0.92:22-10.0.0.1:46770.service: Deactivated successfully.
Jan 29 11:58:58.247987 systemd-logind[1565]: Session 23 logged out. Waiting for processes to exit.
Jan 29 11:58:58.248081 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 11:58:58.249035 systemd-logind[1565]: Removed session 23.
Jan 29 11:58:58.773927 kubelet[2744]: E0129 11:58:58.773871 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:59:02.774591 kubelet[2744]: E0129 11:59:02.774525 2744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:59:03.251876 systemd[1]: Started sshd@23-10.0.0.92:22-10.0.0.1:43032.service - OpenSSH per-connection server daemon (10.0.0.1:43032).
Jan 29 11:59:03.285137 sshd[5901]: Accepted publickey for core from 10.0.0.1 port 43032 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0
Jan 29 11:59:03.286784 sshd[5901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:59:03.290689 systemd-logind[1565]: New session 24 of user core.
Jan 29 11:59:03.298956 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 29 11:59:03.417475 sshd[5901]: pam_unix(sshd:session): session closed for user core
Jan 29 11:59:03.422560 systemd[1]: sshd@23-10.0.0.92:22-10.0.0.1:43032.service: Deactivated successfully.
Jan 29 11:59:03.425756 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 11:59:03.426561 systemd-logind[1565]: Session 24 logged out. Waiting for processes to exit.
Jan 29 11:59:03.427442 systemd-logind[1565]: Removed session 24.
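
The repeated kubelet dns.go:153 errors above come from the resolver limit check: the glibc resolver consults at most three nameserver entries in resolv.conf (MAXNS), so when the node supplies more than three, kubelet warns and applies only the first three, here 1.1.1.1, 1.0.0.1 and 8.8.8.8. A rough Go sketch of that truncation (simplified parsing for illustration, not kubelet's exact code):

package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc's MAXNS limit

// applyNameserverLimit keeps the first three nameserver entries and warns,
// mirroring the wording of the kubelet error logged above.
func applyNameserverLimit(resolvConf string) []string {
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		kept := servers[:maxNameservers]
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(kept, " "))
		return kept
	}
	return servers
}

func main() {
	// Hypothetical resolv.conf with four entries; only the first three survive.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	applyNameserverLimit(conf)
}
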