Jan 13 20:44:36.905045 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025
Jan 13 20:44:36.905071 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:44:36.905085 kernel: BIOS-provided physical RAM map:
Jan 13 20:44:36.905093 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 20:44:36.905101 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 20:44:36.905109 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 20:44:36.905119 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 13 20:44:36.905127 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 13 20:44:36.905136 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 13 20:44:36.905147 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 13 20:44:36.905155 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 20:44:36.905163 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 20:44:36.905171 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 20:44:36.905180 kernel: NX (Execute Disable) protection: active
Jan 13 20:44:36.905190 kernel: APIC: Static calls initialized
Jan 13 20:44:36.905202 kernel: SMBIOS 2.8 present.
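The command line logged above carries the dm-verity parameters that the rest of this boot depends on. As a minimal sketch (not part of the log), kernel parameters follow a simple whitespace-separated key=value grammar, so they can be recovered from /proc/cmdline like this; the parsing here is ours, only the parameter names come from the log:

    # Split a kernel command line like the one logged above into key/value
    # pairs, e.g. to recover verity.usrhash. Bare flags without "=" are
    # stored with a value of None. Splitting at the first "=" keeps values
    # such as PARTUUID=... intact.
    cmdline = (
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
        "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 "
        "flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07"
    )

    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else None

    print(params["verity.usrhash"])  # root hash dm-verity checks /usr against
    print(params["root"])            # LABEL=ROOT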
Jan 13 20:44:36.905211 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 13 20:44:36.905220 kernel: Hypervisor detected: KVM
Jan 13 20:44:36.905228 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 20:44:36.905237 kernel: kvm-clock: using sched offset of 2275370643 cycles
Jan 13 20:44:36.905246 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 20:44:36.905256 kernel: tsc: Detected 2794.748 MHz processor
Jan 13 20:44:36.905265 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:44:36.905275 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:44:36.905284 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 13 20:44:36.905296 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 20:44:36.905305 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:44:36.905315 kernel: Using GB pages for direct mapping
Jan 13 20:44:36.905324 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:44:36.905333 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 13 20:44:36.905342 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:44:36.905351 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:44:36.905363 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:44:36.905375 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 13 20:44:36.905395 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:44:36.905404 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:44:36.905413 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:44:36.905423 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:44:36.905432 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 13 20:44:36.905441 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 13 20:44:36.905456 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 13 20:44:36.905471 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 13 20:44:36.905482 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 13 20:44:36.905495 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 13 20:44:36.905507 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 13 20:44:36.905518 kernel: No NUMA configuration found
Jan 13 20:44:36.905531 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 13 20:44:36.905542 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 13 20:44:36.905557 kernel: Zone ranges:
Jan 13 20:44:36.905569 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:44:36.905582 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 13 20:44:36.905594 kernel: Normal empty
Jan 13 20:44:36.905606 kernel: Movable zone start for each node
Jan 13 20:44:36.905618 kernel: Early memory node ranges
Jan 13 20:44:36.905630 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 20:44:36.905642 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 13 20:44:36.905654 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 13 20:44:36.905669 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:44:36.905681 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 20:44:36.905691 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 13 20:44:36.905700 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 20:44:36.905710 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 20:44:36.905728 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 20:44:36.905738 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 20:44:36.905748 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 20:44:36.905760 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 20:44:36.905772 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 20:44:36.905782 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 20:44:36.905792 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:44:36.905801 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 20:44:36.905811 kernel: TSC deadline timer available
Jan 13 20:44:36.905821 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 13 20:44:36.905830 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 20:44:36.905840 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 13 20:44:36.905850 kernel: kvm-guest: setup PV sched yield
Jan 13 20:44:36.905859 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 13 20:44:36.905871 kernel: Booting paravirtualized kernel on KVM
Jan 13 20:44:36.905881 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:44:36.905891 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 13 20:44:36.905901 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 13 20:44:36.905911 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 13 20:44:36.905920 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 13 20:44:36.905929 kernel: kvm-guest: PV spinlocks enabled
Jan 13 20:44:36.905939 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 20:44:36.905950 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:44:36.905963 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:44:36.905972 kernel: random: crng init done
Jan 13 20:44:36.905982 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:44:36.905991 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:44:36.906001 kernel: Fallback order for Node 0: 0
Jan 13 20:44:36.906010 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 13 20:44:36.906030 kernel: Policy zone: DMA32
Jan 13 20:44:36.906040 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:44:36.906053 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 136900K reserved, 0K cma-reserved)
Jan 13 20:44:36.906070 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 20:44:36.906087 kernel: ftrace: allocating 37920 entries in 149 pages
Jan 13 20:44:36.906104 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:44:36.906121 kernel: Dynamic Preempt: voluntary
Jan 13 20:44:36.906137 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:44:36.906155 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:44:36.906173 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 20:44:36.906195 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:44:36.906216 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:44:36.906226 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:44:36.906249 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:44:36.906260 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 20:44:36.906270 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 13 20:44:36.906280 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:44:36.906291 kernel: Console: colour VGA+ 80x25
Jan 13 20:44:36.906301 kernel: printk: console [ttyS0] enabled
Jan 13 20:44:36.906311 kernel: ACPI: Core revision 20230628
Jan 13 20:44:36.906322 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 13 20:44:36.906336 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:44:36.906346 kernel: x2apic enabled
Jan 13 20:44:36.906356 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 20:44:36.906367 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 13 20:44:36.906387 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 13 20:44:36.906399 kernel: kvm-guest: setup PV IPIs
Jan 13 20:44:36.906421 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 20:44:36.906432 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 20:44:36.906442 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 13 20:44:36.906453 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 13 20:44:36.906464 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 13 20:44:36.906477 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 13 20:44:36.906488 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 20:44:36.906499 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 20:44:36.906510 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 20:44:36.906520 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 20:44:36.906534 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 13 20:44:36.906544 kernel: RETBleed: Mitigation: untrained return thunk
Jan 13 20:44:36.906555 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 20:44:36.906566 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 20:44:36.906577 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 13 20:44:36.906589 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 13 20:44:36.906600 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 13 20:44:36.906610 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 20:44:36.906624 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 20:44:36.906635 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 20:44:36.906646 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 20:44:36.906656 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 13 20:44:36.906667 kernel: Freeing SMP alternatives memory: 32K
Jan 13 20:44:36.906678 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:44:36.906689 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:44:36.906699 kernel: landlock: Up and running.
Jan 13 20:44:36.906710 kernel: SELinux: Initializing.
Jan 13 20:44:36.906733 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:44:36.906744 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:44:36.906755 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 13 20:44:36.906766 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:44:36.906777 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:44:36.906788 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:44:36.906798 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 13 20:44:36.906809 kernel: ... version: 0
Jan 13 20:44:36.906820 kernel: ... bit width: 48
Jan 13 20:44:36.906833 kernel: ... generic registers: 6
Jan 13 20:44:36.906844 kernel: ... value mask: 0000ffffffffffff
Jan 13 20:44:36.906855 kernel: ... max period: 00007fffffffffff
Jan 13 20:44:36.906865 kernel: ... fixed-purpose events: 0
Jan 13 20:44:36.906876 kernel: ... event mask: 000000000000003f
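The Spectre/RETBleed/SRSO lines in this stretch have a runtime counterpart: the kernel exposes the same mitigation status through sysfs. A minimal sketch (ours, not from the log), assuming a kernel new enough to populate this documented directory:

    # Print one status line per known CPU vulnerability, mirroring the
    # mitigation messages logged during boot above.
    from pathlib import Path

    vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")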
Jan 13 20:44:36.906886 kernel: signal: max sigframe size: 1776
Jan 13 20:44:36.906897 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:44:36.906908 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:44:36.906919 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:44:36.906932 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 20:44:36.906943 kernel: .... node #0, CPUs: #1 #2 #3
Jan 13 20:44:36.906953 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 20:44:36.906964 kernel: smpboot: Max logical packages: 1
Jan 13 20:44:36.906975 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 13 20:44:36.906986 kernel: devtmpfs: initialized
Jan 13 20:44:36.906996 kernel: x86/mm: Memory block size: 128MB
Jan 13 20:44:36.907010 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:44:36.907021 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 20:44:36.907034 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:44:36.907045 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:44:36.907056 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:44:36.907066 kernel: audit: type=2000 audit(1736801076.095:1): state=initialized audit_enabled=0 res=1
Jan 13 20:44:36.907077 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:44:36.907088 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 20:44:36.907099 kernel: cpuidle: using governor menu
Jan 13 20:44:36.907109 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:44:36.907120 kernel: dca service started, version 1.12.1
Jan 13 20:44:36.907133 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 13 20:44:36.907144 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 13 20:44:36.907155 kernel: PCI: Using configuration type 1 for base access
Jan 13 20:44:36.907166 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 20:44:36.907177 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:44:36.907188 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:44:36.907198 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:44:36.907209 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:44:36.907222 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:44:36.907235 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:44:36.907246 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:44:36.907257 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:44:36.907268 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:44:36.907279 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 20:44:36.907289 kernel: ACPI: Interpreter enabled
Jan 13 20:44:36.907300 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 20:44:36.907310 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 20:44:36.907321 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 20:44:36.907335 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 20:44:36.907346 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 13 20:44:36.907356 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:44:36.907618 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:44:36.907789 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 13 20:44:36.907976 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 13 20:44:36.907998 kernel: PCI host bridge to bus 0000:00
Jan 13 20:44:36.908176 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 20:44:36.908397 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 20:44:36.908582 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 20:44:36.908760 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 13 20:44:36.908908 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 20:44:36.909047 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 13 20:44:36.909187 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:44:36.909371 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 13 20:44:36.909573 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 13 20:44:36.909736 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 13 20:44:36.909890 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 13 20:44:36.910078 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 13 20:44:36.910269 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 20:44:36.910464 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 20:44:36.910632 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 13 20:44:36.910795 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 13 20:44:36.910947 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 13 20:44:36.911112 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 13 20:44:36.911289 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 13 20:44:36.911483 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 13 20:44:36.911675 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 13 20:44:36.911867 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 13 20:44:36.912020 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 13 20:44:36.912178 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 13 20:44:36.912329 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 13 20:44:36.912504 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 13 20:44:36.912667 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 13 20:44:36.912838 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 13 20:44:36.913001 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 13 20:44:36.913152 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 13 20:44:36.913302 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 13 20:44:36.913489 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 13 20:44:36.913656 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 13 20:44:36.913672 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 20:44:36.913688 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 20:44:36.913699 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 20:44:36.913710 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 20:44:36.913731 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 13 20:44:36.913742 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 13 20:44:36.913753 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 13 20:44:36.913764 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 13 20:44:36.913775 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 13 20:44:36.913786 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 13 20:44:36.913800 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 13 20:44:36.913812 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 13 20:44:36.913822 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 13 20:44:36.913834 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 13 20:44:36.913844 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 13 20:44:36.913855 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 13 20:44:36.913866 kernel: iommu: Default domain type: Translated
Jan 13 20:44:36.913878 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 20:44:36.913888 kernel: PCI: Using ACPI for IRQ routing
Jan 13 20:44:36.913902 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 20:44:36.913913 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 20:44:36.913924 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 13 20:44:36.914078 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 13 20:44:36.914230 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 13 20:44:36.914393 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 20:44:36.914409 kernel: vgaarb: loaded
Jan 13 20:44:36.914420 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 13 20:44:36.914432 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 13 20:44:36.914447 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 20:44:36.914458 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:44:36.914469 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
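The bracketed IDs in the PCI scan above (for example, virtio 1af4:1001 on the block device) remain visible after boot through sysfs. A minimal sketch (ours) re-listing them, assuming the standard /sys/bus/pci layout:

    # List vendor:device IDs for every enumerated PCI function, matching the
    # "[vvvv:dddd]" pairs in the boot log above.
    from pathlib import Path

    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        vendor = (dev / "vendor").read_text().strip()   # e.g. 0x1af4
        device = (dev / "device").read_text().strip()   # e.g. 0x1001
        print(f"{dev.name}: {vendor[2:]}:{device[2:]}")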
Jan 13 20:44:36.914480 kernel: pnp: PnP ACPI init
Jan 13 20:44:36.914681 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 13 20:44:36.914701 kernel: pnp: PnP ACPI: found 6 devices
Jan 13 20:44:36.914714 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 20:44:36.914735 kernel: NET: Registered PF_INET protocol family
Jan 13 20:44:36.914750 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:44:36.914761 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:44:36.914773 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:44:36.914784 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:44:36.914795 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:44:36.914806 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:44:36.914817 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:44:36.914828 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:44:36.914842 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:44:36.914853 kernel: NET: Registered PF_XDP protocol family
Jan 13 20:44:36.914995 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 20:44:36.915135 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 20:44:36.915272 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 20:44:36.915432 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 13 20:44:36.915573 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 13 20:44:36.915712 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 13 20:44:36.915738 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:44:36.915754 kernel: Initialise system trusted keyrings
Jan 13 20:44:36.915765 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:44:36.915777 kernel: Key type asymmetric registered
Jan 13 20:44:36.915788 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:44:36.915799 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 20:44:36.915810 kernel: io scheduler mq-deadline registered
Jan 13 20:44:36.915821 kernel: io scheduler kyber registered
Jan 13 20:44:36.915832 kernel: io scheduler bfq registered
Jan 13 20:44:36.915843 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 20:44:36.915858 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 13 20:44:36.915869 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 13 20:44:36.915881 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 13 20:44:36.915892 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:44:36.915903 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 20:44:36.915915 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 20:44:36.915926 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 20:44:36.915937 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 20:44:36.916093 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 20:44:36.916112 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 20:44:36.916255 kernel: rtc_cmos 00:04: registered as rtc0
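The "order" in the hash-table lines above is just the log2 of the number of 4 KiB pages the table occupies, which the printed byte sizes confirm. A small worked check (ours) against three of the logged tables:

    # order = log2(bytes / 4096). TCP established: 262144 / 4096 = 64 = 2**6,
    # hence "order: 6" in the log; the others check out the same way.
    import math

    PAGE = 4096
    for name, size_bytes in [("TCP established", 262144),
                             ("TCP bind", 1048576),
                             ("UDP", 65536)]:
        order = int(math.log2(size_bytes / PAGE))
        print(f"{name}: order {order}")   # prints 6, 8, 4 -- matching the log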
Jan 13 20:44:36.916522 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T20:44:36 UTC (1736801076)
Jan 13 20:44:36.916676 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 13 20:44:36.916691 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 20:44:36.916703 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:44:36.916714 kernel: Segment Routing with IPv6
Jan 13 20:44:36.916734 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:44:36.916750 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:44:36.916761 kernel: Key type dns_resolver registered
Jan 13 20:44:36.916772 kernel: IPI shorthand broadcast: enabled
Jan 13 20:44:36.916783 kernel: sched_clock: Marking stable (665003113, 195098216)->(916152152, -56050823)
Jan 13 20:44:36.916794 kernel: registered taskstats version 1
Jan 13 20:44:36.916804 kernel: Loading compiled-in X.509 certificates
Jan 13 20:44:36.916815 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344'
Jan 13 20:44:36.916826 kernel: Key type .fscrypt registered
Jan 13 20:44:36.916837 kernel: Key type fscrypt-provisioning registered
Jan 13 20:44:36.916851 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:44:36.916862 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:44:36.916873 kernel: ima: No architecture policies found
Jan 13 20:44:36.916883 kernel: clk: Disabling unused clocks
Jan 13 20:44:36.916894 kernel: Freeing unused kernel image (initmem) memory: 42976K
Jan 13 20:44:36.916905 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 20:44:36.916916 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Jan 13 20:44:36.916926 kernel: Run /init as init process
Jan 13 20:44:36.916937 kernel: with arguments:
Jan 13 20:44:36.916951 kernel: /init
Jan 13 20:44:36.916961 kernel: with environment:
Jan 13 20:44:36.916972 kernel: HOME=/
Jan 13 20:44:36.916982 kernel: TERM=linux
Jan 13 20:44:36.916993 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:44:36.917006 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:44:36.917020 systemd[1]: Detected virtualization kvm.
Jan 13 20:44:36.917035 systemd[1]: Detected architecture x86-64.
Jan 13 20:44:36.917046 systemd[1]: Running in initrd.
Jan 13 20:44:36.917057 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:44:36.917068 systemd[1]: Hostname set to <localhost>.
Jan 13 20:44:36.917080 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:44:36.917092 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:44:36.917104 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:44:36.917116 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:44:36.917132 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:44:36.917159 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
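The rtc_cmos "setting system clock" entry above prints both forms of the same instant; the epoch value in parentheses converts back to the UTC timestamp shown. A one-liner worked check (ours):

    # 1736801076 seconds since the Unix epoch is exactly the time the kernel
    # printed next to it.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(1736801076, tz=timezone.utc).isoformat())
    # -> 2025-01-13T20:44:36+00:00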
Jan 13 20:44:36.917174 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:44:36.917186 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:44:36.917201 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:44:36.917216 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:44:36.917228 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:44:36.917240 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:44:36.917252 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:44:36.917265 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:44:36.917277 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:44:36.917289 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:44:36.917300 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:44:36.917313 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:44:36.917327 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:44:36.917340 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:44:36.917353 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:44:36.917365 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:44:36.917388 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:44:36.917408 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:44:36.917427 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:44:36.917440 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:44:36.917456 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:44:36.917468 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:44:36.917481 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:44:36.917493 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:44:36.917505 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:44:36.917518 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:44:36.917531 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:44:36.917547 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:44:36.917568 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:44:36.917613 systemd-journald[193]: Collecting audit messages is disabled.
Jan 13 20:44:36.917656 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:44:36.917672 systemd-journald[193]: Journal started
Jan 13 20:44:36.917708 systemd-journald[193]: Runtime Journal (/run/log/journal/2c21834430704650ab20c8a4518ced98) is 6.0M, max 48.4M, 42.3M free.
Jan 13 20:44:36.908551 systemd-modules-load[195]: Inserted module 'overlay'
Jan 13 20:44:36.952220 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:44:36.952248 kernel: Bridge firewalling registered
Jan 13 20:44:36.938567 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jan 13 20:44:36.954462 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:44:36.955876 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:44:36.958398 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:44:36.974807 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:44:36.978711 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:44:36.981705 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:44:36.986753 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:44:36.998754 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:44:36.999736 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:44:37.001305 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:44:37.013603 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:44:37.016453 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:44:37.021753 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:44:37.027039 dracut-cmdline[229]: dracut-dracut-053
Jan 13 20:44:37.031056 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 20:44:37.064054 systemd-resolved[237]: Positive Trust Anchors:
Jan 13 20:44:37.064070 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:44:37.064101 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:44:37.066686 systemd-resolved[237]: Defaulting to hostname 'linux'.
Jan 13 20:44:37.067768 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:44:37.076006 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:44:37.129426 kernel: SCSI subsystem initialized
Jan 13 20:44:37.139424 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:44:37.166429 kernel: iscsi: registered transport (tcp)
Jan 13 20:44:37.188422 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:44:37.188512 kernel: QLogic iSCSI HBA Driver
Jan 13 20:44:37.246050 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:44:37.254515 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:44:37.281406 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:44:37.281492 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:44:37.281508 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:44:37.333439 kernel: raid6: avx2x4 gen() 24968 MB/s
Jan 13 20:44:37.350460 kernel: raid6: avx2x2 gen() 29953 MB/s
Jan 13 20:44:37.367572 kernel: raid6: avx2x1 gen() 25088 MB/s
Jan 13 20:44:37.367647 kernel: raid6: using algorithm avx2x2 gen() 29953 MB/s
Jan 13 20:44:37.385648 kernel: raid6: .... xor() 18686 MB/s, rmw enabled
Jan 13 20:44:37.385738 kernel: raid6: using avx2x2 recovery algorithm
Jan 13 20:44:37.407414 kernel: xor: automatically using best checksumming function avx
Jan 13 20:44:37.572432 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:44:37.584925 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:44:37.596538 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:44:37.622414 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Jan 13 20:44:37.627074 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:44:37.634527 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:44:37.648093 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Jan 13 20:44:37.682668 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:44:37.695588 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:44:37.763434 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:44:37.771631 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:44:37.787913 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:44:37.791719 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:44:37.794739 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:44:37.797678 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:44:37.802422 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 13 20:44:37.835020 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 13 20:44:37.835200 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 20:44:37.835215 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:44:37.835230 kernel: GPT:9289727 != 19775487
Jan 13 20:44:37.835244 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:44:37.835258 kernel: GPT:9289727 != 19775487
Jan 13 20:44:37.835278 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:44:37.835292 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:44:37.835306 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 20:44:37.835320 kernel: AES CTR mode by8 optimization enabled
Jan 13 20:44:37.806052 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:44:37.829274 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:44:37.851446 kernel: libata version 3.00 loaded.
Jan 13 20:44:37.858247 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (463)
Jan 13 20:44:37.861491 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (462)
Jan 13 20:44:37.871419 kernel: ahci 0000:00:1f.2: version 3.0
Jan 13 20:44:37.893904 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 13 20:44:37.893926 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 13 20:44:37.894114 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 13 20:44:37.894278 kernel: scsi host0: ahci
Jan 13 20:44:37.894516 kernel: scsi host1: ahci
Jan 13 20:44:37.894702 kernel: scsi host2: ahci
Jan 13 20:44:37.894887 kernel: scsi host3: ahci
Jan 13 20:44:37.895069 kernel: scsi host4: ahci
Jan 13 20:44:37.895243 kernel: scsi host5: ahci
Jan 13 20:44:37.895474 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 13 20:44:37.895495 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 13 20:44:37.895508 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 13 20:44:37.895521 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 13 20:44:37.895534 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 13 20:44:37.895548 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 13 20:44:37.876305 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 20:44:37.884552 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 20:44:37.886689 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 20:44:37.907746 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 20:44:37.914844 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:44:37.928516 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:44:37.930856 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:44:37.930922 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:44:37.934936 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:44:37.938165 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:44:37.939418 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:44:37.942733 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:44:37.942753 disk-uuid[563]: Primary Header is updated.
Jan 13 20:44:37.942753 disk-uuid[563]: Secondary Entries is updated.
Jan 13 20:44:37.942753 disk-uuid[563]: Secondary Header is updated.
Jan 13 20:44:37.942779 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:44:37.953659 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:44:38.016990 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:44:38.029700 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
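The "GPT:9289727 != 19775487" warnings above are the classic grown-image case: the primary GPT header still points at a backup header located where the old, smaller disk ended (disk-uuid fixes this up a few entries later). A minimal sketch (ours) of the same check, assuming a 512-byte-sector disk or raw image at a path of your choosing:

    # The primary GPT header lives in LBA 1; byte offset 32 holds the LBA
    # where the backup header is supposed to be. On a grown image that LBA
    # no longer matches the real last sector, which is what the kernel logs.
    import struct

    DISK_PATH = "/dev/vda"   # hypothetical; any raw GPT image works
    SECTOR = 512

    with open(DISK_PATH, "rb") as disk:
        disk.seek(1 * SECTOR)
        header = disk.read(92)                       # GPT header is 92 bytes
        assert header[:8] == b"EFI PART"
        backup_lba = struct.unpack_from("<Q", header, 32)[0]
        disk.seek(0, 2)
        last_lba = disk.tell() // SECTOR - 1
        if backup_lba != last_lba:
            print(f"GPT:{backup_lba} != {last_lba}")  # same shape as the log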
Jan 13 20:44:38.048918 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:44:38.204210 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 13 20:44:38.204292 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 13 20:44:38.204307 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 13 20:44:38.204322 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 13 20:44:38.205716 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 13 20:44:38.206419 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 13 20:44:38.207472 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 13 20:44:38.208943 kernel: ata3.00: applying bridge limits
Jan 13 20:44:38.208965 kernel: ata3.00: configured for UDMA/100
Jan 13 20:44:38.209428 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 13 20:44:38.256438 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 13 20:44:38.274287 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 13 20:44:38.274316 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 13 20:44:38.976424 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:44:38.976807 disk-uuid[564]: The operation has completed successfully.
Jan 13 20:44:39.007692 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:44:39.007809 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:44:39.040536 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:44:39.044029 sh[594]: Success
Jan 13 20:44:39.056416 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 13 20:44:39.089238 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:44:39.103043 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:44:39.106235 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:44:39.118341 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb
Jan 13 20:44:39.118374 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:44:39.118397 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:44:39.118408 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:44:39.119786 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:44:39.124432 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:44:39.127535 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:44:39.140542 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:44:39.143097 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:44:39.150668 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:44:39.150698 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:44:39.150709 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:44:39.154401 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:44:39.162724 systemd[1]: mnt-oem.mount: Deactivated successfully.
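verity-setup above maps /dev/mapper/usr so that every read of /usr is checked against the verity.usrhash root hash from the kernel command line. As a rough illustration only (real dm-verity also salts and zero-pads hash blocks, so this will not reproduce an actual veritysetup root hash), the underlying idea is a hash tree over fixed-size blocks:

    # Hash each 4 KiB data block, pack the digests into new blocks, and
    # repeat until a single root hash remains; dm-verity verifies blocks
    # against this tree on every read.
    import hashlib

    BLOCK = 4096

    def layer(blobs):
        return [hashlib.sha256(b).digest() for b in blobs]

    def root_hash(data):
        blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
        hashes = layer(blocks)
        while len(hashes) > 1:
            packed = b"".join(hashes)
            hashes = layer([packed[i:i + BLOCK]
                            for i in range(0, len(packed), BLOCK)])
        return hashes[0].hex()

    print(root_hash(b"\x00" * (8 * BLOCK)))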
Jan 13 20:44:39.164722 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:44:39.173505 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:44:39.181692 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:44:39.237172 ignition[666]: Ignition 2.20.0
Jan 13 20:44:39.237183 ignition[666]: Stage: fetch-offline
Jan 13 20:44:39.237221 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:44:39.237231 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:44:39.237335 ignition[666]: parsed url from cmdline: ""
Jan 13 20:44:39.237339 ignition[666]: no config URL provided
Jan 13 20:44:39.237345 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:44:39.237354 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:44:39.237398 ignition[666]: op(1): [started] loading QEMU firmware config module
Jan 13 20:44:39.237418 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 13 20:44:39.250757 ignition[666]: op(1): [finished] loading QEMU firmware config module
Jan 13 20:44:39.292699 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:44:39.296956 ignition[666]: parsing config with SHA512: d959f821e42d2075c64b86a86330fb56b9da607be858d46c10c7e70bdf1d9e0d823c731c12047e815cb41273ba2bf76f210bdd4466ed24c765164d3aa25fe321
Jan 13 20:44:39.303086 unknown[666]: fetched base config from "system"
Jan 13 20:44:39.303724 unknown[666]: fetched user config from "qemu"
Jan 13 20:44:39.305153 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:44:39.305566 ignition[666]: fetch-offline: fetch-offline passed
Jan 13 20:44:39.305679 ignition[666]: Ignition finished successfully
Jan 13 20:44:39.309549 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:44:39.327991 systemd-networkd[783]: lo: Link UP
Jan 13 20:44:39.328002 systemd-networkd[783]: lo: Gained carrier
Jan 13 20:44:39.329658 systemd-networkd[783]: Enumeration completed
Jan 13 20:44:39.329801 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:44:39.330098 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:44:39.330102 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:44:39.331239 systemd-networkd[783]: eth0: Link UP
Jan 13 20:44:39.331243 systemd-networkd[783]: eth0: Gained carrier
Jan 13 20:44:39.331251 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:44:39.331334 systemd[1]: Reached target network.target - Network.
Jan 13 20:44:39.333977 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 20:44:39.343527 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
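The fetch-offline entries above show Ignition loading qemu_fw_cfg and then parsing the user config it finds there, logging its SHA512. A minimal sketch (ours) of the same inspection from userspace, assuming the sysfs path that qemu_fw_cfg exposes for the fw_cfg key Flatcar documents (opt/com.coreos/config):

    # Read the Ignition config handed in by QEMU via fw_cfg and print its
    # SHA512; on this boot it should match the digest Ignition logged.
    import hashlib

    RAW = "/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw"

    with open(RAW, "rb") as f:
        config = f.read()

    print("parsing config with SHA512:", hashlib.sha512(config).hexdigest())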
Jan 13 20:44:39.353436 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.149/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:44:39.359353 ignition[786]: Ignition 2.20.0
Jan 13 20:44:39.359365 ignition[786]: Stage: kargs
Jan 13 20:44:39.359625 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:44:39.359636 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:44:39.360664 ignition[786]: kargs: kargs passed
Jan 13 20:44:39.360714 ignition[786]: Ignition finished successfully
Jan 13 20:44:39.367795 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:44:39.379507 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:44:39.393031 ignition[795]: Ignition 2.20.0
Jan 13 20:44:39.393045 ignition[795]: Stage: disks
Jan 13 20:44:39.393225 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:44:39.393240 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:44:39.397295 ignition[795]: disks: disks passed
Jan 13 20:44:39.397367 ignition[795]: Ignition finished successfully
Jan 13 20:44:39.400817 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:44:39.401291 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:44:39.403109 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:44:39.405348 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:44:39.407929 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:44:39.408322 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:44:39.421783 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:44:39.471400 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 20:44:39.786066 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:44:39.800485 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:44:39.897407 kernel: EXT4-fs (vda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none.
Jan 13 20:44:39.898077 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:44:39.899307 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:44:39.907605 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:44:39.911205 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:44:39.912256 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:44:39.912306 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:44:39.924099 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (814)
Jan 13 20:44:39.924133 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:44:39.924153 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:44:39.924172 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:44:39.912338 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:44:39.926470 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:44:39.928933 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:44:39.937339 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:44:39.939309 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:44:39.980287 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:44:39.986056 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:44:39.993144 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:44:39.999051 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:44:40.094548 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:44:40.102579 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:44:40.119053 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:44:40.128200 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:44:40.130062 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:44:40.149698 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:44:40.301084 ignition[932]: INFO : Ignition 2.20.0
Jan 13 20:44:40.301084 ignition[932]: INFO : Stage: mount
Jan 13 20:44:40.339666 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:44:40.339666 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:44:40.342593 ignition[932]: INFO : mount: mount passed
Jan 13 20:44:40.343462 ignition[932]: INFO : Ignition finished successfully
Jan 13 20:44:40.346326 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:44:40.356604 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:44:40.364123 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:44:40.378404 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (941)
Jan 13 20:44:40.378433 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 20:44:40.380947 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:44:40.380964 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:44:40.383406 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:44:40.385448 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:44:40.409301 ignition[958]: INFO : Ignition 2.20.0
Jan 13 20:44:40.409301 ignition[958]: INFO : Stage: files
Jan 13 20:44:40.411308 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:44:40.411308 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:44:40.411308 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:44:40.411308 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:44:40.411308 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:44:40.417966 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:44:40.417966 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:44:40.417966 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:44:40.417966 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 13 20:44:40.417966 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 13 20:44:40.417966 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:44:40.417966 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 20:44:40.413521 unknown[958]: wrote ssh authorized keys file for user: core
Jan 13 20:44:40.458609 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 13 20:44:40.548218 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:44:40.550434 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:44:40.550434 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:44:40.550434 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:44:40.550434 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:44:40.550434 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:44:40.550434 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:44:40.550434 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:44:40.550434 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:44:40.550434 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:44:40.550434 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:44:40.550434 ignition[958]: INFO
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:44:40.550434 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:44:40.550434 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:44:40.550434 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 13 20:44:40.904906 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 13 20:44:41.099585 systemd-networkd[783]: eth0: Gained IPv6LL Jan 13 20:44:41.381246 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 13 20:44:41.381246 ignition[958]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 13 20:44:41.385206 ignition[958]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 13 20:44:41.385206 ignition[958]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 13 20:44:41.385206 ignition[958]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 13 20:44:41.385206 ignition[958]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 13 20:44:41.385206 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:44:41.385206 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:44:41.385206 ignition[958]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 13 20:44:41.385206 ignition[958]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jan 13 20:44:41.385206 ignition[958]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 20:44:41.385206 ignition[958]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 20:44:41.385206 ignition[958]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jan 13 20:44:41.385206 ignition[958]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Jan 13 20:44:41.412946 ignition[958]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 20:44:41.418023 ignition[958]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 20:44:41.419616 ignition[958]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 20:44:41.419616 ignition[958]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Jan 
13 20:44:41.419616 ignition[958]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 20:44:41.419616 ignition[958]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:44:41.419616 ignition[958]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:44:41.419616 ignition[958]: INFO : files: files passed Jan 13 20:44:41.419616 ignition[958]: INFO : Ignition finished successfully Jan 13 20:44:41.421463 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 20:44:41.431560 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:44:41.433632 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:44:41.436669 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:44:41.436801 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 20:44:41.471560 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Jan 13 20:44:41.474545 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:44:41.474545 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:44:41.477988 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:44:41.481540 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:44:41.483164 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:44:41.499536 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:44:41.551807 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:44:41.551946 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:44:41.554355 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:44:41.556533 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:44:41.557006 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:44:41.567513 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:44:41.580346 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:44:41.592504 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:44:41.601193 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:44:41.602545 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:44:41.604931 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:44:41.607182 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:44:41.607298 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:44:41.609625 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:44:41.611434 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:44:41.613657 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
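The files stage above is driven entirely by that supplied Ignition config: it writes the helm tarball and manifests, installs the 10-use-cgroupfs.conf drop-in for containerd, writes and enables prepare-helm.service, disables coreos-metadata.service, and links the kubernetes sysext into /etc/extensions. A minimal Butane sketch that would yield operations like these (the unit body and drop-in contents are assumptions for illustration; the log records only the file names being written):

  variant: flatcar
  version: 1.0.0
  storage:
    files:
      - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
        contents:
          source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
      - path: /opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw
        contents:
          source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw
    links:
      - path: /etc/extensions/kubernetes.raw
        target: /opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw
  systemd:
    units:
      - name: containerd.service
        dropins:
          - name: 10-use-cgroupfs.conf
            contents: |
              # assumed drop-in body; only the file name appears in the log
              [Service]
              Environment=CONTAINERD_CONFIG=/usr/share/containerd/config-cgroupfs.toml
      - name: coreos-metadata.service
        enabled: false
      - name: prepare-helm.service
        enabled: true
        contents: |
          [Unit]
          Description=Unpack helm to /opt/bin
          [Service]
          Type=oneshot
          ExecStartPre=/usr/bin/mkdir -p /opt/bin
          ExecStart=/usr/bin/tar -C /opt/bin --strip-components=1 -xzf /opt/helm-v3.13.2-linux-amd64.tar.gz linux-amd64/helm
          [Install]
          WantedBy=multi-user.target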
Jan 13 20:44:41.615780 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:44:41.617974 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:44:41.649150 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:44:41.651538 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:44:41.654231 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:44:41.656526 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:44:41.658715 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:44:41.660489 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:44:41.660661 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:44:41.663177 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:44:41.665128 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:44:41.667698 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:44:41.667836 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:44:41.669984 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:44:41.670123 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:44:41.672443 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:44:41.672553 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:44:41.674632 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:44:41.676347 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:44:41.680426 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:44:41.682087 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:44:41.684140 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:44:41.685971 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:44:41.686065 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:44:41.688132 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:44:41.688219 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:44:41.690690 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:44:41.690829 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:44:41.692825 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:44:41.692927 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:44:41.709551 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:44:41.711408 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:44:41.761257 ignition[1013]: INFO : Ignition 2.20.0 Jan 13 20:44:41.761257 ignition[1013]: INFO : Stage: umount Jan 13 20:44:41.761257 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:44:41.761257 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:44:41.761257 ignition[1013]: INFO : umount: umount passed Jan 13 20:44:41.761257 ignition[1013]: INFO : Ignition finished successfully Jan 13 20:44:41.758707 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jan 13 20:44:41.758852 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:44:41.761355 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:44:41.761475 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:44:41.764810 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:44:41.764919 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:44:41.769127 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:44:41.769236 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:44:41.771867 systemd[1]: Stopped target network.target - Network. Jan 13 20:44:41.773330 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:44:41.773418 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:44:41.775868 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:44:41.775918 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:44:41.777164 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:44:41.777211 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:44:41.779854 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:44:41.779913 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:44:41.782339 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:44:41.784530 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:44:41.787449 systemd-networkd[783]: eth0: DHCPv6 lease lost Jan 13 20:44:41.787767 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:44:41.790013 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:44:41.790172 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:44:41.791851 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:44:41.791996 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:44:41.795588 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:44:41.795647 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:44:41.809503 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:44:41.809906 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:44:41.809958 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:44:41.810720 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:44:41.810763 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:44:41.810901 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:44:41.810941 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:44:41.811140 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:44:41.811180 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:44:41.815451 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:44:41.827464 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:44:41.827619 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Jan 13 20:44:41.847137 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:44:41.847317 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:44:41.898278 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:44:41.898326 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:44:41.901127 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:44:41.901165 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:44:41.901620 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:44:41.901666 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:44:41.902445 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:44:41.902488 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:44:41.903101 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:44:41.903146 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:44:41.919540 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:44:41.919812 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:44:41.919865 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:44:41.922419 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 20:44:41.922467 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:44:41.925013 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:44:41.925062 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:44:41.927740 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:44:41.927788 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:44:41.928335 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:44:41.928457 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:44:42.339374 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:44:42.339556 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:44:42.342009 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:44:42.343406 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:44:42.343470 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:44:42.358510 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:44:42.372781 systemd[1]: Switching root. Jan 13 20:44:42.405992 systemd-journald[193]: Journal stopped Jan 13 20:44:44.112274 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
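Switching root terminates the initrd journald (the SIGTERM from PID 1 above); its entries are handed off to the journald instance started from the real root, so the stream continues below without a gap. After boot, the same log can be replayed with the microsecond timestamps shown here:

  # current boot, precise timestamps
  journalctl -b -o short-precise
  # kernel messages only
  journalctl -b -k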
Jan 13 20:44:44.112364 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:44:44.112400 kernel: SELinux: policy capability open_perms=1 Jan 13 20:44:44.112420 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:44:44.112442 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:44:44.112459 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:44:44.112474 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:44:44.112489 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:44:44.112517 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:44:44.112532 kernel: audit: type=1403 audit(1736801083.284:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:44:44.112559 systemd[1]: Successfully loaded SELinux policy in 40.121ms. Jan 13 20:44:44.112578 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.207ms. Jan 13 20:44:44.112594 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:44:44.112611 systemd[1]: Detected virtualization kvm. Jan 13 20:44:44.112627 systemd[1]: Detected architecture x86-64. Jan 13 20:44:44.112643 systemd[1]: Detected first boot. Jan 13 20:44:44.112658 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:44:44.112674 zram_generator::config[1074]: No configuration found. Jan 13 20:44:44.112695 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:44:44.112710 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:44:44.112726 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 20:44:44.112743 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:44:44.112760 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:44:44.112775 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:44:44.112791 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:44:44.112808 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:44:44.112826 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:44:44.112845 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:44:44.112861 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:44:44.112876 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:44:44.112892 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:44:44.112908 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:44:44.112924 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:44:44.112941 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:44:44.112957 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:44:44.112976 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Jan 13 20:44:44.112992 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:44:44.113008 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:44:44.113023 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:44:44.113040 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:44:44.113056 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:44:44.113072 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:44:44.113088 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:44:44.113106 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:44:44.113122 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:44:44.113138 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:44:44.113153 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:44:44.113171 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:44:44.113187 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:44:44.113203 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:44:44.113219 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:44:44.113235 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:44:44.113251 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:44:44.113270 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:44:44.113286 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:44:44.113302 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:44:44.113317 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:44:44.113334 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:44:44.113349 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:44:44.113371 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:44:44.113424 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:44:44.113444 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:44:44.113460 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:44:44.113476 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:44:44.113491 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:44:44.113507 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:44:44.113531 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:44:44.113547 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 13 20:44:44.113566 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
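The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop jobs above are instances of systemd's modprobe@.service template, which turns the instance name into a modprobe invocation. From memory, the upstream unit is approximately as follows (details may differ from the file shipped on this image):

  [Unit]
  Description=Load Kernel Module %i
  DefaultDependencies=no
  Before=sysinit.target

  [Service]
  Type=oneshot
  ExecStart=-/usr/sbin/modprobe -abq %I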
Jan 13 20:44:44.113584 kernel: fuse: init (API version 7.39) Jan 13 20:44:44.113599 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:44:44.113616 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:44:44.113632 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:44:44.113651 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:44:44.113705 systemd-journald[1163]: Collecting audit messages is disabled. Jan 13 20:44:44.113740 systemd-journald[1163]: Journal started Jan 13 20:44:44.113771 systemd-journald[1163]: Runtime Journal (/run/log/journal/2c21834430704650ab20c8a4518ced98) is 6.0M, max 48.4M, 42.3M free. Jan 13 20:44:44.125637 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:44:44.128405 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:44:44.130408 kernel: ACPI: bus type drm_connector registered Jan 13 20:44:44.133813 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:44:44.138924 kernel: loop: module loaded Jan 13 20:44:44.138679 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:44:44.139970 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:44:44.141516 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:44:44.142827 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:44:44.143200 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:44:44.143854 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:44:44.144628 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:44:44.145336 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:44:44.145918 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:44:44.146158 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:44:44.146938 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:44:44.147171 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:44:44.148016 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:44:44.148252 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:44:44.148910 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:44:44.149116 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:44:44.149699 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:44:44.149901 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:44:44.150551 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:44:44.150779 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:44:44.151506 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:44:44.152154 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:44:44.171603 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 13 20:44:44.176096 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:44:44.189552 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:44:44.192682 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:44:44.193936 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:44:44.196223 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:44:44.201683 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:44:44.202047 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:44:44.208562 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:44:44.210584 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:44:44.218766 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:44:44.225586 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:44:44.229454 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:44:44.239446 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:44:44.246112 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:44:44.250648 systemd-journald[1163]: Time spent on flushing to /var/log/journal/2c21834430704650ab20c8a4518ced98 is 19.074ms for 943 entries. Jan 13 20:44:44.250648 systemd-journald[1163]: System Journal (/var/log/journal/2c21834430704650ab20c8a4518ced98) is 8.0M, max 195.6M, 187.6M free. Jan 13 20:44:44.320192 systemd-journald[1163]: Received client request to flush runtime journal. Jan 13 20:44:44.252153 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:44:44.257491 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:44:44.264602 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:44:44.272154 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:44:44.290853 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Jan 13 20:44:44.290871 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Jan 13 20:44:44.322310 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:44:44.324288 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:44:44.331300 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:44:44.337351 udevadm[1219]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 20:44:44.368197 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:44:44.377807 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:44:44.411760 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Jan 13 20:44:44.411788 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. 
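journald above runs with a 6.0M runtime journal in /run and an 8.0M persistent journal capped near 195.6M, then flushes the runtime entries to /var/log/journal when asked. Those ceilings are configurable; a hedged drop-in (path and values are illustrative):

  # /etc/systemd/journald.conf.d/size.conf
  [Journal]
  Storage=persistent
  RuntimeMaxUse=48M
  SystemMaxUse=200M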
Jan 13 20:44:44.418883 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:44:44.931716 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:44:44.944731 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:44:44.970081 systemd-udevd[1239]: Using default interface naming scheme 'v255'. Jan 13 20:44:44.987128 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:44:45.001646 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:44:45.021672 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:44:45.052655 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 13 20:44:45.055060 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1247) Jan 13 20:44:45.086288 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:44:45.137419 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 13 20:44:45.137898 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:44:45.142639 kernel: ACPI: button: Power Button [PWRF] Jan 13 20:44:45.161622 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 13 20:44:45.161955 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 13 20:44:45.162143 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 13 20:44:45.169496 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 13 20:44:45.195128 systemd-networkd[1243]: lo: Link UP Jan 13 20:44:45.195142 systemd-networkd[1243]: lo: Gained carrier Jan 13 20:44:45.197175 systemd-networkd[1243]: Enumeration completed Jan 13 20:44:45.197328 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:44:45.213961 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 20:44:45.199113 systemd-networkd[1243]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:44:45.199118 systemd-networkd[1243]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:44:45.200052 systemd-networkd[1243]: eth0: Link UP Jan 13 20:44:45.200057 systemd-networkd[1243]: eth0: Gained carrier Jan 13 20:44:45.200072 systemd-networkd[1243]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:44:45.206692 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:44:45.212457 systemd-networkd[1243]: eth0: DHCPv4 address 10.0.0.149/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:44:45.223304 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:44:45.311813 kernel: kvm_amd: TSC scaling supported Jan 13 20:44:45.311889 kernel: kvm_amd: Nested Virtualization enabled Jan 13 20:44:45.311906 kernel: kvm_amd: Nested Paging enabled Jan 13 20:44:45.313004 kernel: kvm_amd: LBR virtualization supported Jan 13 20:44:45.313040 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 13 20:44:45.313674 kernel: kvm_amd: Virtual GIF supported Jan 13 20:44:45.334399 kernel: EDAC MC: Ver: 3.0.0 Jan 13 20:44:45.372805 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
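eth0 above is matched by the shipped /usr/lib/systemd/network/zz-default.network and leases 10.0.0.149/16 from 10.0.0.1 over DHCP. A minimal .network unit with the same effect, pinned to the interface name instead of a wildcard, might look like this (illustrative override):

  # /etc/systemd/network/00-eth0.network
  [Match]
  Name=eth0

  [Network]
  DHCP=yes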
Jan 13 20:44:45.394128 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:44:45.407762 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:44:45.416774 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:44:45.450762 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:44:45.452522 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:44:45.500547 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:44:45.505261 lvm[1289]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:44:45.544803 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:44:45.546482 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:44:45.547828 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:44:45.547856 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:44:45.548928 systemd[1]: Reached target machines.target - Containers. Jan 13 20:44:45.551016 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:44:45.568579 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:44:45.580010 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:44:45.581274 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:44:45.582644 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:44:45.587075 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:44:45.591008 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:44:45.594390 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:44:45.602969 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:44:45.640417 kernel: loop0: detected capacity change from 0 to 138184 Jan 13 20:44:45.659412 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:44:45.694409 kernel: loop1: detected capacity change from 0 to 140992 Jan 13 20:44:45.776190 kernel: loop2: detected capacity change from 0 to 211296 Jan 13 20:44:45.780293 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:44:45.783557 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:44:45.835423 kernel: loop3: detected capacity change from 0 to 138184 Jan 13 20:44:45.851415 kernel: loop4: detected capacity change from 0 to 140992 Jan 13 20:44:45.860413 kernel: loop5: detected capacity change from 0 to 211296 Jan 13 20:44:45.865684 (sd-merge)[1309]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 20:44:45.866283 (sd-merge)[1309]: Merged extensions into '/usr'. Jan 13 20:44:45.870905 systemd[1]: Reloading requested from client PID 1297 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:44:45.870922 systemd[1]: Reloading... 
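The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr, after which systemd reloads to pick up the units they provide. The merge can be inspected or redone at runtime:

  # show known extension images, then re-merge after adding one
  # (images live under /etc/extensions or /var/lib/extensions)
  systemd-sysext list
  systemd-sysext refresh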
Jan 13 20:44:45.969420 zram_generator::config[1337]: No configuration found. Jan 13 20:44:46.088937 ldconfig[1293]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:44:46.139611 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:44:46.203928 systemd[1]: Reloading finished in 332 ms. Jan 13 20:44:46.224243 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:44:46.225924 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:44:46.243589 systemd[1]: Starting ensure-sysext.service... Jan 13 20:44:46.246638 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:44:46.254508 systemd[1]: Reloading requested from client PID 1381 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:44:46.254525 systemd[1]: Reloading... Jan 13 20:44:46.318420 zram_generator::config[1411]: No configuration found. Jan 13 20:44:46.321704 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:44:46.322087 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:44:46.323068 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:44:46.323363 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Jan 13 20:44:46.323504 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Jan 13 20:44:46.326810 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:44:46.326824 systemd-tmpfiles[1382]: Skipping /boot Jan 13 20:44:46.337170 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:44:46.337342 systemd-tmpfiles[1382]: Skipping /boot Jan 13 20:44:46.441170 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:44:46.505704 systemd[1]: Reloading finished in 250 ms. Jan 13 20:44:46.524453 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:44:46.547576 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:44:46.550515 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:44:46.553339 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:44:46.557655 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:44:46.562654 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:44:46.568946 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:44:46.569108 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:44:46.571585 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:44:46.577475 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
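The systemd-tmpfiles "Duplicate line for path ..." warnings above mean two tmpfiles.d fragments declare the same path; the first declaration wins and the duplicates are ignored. Fragments use a type/path/mode/user/group/age line format, for example (illustrative):

  # /etc/tmpfiles.d/example.conf
  # Type  Path              Mode  User  Group            Age  Argument
  d       /var/log/journal  2755  root  systemd-journal  -    -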
Jan 13 20:44:46.586658 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:44:46.589936 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:44:46.590090 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:44:46.591342 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:44:46.591688 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:44:46.593681 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:44:46.596546 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:44:46.600779 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:44:46.604200 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:44:46.604487 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:44:46.614499 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:44:46.624268 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:44:46.624579 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:44:46.631184 augenrules[1496]: No rules Jan 13 20:44:46.634696 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:44:46.638310 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:44:46.643599 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:44:46.650526 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:44:46.651661 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:44:46.653649 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:44:46.654733 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:44:46.656713 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:44:46.657195 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:44:46.659052 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:44:46.659355 systemd-resolved[1459]: Positive Trust Anchors: Jan 13 20:44:46.659415 systemd-resolved[1459]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:44:46.659554 systemd-resolved[1459]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:44:46.661788 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 13 20:44:46.662224 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:44:46.665999 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:44:46.666552 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:44:46.667583 systemd-resolved[1459]: Defaulting to hostname 'linux'. Jan 13 20:44:46.669188 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:44:46.669868 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:44:46.671958 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:44:46.674505 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:44:46.674892 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:44:46.677840 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:44:46.684235 systemd[1]: Finished ensure-sysext.service. Jan 13 20:44:46.692724 systemd[1]: Reached target network.target - Network. Jan 13 20:44:46.693815 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:44:46.695206 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:44:46.695298 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:44:46.708765 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 20:44:46.710023 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:44:46.775933 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 20:44:46.821979 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:44:46.822568 systemd-timesyncd[1522]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 20:44:46.822632 systemd-timesyncd[1522]: Initial clock synchronization to Mon 2025-01-13 20:44:47.211956 UTC. Jan 13 20:44:46.823461 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:44:46.824880 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:44:46.826195 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:44:46.827469 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:44:46.827497 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:44:46.828421 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:44:46.829666 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:44:46.831021 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:44:46.832308 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:44:46.833729 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:44:46.836926 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:44:46.839923 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
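systemd-timesyncd above takes its server (10.0.0.1) from the DHCP lease and steps the clock once, which is why the synchronized time sits roughly 0.39s ahead of the log timestamp on that line. Servers can also be pinned statically; a hedged drop-in (values illustrative):

  # /etc/systemd/timesyncd.conf.d/ntp.conf
  [Time]
  NTP=10.0.0.1
  FallbackNTP=0.flatcar.pool.ntp.org 1.flatcar.pool.ntp.org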
Jan 13 20:44:46.849315 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:44:46.883317 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:44:46.884394 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:44:46.885710 systemd[1]: System is tainted: cgroupsv1 Jan 13 20:44:46.885768 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:44:46.885799 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:44:46.888043 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:44:46.891038 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:44:46.893660 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:44:46.930095 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:44:46.932781 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:44:46.935411 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:44:46.938193 jq[1528]: false Jan 13 20:44:46.938674 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:44:46.941567 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:44:46.946721 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:44:46.971524 extend-filesystems[1530]: Found loop3 Jan 13 20:44:46.971524 extend-filesystems[1530]: Found loop4 Jan 13 20:44:46.971524 extend-filesystems[1530]: Found loop5 Jan 13 20:44:46.971524 extend-filesystems[1530]: Found sr0 Jan 13 20:44:46.977357 extend-filesystems[1530]: Found vda Jan 13 20:44:46.977357 extend-filesystems[1530]: Found vda1 Jan 13 20:44:46.977357 extend-filesystems[1530]: Found vda2 Jan 13 20:44:46.977357 extend-filesystems[1530]: Found vda3 Jan 13 20:44:46.977357 extend-filesystems[1530]: Found usr Jan 13 20:44:46.977357 extend-filesystems[1530]: Found vda4 Jan 13 20:44:46.977357 extend-filesystems[1530]: Found vda6 Jan 13 20:44:46.977357 extend-filesystems[1530]: Found vda7 Jan 13 20:44:46.977357 extend-filesystems[1530]: Found vda9 Jan 13 20:44:46.977357 extend-filesystems[1530]: Checking size of /dev/vda9 Jan 13 20:44:46.985056 dbus-daemon[1527]: [system] SELinux support is enabled Jan 13 20:44:46.977581 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:44:46.980675 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:44:47.019240 systemd-networkd[1243]: eth0: Gained IPv6LL Jan 13 20:44:47.022841 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:44:47.027542 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 13 20:44:47.027952 extend-filesystems[1530]: Resized partition /dev/vda9 Jan 13 20:44:47.052708 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1252) Jan 13 20:44:47.052779 extend-filesystems[1554]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:44:47.056417 update_engine[1546]: I20250113 20:44:47.037413 1546 main.cc:92] Flatcar Update Engine starting Jan 13 20:44:47.056417 update_engine[1546]: I20250113 20:44:47.041759 1546 update_check_scheduler.cc:74] Next update check in 10m40s Jan 13 20:44:47.063040 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:44:47.091432 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 20:44:47.094256 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:44:47.096193 jq[1553]: true Jan 13 20:44:47.098035 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:44:47.098385 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:44:47.098780 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:44:47.099102 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:44:47.101780 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:44:47.102096 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:44:47.113069 (ntainerd)[1563]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:44:47.116342 jq[1562]: true Jan 13 20:44:47.134947 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:44:47.136710 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:44:47.147601 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 20:44:47.154881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:44:47.176592 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:44:47.180548 sshd_keygen[1552]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:44:47.233075 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:44:47.233597 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:44:47.236039 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:44:47.236066 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:44:47.238754 systemd-logind[1542]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 20:44:47.238784 systemd-logind[1542]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 20:44:47.239353 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:44:47.240126 systemd-logind[1542]: New seat seat0. Jan 13 20:44:47.245670 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:44:47.246946 systemd[1]: Started systemd-logind.service - User Login Management. 
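extend-filesystems above grows the root filesystem online: the kernel reports the EXT4 resize from 553472 to 1864699 4k blocks, which resize2fs 1.47.1 completes below. Done by hand, the equivalent is roughly the following (growpart is cloud-utils' partition grower, an assumption here; Flatcar's unit drives resize2fs itself):

  # grow partition 9 of /dev/vda to fill the disk, then resize ext4 online
  growpart /dev/vda 9
  resize2fs /dev/vda9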
Jan 13 20:44:47.249971 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:44:47.325589 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:44:47.342451 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:44:47.342801 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:44:47.343848 locksmithd[1603]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:44:47.359964 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:44:47.364709 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 20:44:47.365168 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 20:44:47.368087 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:44:47.368636 tar[1560]: linux-amd64/helm Jan 13 20:44:47.370637 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:44:47.441589 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:44:47.452692 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:44:47.462664 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:44:47.464085 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:44:47.499463 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 20:44:48.286521 extend-filesystems[1554]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 20:44:48.286521 extend-filesystems[1554]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:44:48.286521 extend-filesystems[1554]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 20:44:48.340415 extend-filesystems[1530]: Resized filesystem in /dev/vda9 Jan 13 20:44:48.291800 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:44:48.341859 containerd[1563]: time="2025-01-13T20:44:48.341607111Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:44:48.292144 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:44:48.367416 containerd[1563]: time="2025-01-13T20:44:48.367330735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:44:48.369627 containerd[1563]: time="2025-01-13T20:44:48.369575907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:44:48.369627 containerd[1563]: time="2025-01-13T20:44:48.369615987Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:44:48.369707 containerd[1563]: time="2025-01-13T20:44:48.369653581Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:44:48.369908 containerd[1563]: time="2025-01-13T20:44:48.369889912Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:44:48.369931 containerd[1563]: time="2025-01-13T20:44:48.369910657Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 13 20:44:48.370027 containerd[1563]: time="2025-01-13T20:44:48.370007811Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:44:48.370027 containerd[1563]: time="2025-01-13T20:44:48.370025829Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:44:48.370328 containerd[1563]: time="2025-01-13T20:44:48.370299336Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:44:48.370328 containerd[1563]: time="2025-01-13T20:44:48.370318609Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:44:48.370374 containerd[1563]: time="2025-01-13T20:44:48.370330570Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:44:48.370374 containerd[1563]: time="2025-01-13T20:44:48.370341151Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:44:48.370505 containerd[1563]: time="2025-01-13T20:44:48.370479555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:44:48.370774 containerd[1563]: time="2025-01-13T20:44:48.370741958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:44:48.370941 containerd[1563]: time="2025-01-13T20:44:48.370913077Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:44:48.370941 containerd[1563]: time="2025-01-13T20:44:48.370930135Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:44:48.371089 containerd[1563]: time="2025-01-13T20:44:48.371062157Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:44:48.371161 containerd[1563]: time="2025-01-13T20:44:48.371134116Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:44:48.742464 tar[1560]: linux-amd64/LICENSE Jan 13 20:44:48.742869 tar[1560]: linux-amd64/README.md Jan 13 20:44:48.842212 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:44:49.016678 bash[1591]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:44:49.019855 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:44:49.030145 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 13 20:44:49.087437 containerd[1563]: time="2025-01-13T20:44:49.087336054Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:44:49.087572 containerd[1563]: time="2025-01-13T20:44:49.087470062Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 13 20:44:49.087572 containerd[1563]: time="2025-01-13T20:44:49.087501447Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:44:49.087572 containerd[1563]: time="2025-01-13T20:44:49.087520537Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:44:49.087572 containerd[1563]: time="2025-01-13T20:44:49.087539762Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:44:49.088032 containerd[1563]: time="2025-01-13T20:44:49.087985547Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:44:49.089003 containerd[1563]: time="2025-01-13T20:44:49.088929433Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:44:49.096521 containerd[1563]: time="2025-01-13T20:44:49.096475136Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:44:49.096521 containerd[1563]: time="2025-01-13T20:44:49.096512694Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:44:49.096521 containerd[1563]: time="2025-01-13T20:44:49.096536003Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:44:49.096814 containerd[1563]: time="2025-01-13T20:44:49.096555312Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:44:49.096814 containerd[1563]: time="2025-01-13T20:44:49.096574350Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:44:49.096814 containerd[1563]: time="2025-01-13T20:44:49.096594241Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:44:49.096814 containerd[1563]: time="2025-01-13T20:44:49.096629846Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:44:49.096814 containerd[1563]: time="2025-01-13T20:44:49.096648322Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:44:49.096814 containerd[1563]: time="2025-01-13T20:44:49.096681037Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:44:49.096814 containerd[1563]: time="2025-01-13T20:44:49.096697239Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:44:49.096814 containerd[1563]: time="2025-01-13T20:44:49.096714791Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:44:49.096814 containerd[1563]: time="2025-01-13T20:44:49.096742424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:44:49.096814 containerd[1563]: time="2025-01-13T20:44:49.096759686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:44:49.096814 containerd[1563]: time="2025-01-13T20:44:49.096806399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jan 13 20:44:49.096814 containerd[1563]: time="2025-01-13T20:44:49.096824419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:44:49.097210 containerd[1563]: time="2025-01-13T20:44:49.096841930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:44:49.097210 containerd[1563]: time="2025-01-13T20:44:49.096858921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:44:49.097210 containerd[1563]: time="2025-01-13T20:44:49.096873845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:44:49.097210 containerd[1563]: time="2025-01-13T20:44:49.096889828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:44:49.097210 containerd[1563]: time="2025-01-13T20:44:49.096908804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:44:49.097210 containerd[1563]: time="2025-01-13T20:44:49.096935024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:44:49.097210 containerd[1563]: time="2025-01-13T20:44:49.096951983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:44:49.097210 containerd[1563]: time="2025-01-13T20:44:49.096981654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:44:49.097210 containerd[1563]: time="2025-01-13T20:44:49.097010679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:44:49.097210 containerd[1563]: time="2025-01-13T20:44:49.097045130Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:44:49.097210 containerd[1563]: time="2025-01-13T20:44:49.097070133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:44:49.097210 containerd[1563]: time="2025-01-13T20:44:49.097087259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:44:49.097210 containerd[1563]: time="2025-01-13T20:44:49.097121273Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:44:49.097592 containerd[1563]: time="2025-01-13T20:44:49.097234039Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:44:49.097592 containerd[1563]: time="2025-01-13T20:44:49.097260487Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:44:49.097592 containerd[1563]: time="2025-01-13T20:44:49.097295395Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:44:49.097592 containerd[1563]: time="2025-01-13T20:44:49.097314755Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:44:49.097592 containerd[1563]: time="2025-01-13T20:44:49.097327361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 13 20:44:49.097592 containerd[1563]: time="2025-01-13T20:44:49.097348967Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:44:49.097592 containerd[1563]: time="2025-01-13T20:44:49.097362892Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:44:49.097592 containerd[1563]: time="2025-01-13T20:44:49.097376704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 20:44:49.098066 containerd[1563]: time="2025-01-13T20:44:49.097973612Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:44:49.098066 containerd[1563]: time="2025-01-13T20:44:49.098065719Z" level=info msg="Connect containerd service" Jan 13 20:44:49.098364 containerd[1563]: time="2025-01-13T20:44:49.098095316Z" level=info msg="using legacy CRI server" Jan 13 20:44:49.098364 containerd[1563]: time="2025-01-13T20:44:49.098106893Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:44:49.098364 containerd[1563]: 
time="2025-01-13T20:44:49.098296219Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:44:49.099590 containerd[1563]: time="2025-01-13T20:44:49.099535907Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:44:49.099806 containerd[1563]: time="2025-01-13T20:44:49.099769473Z" level=info msg="Start subscribing containerd event" Jan 13 20:44:49.099935 containerd[1563]: time="2025-01-13T20:44:49.099911265Z" level=info msg="Start recovering state" Jan 13 20:44:49.100146 containerd[1563]: time="2025-01-13T20:44:49.100122611Z" level=info msg="Start event monitor" Jan 13 20:44:49.100180 containerd[1563]: time="2025-01-13T20:44:49.100168378Z" level=info msg="Start snapshots syncer" Jan 13 20:44:49.100202 containerd[1563]: time="2025-01-13T20:44:49.100183364Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:44:49.100202 containerd[1563]: time="2025-01-13T20:44:49.100193621Z" level=info msg="Start streaming server" Jan 13 20:44:49.100853 containerd[1563]: time="2025-01-13T20:44:49.100128223Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:44:49.100853 containerd[1563]: time="2025-01-13T20:44:49.100493551Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:44:49.100684 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:44:49.101041 containerd[1563]: time="2025-01-13T20:44:49.101011666Z" level=info msg="containerd successfully booted in 1.121804s" Jan 13 20:44:49.705404 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:44:49.707182 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:44:49.709555 systemd[1]: Startup finished in 7.385s (kernel) + 6.464s (userspace) = 13.850s. Jan 13 20:44:49.732932 (kubelet)[1664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:44:50.496562 kubelet[1664]: E0113 20:44:50.496439 1664 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:44:50.500706 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:44:50.500978 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:44:55.588106 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:44:55.603735 systemd[1]: Started sshd@0-10.0.0.149:22-10.0.0.1:33064.service - OpenSSH per-connection server daemon (10.0.0.1:33064). Jan 13 20:44:55.649447 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 33064 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:44:55.651910 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:55.660677 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:44:55.669609 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:44:55.671457 systemd-logind[1542]: New session 1 of user core. 
Jan 13 20:44:55.683208 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:44:55.685419 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:44:55.693209 (systemd)[1684]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:44:55.804320 systemd[1684]: Queued start job for default target default.target. Jan 13 20:44:55.804792 systemd[1684]: Created slice app.slice - User Application Slice. Jan 13 20:44:55.804817 systemd[1684]: Reached target paths.target - Paths. Jan 13 20:44:55.804831 systemd[1684]: Reached target timers.target - Timers. Jan 13 20:44:55.814511 systemd[1684]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:44:55.820860 systemd[1684]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:44:55.820925 systemd[1684]: Reached target sockets.target - Sockets. Jan 13 20:44:55.820938 systemd[1684]: Reached target basic.target - Basic System. Jan 13 20:44:55.820974 systemd[1684]: Reached target default.target - Main User Target. Jan 13 20:44:55.821004 systemd[1684]: Startup finished in 120ms. Jan 13 20:44:55.821539 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:44:55.823515 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:44:55.881691 systemd[1]: Started sshd@1-10.0.0.149:22-10.0.0.1:33072.service - OpenSSH per-connection server daemon (10.0.0.1:33072). Jan 13 20:44:55.917030 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 33072 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:44:55.919036 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:55.923784 systemd-logind[1542]: New session 2 of user core. Jan 13 20:44:55.933674 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:44:55.990697 sshd[1699]: Connection closed by 10.0.0.1 port 33072 Jan 13 20:44:55.991088 sshd-session[1696]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:56.003839 systemd[1]: Started sshd@2-10.0.0.149:22-10.0.0.1:33084.service - OpenSSH per-connection server daemon (10.0.0.1:33084). Jan 13 20:44:56.004548 systemd[1]: sshd@1-10.0.0.149:22-10.0.0.1:33072.service: Deactivated successfully. Jan 13 20:44:56.006430 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:44:56.007112 systemd-logind[1542]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:44:56.008468 systemd-logind[1542]: Removed session 2. Jan 13 20:44:56.035730 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 33084 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:44:56.037230 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:56.041111 systemd-logind[1542]: New session 3 of user core. Jan 13 20:44:56.050669 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:44:56.100260 sshd[1707]: Connection closed by 10.0.0.1 port 33084 Jan 13 20:44:56.100627 sshd-session[1702]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:56.114662 systemd[1]: Started sshd@3-10.0.0.149:22-10.0.0.1:33092.service - OpenSSH per-connection server daemon (10.0.0.1:33092). Jan 13 20:44:56.115354 systemd[1]: sshd@2-10.0.0.149:22-10.0.0.1:33084.service: Deactivated successfully. Jan 13 20:44:56.117389 systemd[1]: session-3.scope: Deactivated successfully. 
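[Annotation] Each SSH login above follows the same pattern: sshd accepts the public key, PAM opens the session, and systemd-logind plus systemd create a numbered session scope under user-500.slice. A small parser for the "Accepted publickey" records (a sketch; the regex targets only the exact format shown in this log, with a sample line copied from it):

```python
import re

LINE = ("sshd[1696]: Accepted publickey for core from 10.0.0.1 port 33072 "
        "ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg")

PAT = re.compile(
    r"Accepted publickey for (?P<user>\S+) from (?P<addr>\S+) "
    r"port (?P<port>\d+) ssh2: (?P<keytype>\S+) (?P<fp>\S+)"
)

m = PAT.search(LINE)
if m:
    # user, source address/port, key type, and key fingerprint
    print(m.groupdict())
```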
Jan 13 20:44:56.118071 systemd-logind[1542]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:44:56.119431 systemd-logind[1542]: Removed session 3. Jan 13 20:44:56.148864 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 33092 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:44:56.150672 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:56.154856 systemd-logind[1542]: New session 4 of user core. Jan 13 20:44:56.165637 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:44:56.219596 sshd[1715]: Connection closed by 10.0.0.1 port 33092 Jan 13 20:44:56.219978 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:56.231669 systemd[1]: Started sshd@4-10.0.0.149:22-10.0.0.1:33106.service - OpenSSH per-connection server daemon (10.0.0.1:33106). Jan 13 20:44:56.232167 systemd[1]: sshd@3-10.0.0.149:22-10.0.0.1:33092.service: Deactivated successfully. Jan 13 20:44:56.234562 systemd-logind[1542]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:44:56.235627 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:44:56.236388 systemd-logind[1542]: Removed session 4. Jan 13 20:44:56.265328 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 33106 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:44:56.266921 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:56.270944 systemd-logind[1542]: New session 5 of user core. Jan 13 20:44:56.280657 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:44:56.340164 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:44:56.340528 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:44:56.359680 sudo[1724]: pam_unix(sudo:session): session closed for user root Jan 13 20:44:56.361606 sshd[1723]: Connection closed by 10.0.0.1 port 33106 Jan 13 20:44:56.361993 sshd-session[1717]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:56.370658 systemd[1]: Started sshd@5-10.0.0.149:22-10.0.0.1:33110.service - OpenSSH per-connection server daemon (10.0.0.1:33110). Jan 13 20:44:56.371446 systemd[1]: sshd@4-10.0.0.149:22-10.0.0.1:33106.service: Deactivated successfully. Jan 13 20:44:56.373233 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:44:56.373849 systemd-logind[1542]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:44:56.374903 systemd-logind[1542]: Removed session 5. Jan 13 20:44:56.402657 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 33110 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:44:56.404351 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:56.408559 systemd-logind[1542]: New session 6 of user core. Jan 13 20:44:56.418653 systemd[1]: Started session-6.scope - Session 6 of User core. 
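[Annotation] The sudo records in this stretch (setenforce 1, the audit-rules cleanup below) share a fixed field layout: invoking user, working directory, target user, and command, separated by " ; ". A parsing sketch built only on that layout:

```python
def parse_sudo(record: str) -> dict:
    # e.g. "core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1"
    user, _, rest = record.partition(" : ")
    fields = dict(part.strip().split("=", 1) for part in rest.split(" ; "))
    fields["invoking_user"] = user
    return fields

print(parse_sudo(
    "core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1"
))
# {'PWD': '/home/core', 'USER': 'root',
#  'COMMAND': '/usr/sbin/setenforce 1', 'invoking_user': 'core'}
```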
Jan 13 20:44:56.472731 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:44:56.473099 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:44:56.477625 sudo[1734]: pam_unix(sudo:session): session closed for user root Jan 13 20:44:56.484455 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:44:56.484781 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:44:56.510706 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:44:56.547277 augenrules[1756]: No rules Jan 13 20:44:56.549213 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:44:56.549590 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:44:56.551195 sudo[1733]: pam_unix(sudo:session): session closed for user root Jan 13 20:44:56.552938 sshd[1732]: Connection closed by 10.0.0.1 port 33110 Jan 13 20:44:56.553323 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Jan 13 20:44:56.565677 systemd[1]: Started sshd@6-10.0.0.149:22-10.0.0.1:33122.service - OpenSSH per-connection server daemon (10.0.0.1:33122). Jan 13 20:44:56.566660 systemd[1]: sshd@5-10.0.0.149:22-10.0.0.1:33110.service: Deactivated successfully. Jan 13 20:44:56.569460 systemd-logind[1542]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:44:56.570306 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:44:56.571427 systemd-logind[1542]: Removed session 6. Jan 13 20:44:56.600785 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 33122 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:44:56.602571 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:44:56.608186 systemd-logind[1542]: New session 7 of user core. Jan 13 20:44:56.617740 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:44:56.673963 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:44:56.674336 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:44:57.276812 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:44:57.276952 (dockerd)[1791]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:44:57.793114 dockerd[1791]: time="2025-01-13T20:44:57.793050442Z" level=info msg="Starting up" Jan 13 20:44:58.893296 dockerd[1791]: time="2025-01-13T20:44:58.893223883Z" level=info msg="Loading containers: start." Jan 13 20:44:59.065424 kernel: Initializing XFRM netlink socket Jan 13 20:44:59.155222 systemd-networkd[1243]: docker0: Link UP Jan 13 20:44:59.193772 dockerd[1791]: time="2025-01-13T20:44:59.193722744Z" level=info msg="Loading containers: done." Jan 13 20:44:59.216223 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4032700043-merged.mount: Deactivated successfully. 
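[Annotation] During the Docker startup above, the daemon initializes an XFRM netlink socket and creates its bridge, which systemd-networkd reports as "docker0: Link UP". The kernel exposes the same interface state under sysfs, so it can be checked directly (a sketch; assumes a Linux host where the bridge exists):

```python
from pathlib import Path

OPERSTATE = Path("/sys/class/net/docker0/operstate")

if OPERSTATE.exists():
    # Operational state as the kernel sees it (e.g. "up", "down").
    print("docker0:", OPERSTATE.read_text().strip())
else:
    print("docker0 bridge not present")
```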
Jan 13 20:44:59.216659 dockerd[1791]: time="2025-01-13T20:44:59.216521212Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:44:59.216659 dockerd[1791]: time="2025-01-13T20:44:59.216617557Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 20:44:59.216786 dockerd[1791]: time="2025-01-13T20:44:59.216735005Z" level=info msg="Daemon has completed initialization" Jan 13 20:44:59.252157 dockerd[1791]: time="2025-01-13T20:44:59.252094912Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:44:59.252283 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:45:00.183943 containerd[1563]: time="2025-01-13T20:45:00.183895713Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 20:45:00.751267 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:45:00.766627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:45:00.926907 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:45:00.932127 (kubelet)[2008]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:45:01.068823 kubelet[2008]: E0113 20:45:01.068634 2008 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:45:01.076751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:45:01.077090 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:45:01.495994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1094496722.mount: Deactivated successfully. 
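[Annotation] Once the daemon logs "API listen on /run/docker.sock", the Engine API is reachable over that unix socket; the docker CLI is just a client of it. A minimal raw probe of the API's `/_ping` endpoint (a sketch; must run as a user with access to the socket):

```python
import socket

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/run/docker.sock")
    # HTTP/1.0 so the daemon closes the connection after responding;
    # a healthy daemon answers /_ping with "OK".
    s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
    print(s.recv(4096).decode())
```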
Jan 13 20:45:02.963083 containerd[1563]: time="2025-01-13T20:45:02.963013107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:02.964061 containerd[1563]: time="2025-01-13T20:45:02.964026791Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Jan 13 20:45:02.965655 containerd[1563]: time="2025-01-13T20:45:02.965622125Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:02.968837 containerd[1563]: time="2025-01-13T20:45:02.968801760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:02.970091 containerd[1563]: time="2025-01-13T20:45:02.970032788Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 2.7860815s" Jan 13 20:45:02.970243 containerd[1563]: time="2025-01-13T20:45:02.970105407Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Jan 13 20:45:03.001305 containerd[1563]: time="2025-01-13T20:45:03.001257444Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 20:45:06.592688 containerd[1563]: time="2025-01-13T20:45:06.592602789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:06.623194 containerd[1563]: time="2025-01-13T20:45:06.623120615Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732" Jan 13 20:45:06.649226 containerd[1563]: time="2025-01-13T20:45:06.649174881Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:06.663643 containerd[1563]: time="2025-01-13T20:45:06.663585421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:06.664668 containerd[1563]: time="2025-01-13T20:45:06.664612941Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 3.663312695s" Jan 13 20:45:06.664668 containerd[1563]: time="2025-01-13T20:45:06.664661176Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Jan 13 
20:45:06.693304 containerd[1563]: time="2025-01-13T20:45:06.693267601Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 20:45:08.733531 containerd[1563]: time="2025-01-13T20:45:08.733465774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:08.736996 containerd[1563]: time="2025-01-13T20:45:08.736954464Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822" Jan 13 20:45:08.739590 containerd[1563]: time="2025-01-13T20:45:08.739567591Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:08.743623 containerd[1563]: time="2025-01-13T20:45:08.743577265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:08.744464 containerd[1563]: time="2025-01-13T20:45:08.744432348Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 2.050958186s" Jan 13 20:45:08.744512 containerd[1563]: time="2025-01-13T20:45:08.744463457Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Jan 13 20:45:08.766418 containerd[1563]: time="2025-01-13T20:45:08.766358560Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 20:45:10.036093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount208394195.mount: Deactivated successfully. 
Jan 13 20:45:11.059674 containerd[1563]: time="2025-01-13T20:45:11.059609577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:11.060994 containerd[1563]: time="2025-01-13T20:45:11.060949435Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Jan 13 20:45:11.062265 containerd[1563]: time="2025-01-13T20:45:11.062227888Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:11.065009 containerd[1563]: time="2025-01-13T20:45:11.064962588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:11.065821 containerd[1563]: time="2025-01-13T20:45:11.065755635Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.299342803s" Jan 13 20:45:11.065821 containerd[1563]: time="2025-01-13T20:45:11.065792316Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 13 20:45:11.092116 containerd[1563]: time="2025-01-13T20:45:11.092052921Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:45:11.327537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:45:11.337591 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:45:11.472475 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:45:11.477006 (kubelet)[2124]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:45:11.674883 kubelet[2124]: E0113 20:45:11.674649 2124 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:45:11.679585 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:45:11.679876 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:45:12.293106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount127037766.mount: Deactivated successfully. 
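[Annotation] The kubelet restart cadence visible here (failure at 20:45:01.077, next scheduled restart at 20:45:11.327, counter at 2) is systemd's Restart= logic re-queuing the unit roughly every ten seconds while the config file is still missing. Deriving the interval from the journal timestamps (a sketch; both timestamps copied from the surrounding records):

```python
from datetime import datetime

FMT = "%H:%M:%S.%f"
failed = datetime.strptime("20:45:01.077090", FMT)   # kubelet.service: Failed ...
restart = datetime.strptime("20:45:11.327537", FMT)  # Scheduled restart job ...
print(f"restart delay ~ {(restart - failed).total_seconds():.1f}s")
# restart delay ~ 10.3s
```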
Jan 13 20:45:13.138253 containerd[1563]: time="2025-01-13T20:45:13.138193311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:13.139078 containerd[1563]: time="2025-01-13T20:45:13.139047631Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 20:45:13.140285 containerd[1563]: time="2025-01-13T20:45:13.140247530Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:13.143294 containerd[1563]: time="2025-01-13T20:45:13.143249414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:13.144562 containerd[1563]: time="2025-01-13T20:45:13.144508944Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.052405021s" Jan 13 20:45:13.144611 containerd[1563]: time="2025-01-13T20:45:13.144560850Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 20:45:13.165593 containerd[1563]: time="2025-01-13T20:45:13.165559445Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 20:45:14.089368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2518503079.mount: Deactivated successfully. 
Jan 13 20:45:14.095770 containerd[1563]: time="2025-01-13T20:45:14.095729270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:14.096494 containerd[1563]: time="2025-01-13T20:45:14.096438957Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 13 20:45:14.097718 containerd[1563]: time="2025-01-13T20:45:14.097667803Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:14.099985 containerd[1563]: time="2025-01-13T20:45:14.099951989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:14.100867 containerd[1563]: time="2025-01-13T20:45:14.100833655Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 935.240076ms" Jan 13 20:45:14.100913 containerd[1563]: time="2025-01-13T20:45:14.100867463Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 20:45:14.123532 containerd[1563]: time="2025-01-13T20:45:14.123484598Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 20:45:15.190587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3509368610.mount: Deactivated successfully. Jan 13 20:45:18.047245 containerd[1563]: time="2025-01-13T20:45:18.047145428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:18.049398 containerd[1563]: time="2025-01-13T20:45:18.049293298Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jan 13 20:45:18.050323 containerd[1563]: time="2025-01-13T20:45:18.050267177Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:18.055397 containerd[1563]: time="2025-01-13T20:45:18.055344275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:18.056543 containerd[1563]: time="2025-01-13T20:45:18.056457184Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.932929022s" Jan 13 20:45:18.056591 containerd[1563]: time="2025-01-13T20:45:18.056549217Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 13 20:45:20.968718 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
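[Annotation] Each PullImage result above logs a compressed image size and wall-clock duration, so the effective pull throughput is easy to derive (a rough sketch ignoring per-layer overlap; all sizes and durations are copied from the log records above):

```python
pulls = {
    "kube-apiserver:v1.29.12":          (35136054, 2.7860815),
    "kube-controller-manager:v1.29.12": (33662844, 3.663312695),
    "kube-scheduler:v1.29.12":          (18777952, 2.050958186),
    "kube-proxy:v1.29.12":              (28618977, 2.299342803),
    "coredns:v1.11.1":                  (18182961, 2.052405021),
    "pause:3.9":                        (321520,   0.935240076),
    "etcd:3.5.10-0":                    (56649232, 3.932929022),
}

for image, (size_bytes, secs) in pulls.items():
    print(f"{image}: {size_bytes / secs / 2**20:.1f} MiB/s")
# e.g. kube-apiserver:v1.29.12: 12.0 MiB/s
```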
Jan 13 20:45:20.985675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:45:21.003295 systemd[1]: Reloading requested from client PID 2328 ('systemctl') (unit session-7.scope)... Jan 13 20:45:21.003322 systemd[1]: Reloading... Jan 13 20:45:21.089486 zram_generator::config[2371]: No configuration found. Jan 13 20:45:21.385372 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:45:21.472704 systemd[1]: Reloading finished in 468 ms. Jan 13 20:45:21.521675 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:45:21.521780 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:45:21.522189 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:45:21.525506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:45:21.674893 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:45:21.691004 (kubelet)[2428]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:45:21.740502 kubelet[2428]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:45:21.740502 kubelet[2428]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:45:21.740502 kubelet[2428]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:45:21.740941 kubelet[2428]: I0113 20:45:21.740550 2428 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:45:22.123313 kubelet[2428]: I0113 20:45:22.123190 2428 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:45:22.123313 kubelet[2428]: I0113 20:45:22.123226 2428 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:45:22.123640 kubelet[2428]: I0113 20:45:22.123569 2428 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:45:22.146521 kubelet[2428]: E0113 20:45:22.146471 2428 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.149:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.149:6443: connect: connection refused Jan 13 20:45:22.149416 kubelet[2428]: I0113 20:45:22.149392 2428 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:45:22.161128 kubelet[2428]: I0113 20:45:22.161093 2428 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:45:22.162347 kubelet[2428]: I0113 20:45:22.162324 2428 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:45:22.162568 kubelet[2428]: I0113 20:45:22.162543 2428 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:45:22.162732 kubelet[2428]: I0113 20:45:22.162572 2428 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:45:22.162732 kubelet[2428]: I0113 20:45:22.162584 2428 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:45:22.162732 kubelet[2428]: I0113 20:45:22.162709 2428 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:45:22.162824 kubelet[2428]: I0113 20:45:22.162810 2428 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:45:22.162824 kubelet[2428]: I0113 20:45:22.162825 2428 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:45:22.162878 kubelet[2428]: I0113 20:45:22.162856 2428 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:45:22.162878 kubelet[2428]: I0113 20:45:22.162868 2428 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:45:22.163512 kubelet[2428]: W0113 20:45:22.163397 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 13 20:45:22.163512 kubelet[2428]: W0113 20:45:22.163397 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.149:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 13 20:45:22.163512 kubelet[2428]: E0113 20:45:22.163480 2428 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused 
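[Annotation] Every reflector and certificate-manager error in this block reduces to the same root cause: nothing is listening on 10.0.0.149:6443 yet, because the kubelet itself has to launch the static control-plane pods it admits further down in the log. The failing dial is easy to reproduce (a sketch; address and port taken from the log):

```python
import socket

try:
    # Same endpoint the client-go reflectors above are dialing.
    socket.create_connection(("10.0.0.149", 6443), timeout=2).close()
    print("apiserver reachable")
except OSError as e:
    print("connect failed:", e)  # e.g. [Errno 111] Connection refused
```

Once kube-apiserver comes up from its static pod manifest, the same probe succeeds and the reflectors stop logging "connection refused".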
Jan 13 20:45:22.163512 kubelet[2428]: E0113 20:45:22.163489 2428 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.149:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 13 20:45:22.164020 kubelet[2428]: I0113 20:45:22.164000 2428 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:45:22.166571 kubelet[2428]: I0113 20:45:22.166538 2428 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:45:22.167779 kubelet[2428]: W0113 20:45:22.167748 2428 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 20:45:22.168465 kubelet[2428]: I0113 20:45:22.168410 2428 server.go:1256] "Started kubelet" Jan 13 20:45:22.169079 kubelet[2428]: I0113 20:45:22.168846 2428 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:45:22.175331 kubelet[2428]: I0113 20:45:22.175309 2428 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:45:22.175594 kubelet[2428]: I0113 20:45:22.175549 2428 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:45:22.177723 kubelet[2428]: I0113 20:45:22.177514 2428 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:45:22.177980 kubelet[2428]: E0113 20:45:22.177955 2428 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.149:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.149:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5b6b2c587c5a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 20:45:22.168372314 +0000 UTC m=+0.472839539,LastTimestamp:2025-01-13 20:45:22.168372314 +0000 UTC m=+0.472839539,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 20:45:22.178078 kubelet[2428]: I0113 20:45:22.178041 2428 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:45:22.178198 kubelet[2428]: I0113 20:45:22.178180 2428 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:45:22.178295 kubelet[2428]: I0113 20:45:22.178278 2428 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:45:22.178631 kubelet[2428]: W0113 20:45:22.178590 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 13 20:45:22.178671 kubelet[2428]: E0113 20:45:22.178638 2428 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 13 20:45:22.179295 kubelet[2428]: E0113 20:45:22.179173 2428 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="200ms" Jan 13 20:45:22.181529 kubelet[2428]: I0113 20:45:22.180195 2428 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:45:22.181529 kubelet[2428]: E0113 20:45:22.180548 2428 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:45:22.181529 kubelet[2428]: I0113 20:45:22.180727 2428 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:45:22.181529 kubelet[2428]: I0113 20:45:22.180838 2428 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:45:22.182775 kubelet[2428]: I0113 20:45:22.182751 2428 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:45:22.196799 kubelet[2428]: I0113 20:45:22.196640 2428 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:45:22.198255 kubelet[2428]: I0113 20:45:22.198237 2428 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:45:22.198353 kubelet[2428]: I0113 20:45:22.198341 2428 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:45:22.202769 kubelet[2428]: I0113 20:45:22.202742 2428 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:45:22.202951 kubelet[2428]: E0113 20:45:22.202938 2428 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:45:22.204561 kubelet[2428]: W0113 20:45:22.204526 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 13 20:45:22.204624 kubelet[2428]: E0113 20:45:22.204565 2428 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 13 20:45:22.214518 kubelet[2428]: I0113 20:45:22.214476 2428 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:45:22.214518 kubelet[2428]: I0113 20:45:22.214516 2428 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:45:22.214674 kubelet[2428]: I0113 20:45:22.214534 2428 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:45:22.279458 kubelet[2428]: I0113 20:45:22.279425 2428 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:45:22.279814 kubelet[2428]: E0113 20:45:22.279795 2428 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Jan 13 20:45:22.304076 kubelet[2428]: E0113 20:45:22.304037 2428 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:45:22.379890 kubelet[2428]: E0113 20:45:22.379792 
2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="400ms" Jan 13 20:45:22.481358 kubelet[2428]: I0113 20:45:22.481325 2428 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:45:22.481711 kubelet[2428]: E0113 20:45:22.481675 2428 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Jan 13 20:45:22.504923 kubelet[2428]: E0113 20:45:22.504896 2428 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:45:22.781358 kubelet[2428]: E0113 20:45:22.781190 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="800ms" Jan 13 20:45:22.831745 kubelet[2428]: I0113 20:45:22.831681 2428 policy_none.go:49] "None policy: Start" Jan 13 20:45:22.832691 kubelet[2428]: I0113 20:45:22.832658 2428 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:45:22.832691 kubelet[2428]: I0113 20:45:22.832698 2428 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:45:22.877685 kubelet[2428]: I0113 20:45:22.877652 2428 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:45:22.878307 kubelet[2428]: I0113 20:45:22.877945 2428 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:45:22.879472 kubelet[2428]: E0113 20:45:22.879456 2428 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 20:45:22.883726 kubelet[2428]: I0113 20:45:22.883679 2428 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:45:22.884101 kubelet[2428]: E0113 20:45:22.884073 2428 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Jan 13 20:45:22.905408 kubelet[2428]: I0113 20:45:22.905355 2428 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 20:45:22.906639 kubelet[2428]: I0113 20:45:22.906607 2428 topology_manager.go:215] "Topology Admit Handler" podUID="4b1b27f1502c3f173e7ff7098b2f17e3" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 20:45:22.907451 kubelet[2428]: I0113 20:45:22.907431 2428 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 20:45:22.982340 kubelet[2428]: I0113 20:45:22.982279 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b1b27f1502c3f173e7ff7098b2f17e3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4b1b27f1502c3f173e7ff7098b2f17e3\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:45:22.982340 kubelet[2428]: I0113 
20:45:22.982335 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:45:22.982536 kubelet[2428]: I0113 20:45:22.982366 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:45:22.982536 kubelet[2428]: I0113 20:45:22.982410 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:45:22.982536 kubelet[2428]: I0113 20:45:22.982470 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b1b27f1502c3f173e7ff7098b2f17e3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b1b27f1502c3f173e7ff7098b2f17e3\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:45:22.982536 kubelet[2428]: I0113 20:45:22.982496 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b1b27f1502c3f173e7ff7098b2f17e3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b1b27f1502c3f173e7ff7098b2f17e3\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:45:22.982536 kubelet[2428]: I0113 20:45:22.982523 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:45:22.982678 kubelet[2428]: I0113 20:45:22.982560 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:45:22.982678 kubelet[2428]: I0113 20:45:22.982614 2428 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:45:23.185733 kubelet[2428]: W0113 20:45:23.185604 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 13 20:45:23.185733 kubelet[2428]: E0113 20:45:23.185660 2428 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 13 20:45:23.210811 kubelet[2428]: E0113 20:45:23.210776 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:23.211349 containerd[1563]: time="2025-01-13T20:45:23.211293716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Jan 13 20:45:23.213456 kubelet[2428]: E0113 20:45:23.213432 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:23.213540 kubelet[2428]: E0113 20:45:23.213524 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:23.213709 containerd[1563]: time="2025-01-13T20:45:23.213686335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4b1b27f1502c3f173e7ff7098b2f17e3,Namespace:kube-system,Attempt:0,}" Jan 13 20:45:23.213821 containerd[1563]: time="2025-01-13T20:45:23.213795659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Jan 13 20:45:23.314856 kubelet[2428]: W0113 20:45:23.314789 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.149:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 13 20:45:23.314856 kubelet[2428]: E0113 20:45:23.314845 2428 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.149:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 13 20:45:23.341130 kubelet[2428]: W0113 20:45:23.341091 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 13 20:45:23.341130 kubelet[2428]: E0113 20:45:23.341119 2428 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 13 20:45:23.581881 kubelet[2428]: E0113 20:45:23.581753 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="1.6s" Jan 13 20:45:23.685761 kubelet[2428]: I0113 20:45:23.685731 2428 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:45:23.686090 kubelet[2428]: E0113 20:45:23.686064 2428 kubelet_node_status.go:96] "Unable to register node with API server" err="Post 
\"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Jan 13 20:45:23.724748 kubelet[2428]: W0113 20:45:23.724672 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 13 20:45:23.724748 kubelet[2428]: E0113 20:45:23.724737 2428 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 13 20:45:24.243470 kubelet[2428]: E0113 20:45:24.243422 2428 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.149:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.149:6443: connect: connection refused Jan 13 20:45:24.830321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2560019710.mount: Deactivated successfully. Jan 13 20:45:24.842136 containerd[1563]: time="2025-01-13T20:45:24.842054501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:45:24.847145 containerd[1563]: time="2025-01-13T20:45:24.847052526Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 20:45:24.848875 containerd[1563]: time="2025-01-13T20:45:24.848794803Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:45:24.851480 containerd[1563]: time="2025-01-13T20:45:24.851442441Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:45:24.852673 containerd[1563]: time="2025-01-13T20:45:24.852632480Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:45:24.853777 containerd[1563]: time="2025-01-13T20:45:24.853735932Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:45:24.854794 containerd[1563]: time="2025-01-13T20:45:24.854704084Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:45:24.858335 containerd[1563]: time="2025-01-13T20:45:24.857070746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:45:24.858335 containerd[1563]: time="2025-01-13T20:45:24.858303702Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", 
repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.646869385s" Jan 13 20:45:24.860347 containerd[1563]: time="2025-01-13T20:45:24.860299811Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.64643846s" Jan 13 20:45:24.865512 containerd[1563]: time="2025-01-13T20:45:24.865454755Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.651705946s" Jan 13 20:45:24.987148 kubelet[2428]: W0113 20:45:24.987057 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 13 20:45:24.987148 kubelet[2428]: E0113 20:45:24.987115 2428 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 13 20:45:25.123528 containerd[1563]: time="2025-01-13T20:45:25.123303955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:45:25.123528 containerd[1563]: time="2025-01-13T20:45:25.123405918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:45:25.123803 containerd[1563]: time="2025-01-13T20:45:25.123424648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:45:25.123803 containerd[1563]: time="2025-01-13T20:45:25.123561656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:45:25.125243 containerd[1563]: time="2025-01-13T20:45:25.122949955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:45:25.125243 containerd[1563]: time="2025-01-13T20:45:25.125038851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:45:25.125243 containerd[1563]: time="2025-01-13T20:45:25.125056559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:45:25.125243 containerd[1563]: time="2025-01-13T20:45:25.125173412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:45:25.130188 containerd[1563]: time="2025-01-13T20:45:25.129907129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:45:25.130188 containerd[1563]: time="2025-01-13T20:45:25.129975271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:45:25.130188 containerd[1563]: time="2025-01-13T20:45:25.129989991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:45:25.130188 containerd[1563]: time="2025-01-13T20:45:25.130135942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:45:25.216964 kubelet[2428]: E0113 20:45:25.216922 2428 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="3.2s" Jan 13 20:45:25.274907 containerd[1563]: time="2025-01-13T20:45:25.274855607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b0c2e2340a70bc4a3eedf7a002b1eed4481410834e6a0ec1e22293cfa94318b\"" Jan 13 20:45:25.276439 containerd[1563]: time="2025-01-13T20:45:25.276096558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4b1b27f1502c3f173e7ff7098b2f17e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"23d47211a60ea50de146043c46d4dd41e866f50a4f070c04c26aaad9ac13bcdc\"" Jan 13 20:45:25.277205 kubelet[2428]: E0113 20:45:25.277049 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:25.277683 kubelet[2428]: E0113 20:45:25.277258 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:25.280533 containerd[1563]: time="2025-01-13T20:45:25.280502484Z" level=info msg="CreateContainer within sandbox \"2b0c2e2340a70bc4a3eedf7a002b1eed4481410834e6a0ec1e22293cfa94318b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:45:25.280648 containerd[1563]: time="2025-01-13T20:45:25.280608619Z" level=info msg="CreateContainer within sandbox \"23d47211a60ea50de146043c46d4dd41e866f50a4f070c04c26aaad9ac13bcdc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:45:25.283211 containerd[1563]: time="2025-01-13T20:45:25.283173252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0ef87aa14c12a951af3598f74382f80a0946bdb8919a0dd8d47445aeacd7379\"" Jan 13 20:45:25.284039 kubelet[2428]: E0113 20:45:25.284013 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:25.286970 containerd[1563]: time="2025-01-13T20:45:25.286900095Z" level=info msg="CreateContainer within sandbox \"d0ef87aa14c12a951af3598f74382f80a0946bdb8919a0dd8d47445aeacd7379\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:45:25.287061 kubelet[2428]: I0113 20:45:25.287039 2428 
kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:45:25.287521 kubelet[2428]: E0113 20:45:25.287477 2428 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost" Jan 13 20:45:25.357850 containerd[1563]: time="2025-01-13T20:45:25.357791391Z" level=info msg="CreateContainer within sandbox \"2b0c2e2340a70bc4a3eedf7a002b1eed4481410834e6a0ec1e22293cfa94318b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3ed4f41e8eceb34c4729c31336d042d1075fb1287ad17ce17f1e6d971e621180\"" Jan 13 20:45:25.358611 containerd[1563]: time="2025-01-13T20:45:25.358585903Z" level=info msg="StartContainer for \"3ed4f41e8eceb34c4729c31336d042d1075fb1287ad17ce17f1e6d971e621180\"" Jan 13 20:45:25.388123 kubelet[2428]: W0113 20:45:25.387984 2428 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 13 20:45:25.388123 kubelet[2428]: E0113 20:45:25.388056 2428 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.149:6443: connect: connection refused Jan 13 20:45:25.403013 containerd[1563]: time="2025-01-13T20:45:25.402936202Z" level=info msg="CreateContainer within sandbox \"d0ef87aa14c12a951af3598f74382f80a0946bdb8919a0dd8d47445aeacd7379\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7dcf7565aa7396fb558d969aa310af2894b01e8ec75477d3ceb325bd64276aac\"" Jan 13 20:45:25.404000 containerd[1563]: time="2025-01-13T20:45:25.403960090Z" level=info msg="StartContainer for \"7dcf7565aa7396fb558d969aa310af2894b01e8ec75477d3ceb325bd64276aac\"" Jan 13 20:45:25.422821 containerd[1563]: time="2025-01-13T20:45:25.422768774Z" level=info msg="CreateContainer within sandbox \"23d47211a60ea50de146043c46d4dd41e866f50a4f070c04c26aaad9ac13bcdc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"703ac62b83b0466664ded536e0388d28613b84563a185f20691b2a9954bf28b5\"" Jan 13 20:45:25.423806 containerd[1563]: time="2025-01-13T20:45:25.423756294Z" level=info msg="StartContainer for \"703ac62b83b0466664ded536e0388d28613b84563a185f20691b2a9954bf28b5\"" Jan 13 20:45:25.461349 containerd[1563]: time="2025-01-13T20:45:25.461283629Z" level=info msg="StartContainer for \"3ed4f41e8eceb34c4729c31336d042d1075fb1287ad17ce17f1e6d971e621180\" returns successfully" Jan 13 20:45:25.548633 containerd[1563]: time="2025-01-13T20:45:25.548482344Z" level=info msg="StartContainer for \"7dcf7565aa7396fb558d969aa310af2894b01e8ec75477d3ceb325bd64276aac\" returns successfully" Jan 13 20:45:25.548633 containerd[1563]: time="2025-01-13T20:45:25.548586384Z" level=info msg="StartContainer for \"703ac62b83b0466664ded536e0388d28613b84563a185f20691b2a9954bf28b5\" returns successfully" Jan 13 20:45:26.225202 kubelet[2428]: E0113 20:45:26.224950 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:26.227526 kubelet[2428]: E0113 20:45:26.227263 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:26.229236 kubelet[2428]: E0113 20:45:26.229194 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:27.217468 kubelet[2428]: I0113 20:45:27.217405 2428 apiserver.go:52] "Watching apiserver" Jan 13 20:45:27.231879 kubelet[2428]: E0113 20:45:27.231844 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:27.232928 kubelet[2428]: E0113 20:45:27.232777 2428 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:27.279495 kubelet[2428]: I0113 20:45:27.279431 2428 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:45:27.481330 kubelet[2428]: E0113 20:45:27.481199 2428 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 13 20:45:27.833332 kubelet[2428]: E0113 20:45:27.833214 2428 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 13 20:45:28.335210 kubelet[2428]: E0113 20:45:28.335172 2428 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 13 20:45:28.420751 kubelet[2428]: E0113 20:45:28.420697 2428 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 13 20:45:28.489139 kubelet[2428]: I0113 20:45:28.489087 2428 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:45:28.496144 kubelet[2428]: I0113 20:45:28.495961 2428 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 20:45:29.774182 systemd[1]: Reloading requested from client PID 2711 ('systemctl') (unit session-7.scope)... Jan 13 20:45:29.774198 systemd[1]: Reloading... Jan 13 20:45:29.856410 zram_generator::config[2753]: No configuration found. Jan 13 20:45:29.974923 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:45:30.057453 systemd[1]: Reloading finished in 282 ms. Jan 13 20:45:30.094797 kubelet[2428]: I0113 20:45:30.094755 2428 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:45:30.094823 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:45:30.112756 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:45:30.113179 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:45:30.129628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:45:30.272471 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
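The stop/start pair above is the standard systemd restart sequence following a daemon-reload. As a hedged aside — go-systemd appears nowhere in this log, and this is only one common way to script the same calls — the sequence can be driven over systemd's D-Bus API with github.com/coreos/go-systemd/v22 (unit name taken from the log; requires root, like the systemctl invocation from session-7):

package main

import (
	"context"
	"fmt"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()
	conn, err := dbus.NewWithContext(ctx)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Equivalent of the `systemctl daemon-reload` requested by PID 2711 above.
	if err := conn.ReloadContext(ctx); err != nil {
		panic(err)
	}

	// Restart kubelet.service; the channel receives the queued job's result.
	done := make(chan string, 1)
	if _, err := conn.RestartUnitContext(ctx, "kubelet.service", "replace", done); err != nil {
		panic(err)
	}
	fmt.Println("restart job:", <-done)
}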
Jan 13 20:45:30.277039 (kubelet)[2805]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:45:30.324517 kubelet[2805]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:45:30.324517 kubelet[2805]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:45:30.324517 kubelet[2805]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:45:30.324913 kubelet[2805]: I0113 20:45:30.324487 2805 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:45:30.330860 kubelet[2805]: I0113 20:45:30.330813 2805 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:45:30.330860 kubelet[2805]: I0113 20:45:30.330839 2805 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:45:30.331090 kubelet[2805]: I0113 20:45:30.331070 2805 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:45:30.332539 kubelet[2805]: I0113 20:45:30.332517 2805 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:45:30.334765 kubelet[2805]: I0113 20:45:30.334701 2805 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:45:30.344695 kubelet[2805]: I0113 20:45:30.344656 2805 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:45:30.345360 kubelet[2805]: I0113 20:45:30.345332 2805 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:45:30.345596 kubelet[2805]: I0113 20:45:30.345571 2805 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:45:30.345751 kubelet[2805]: I0113 20:45:30.345601 2805 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:45:30.345751 kubelet[2805]: I0113 20:45:30.345613 2805 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:45:30.345751 kubelet[2805]: I0113 20:45:30.345666 2805 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:45:30.345847 kubelet[2805]: I0113 20:45:30.345781 2805 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:45:30.345847 kubelet[2805]: I0113 20:45:30.345799 2805 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:45:30.345847 kubelet[2805]: I0113 20:45:30.345832 2805 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:45:30.345932 kubelet[2805]: I0113 20:45:30.345858 2805 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:45:30.347540 kubelet[2805]: I0113 20:45:30.346514 2805 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:45:30.347540 kubelet[2805]: I0113 20:45:30.346788 2805 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:45:30.347540 kubelet[2805]: I0113 20:45:30.347520 2805 server.go:1256] "Started kubelet" Jan 13 20:45:30.350403 kubelet[2805]: I0113 20:45:30.347795 2805 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:45:30.350403 kubelet[2805]: I0113 20:45:30.347877 2805 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:45:30.350403 kubelet[2805]: I0113 20:45:30.348184 2805 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:45:30.350403 kubelet[2805]: 
I0113 20:45:30.349039 2805 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:45:30.351140 kubelet[2805]: I0113 20:45:30.351116 2805 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:45:30.351283 kubelet[2805]: E0113 20:45:30.351262 2805 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:45:30.361530 kubelet[2805]: E0113 20:45:30.360976 2805 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:45:30.361530 kubelet[2805]: I0113 20:45:30.361035 2805 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:45:30.361530 kubelet[2805]: I0113 20:45:30.361066 2805 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:45:30.361733 kubelet[2805]: I0113 20:45:30.361563 2805 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:45:30.362434 kubelet[2805]: I0113 20:45:30.362296 2805 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:45:30.362434 kubelet[2805]: I0113 20:45:30.362404 2805 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:45:30.368162 kubelet[2805]: I0113 20:45:30.367400 2805 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:45:30.379430 kubelet[2805]: I0113 20:45:30.379165 2805 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:45:30.380904 kubelet[2805]: I0113 20:45:30.380880 2805 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:45:30.380987 kubelet[2805]: I0113 20:45:30.380913 2805 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:45:30.380987 kubelet[2805]: I0113 20:45:30.380938 2805 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:45:30.381057 kubelet[2805]: E0113 20:45:30.381004 2805 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:45:30.424364 kubelet[2805]: I0113 20:45:30.424338 2805 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:45:30.424364 kubelet[2805]: I0113 20:45:30.424359 2805 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:45:30.424364 kubelet[2805]: I0113 20:45:30.424374 2805 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:45:30.424549 kubelet[2805]: I0113 20:45:30.424526 2805 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:45:30.424549 kubelet[2805]: I0113 20:45:30.424544 2805 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:45:30.424589 kubelet[2805]: I0113 20:45:30.424555 2805 policy_none.go:49] "None policy: Start" Jan 13 20:45:30.425209 kubelet[2805]: I0113 20:45:30.425186 2805 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:45:30.425245 kubelet[2805]: I0113 20:45:30.425221 2805 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:45:30.425430 kubelet[2805]: I0113 20:45:30.425416 2805 state_mem.go:75] "Updated machine memory state" Jan 13 20:45:30.427085 kubelet[2805]: I0113 20:45:30.427068 2805 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:45:30.427351 kubelet[2805]: I0113 20:45:30.427329 2805 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:45:30.466254 kubelet[2805]: I0113 20:45:30.466227 2805 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:45:30.481536 kubelet[2805]: I0113 20:45:30.481503 2805 topology_manager.go:215] "Topology Admit Handler" podUID="4b1b27f1502c3f173e7ff7098b2f17e3" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 20:45:30.481661 kubelet[2805]: I0113 20:45:30.481599 2805 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 20:45:30.481661 kubelet[2805]: I0113 20:45:30.481628 2805 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 20:45:30.552125 kubelet[2805]: I0113 20:45:30.551894 2805 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 13 20:45:30.552125 kubelet[2805]: I0113 20:45:30.551991 2805 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 20:45:30.562669 kubelet[2805]: I0113 20:45:30.562617 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b1b27f1502c3f173e7ff7098b2f17e3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4b1b27f1502c3f173e7ff7098b2f17e3\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:45:30.562809 kubelet[2805]: I0113 20:45:30.562686 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:45:30.562809 kubelet[2805]: I0113 20:45:30.562713 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b1b27f1502c3f173e7ff7098b2f17e3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b1b27f1502c3f173e7ff7098b2f17e3\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:45:30.562809 kubelet[2805]: I0113 20:45:30.562737 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:45:30.562809 kubelet[2805]: I0113 20:45:30.562768 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:45:30.562809 kubelet[2805]: I0113 20:45:30.562793 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:45:30.562984 kubelet[2805]: I0113 20:45:30.562816 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:45:30.562984 kubelet[2805]: I0113 20:45:30.562842 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:45:30.562984 kubelet[2805]: I0113 20:45:30.562869 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b1b27f1502c3f173e7ff7098b2f17e3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b1b27f1502c3f173e7ff7098b2f17e3\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:45:30.848664 kubelet[2805]: E0113 20:45:30.848622 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:30.852308 kubelet[2805]: E0113 20:45:30.852280 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:30.852627 kubelet[2805]: E0113 20:45:30.852596 
2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:31.347131 kubelet[2805]: I0113 20:45:31.346958 2805 apiserver.go:52] "Watching apiserver" Jan 13 20:45:31.362680 kubelet[2805]: I0113 20:45:31.362627 2805 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:45:31.398070 kubelet[2805]: E0113 20:45:31.396667 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:31.398070 kubelet[2805]: E0113 20:45:31.396683 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:31.399977 kubelet[2805]: E0113 20:45:31.399902 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:31.424592 kubelet[2805]: I0113 20:45:31.424545 2805 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.424493958 podStartE2EDuration="1.424493958s" podCreationTimestamp="2025-01-13 20:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:45:31.423664225 +0000 UTC m=+1.142512172" watchObservedRunningTime="2025-01-13 20:45:31.424493958 +0000 UTC m=+1.143341895" Jan 13 20:45:31.442479 kubelet[2805]: I0113 20:45:31.442426 2805 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.442340891 podStartE2EDuration="1.442340891s" podCreationTimestamp="2025-01-13 20:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:45:31.439509298 +0000 UTC m=+1.158357245" watchObservedRunningTime="2025-01-13 20:45:31.442340891 +0000 UTC m=+1.161188838" Jan 13 20:45:31.473105 kubelet[2805]: I0113 20:45:31.472873 2805 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.472816224 podStartE2EDuration="1.472816224s" podCreationTimestamp="2025-01-13 20:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:45:31.452454638 +0000 UTC m=+1.171302585" watchObservedRunningTime="2025-01-13 20:45:31.472816224 +0000 UTC m=+1.191664181" Jan 13 20:45:31.895304 update_engine[1546]: I20250113 20:45:31.894280 1546 update_attempter.cc:509] Updating boot flags... 
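The podStartSLOduration figures in the latency-tracker entries above are plain timestamp arithmetic: observedRunningTime minus podCreationTimestamp (the pull timestamps are the zero value, presumably because the control-plane images were already present locally). A quick check of the kube-apiserver-localhost figure, with both timestamps copied verbatim from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, _ := time.Parse(layout, "2025-01-13 20:45:30 +0000 UTC")
	running, _ := time.Parse(layout, "2025-01-13 20:45:31.424493958 +0000 UTC")
	// Prints 1.424493958s — exactly the podStartSLOduration logged
	// for kube-system/kube-apiserver-localhost.
	fmt.Println(running.Sub(created))
}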
Jan 13 20:45:31.969415 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2856) Jan 13 20:45:32.034443 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2858) Jan 13 20:45:32.398802 kubelet[2805]: E0113 20:45:32.398749 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:32.401440 kubelet[2805]: E0113 20:45:32.401419 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:34.971847 sudo[1770]: pam_unix(sudo:session): session closed for user root Jan 13 20:45:34.973591 sshd[1769]: Connection closed by 10.0.0.1 port 33122 Jan 13 20:45:34.974649 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Jan 13 20:45:34.979503 systemd[1]: sshd@6-10.0.0.149:22-10.0.0.1:33122.service: Deactivated successfully. Jan 13 20:45:34.982354 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:45:34.983092 systemd-logind[1542]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:45:34.983933 systemd-logind[1542]: Removed session 7. Jan 13 20:45:36.307331 kubelet[2805]: E0113 20:45:36.307287 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:36.404867 kubelet[2805]: E0113 20:45:36.404844 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:36.512667 kubelet[2805]: E0113 20:45:36.512635 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:37.406605 kubelet[2805]: E0113 20:45:37.406307 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:37.406605 kubelet[2805]: E0113 20:45:37.406573 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:41.546554 kubelet[2805]: E0113 20:45:41.546523 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:45:43.426838 kubelet[2805]: I0113 20:45:43.426608 2805 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:45:43.427864 containerd[1563]: time="2025-01-13T20:45:43.427729135Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
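The CIDR the kubelet pushes through CRI here, 192.168.0.0/24, bounds the pod IPs this node can allocate (the Pod CIDR update entry that follows records the same value). A small standard-library sketch of inspecting it — the CIDR string is the one from the log, the rest is illustration:

package main

import (
	"fmt"
	"net"
)

func main() {
	ip, ipnet, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	// A /24 leaves 8 host bits: 256 addresses, minus network and broadcast.
	fmt.Printf("network %s, base %s, %d usable pod IPs\n",
		ipnet, ip, (1<<(bits-ones))-2)
}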
Jan 13 20:45:43.428225 kubelet[2805]: I0113 20:45:43.428060 2805 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 13 20:45:44.696165 kubelet[2805]: I0113 20:45:44.694255 2805 topology_manager.go:215] "Topology Admit Handler" podUID="4ac749c1-d7bd-4076-a7ec-1bdf1ba99d42" podNamespace="kube-system" podName="kube-proxy-kjcrx"
Jan 13 20:45:44.724452 kubelet[2805]: I0113 20:45:44.724407 2805 topology_manager.go:215] "Topology Admit Handler" podUID="52b8b866-2166-4542-8189-edac4ae16748" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-xgqqk"
Jan 13 20:45:44.849509 kubelet[2805]: I0113 20:45:44.849449 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4ac749c1-d7bd-4076-a7ec-1bdf1ba99d42-kube-proxy\") pod \"kube-proxy-kjcrx\" (UID: \"4ac749c1-d7bd-4076-a7ec-1bdf1ba99d42\") " pod="kube-system/kube-proxy-kjcrx"
Jan 13 20:45:44.849509 kubelet[2805]: I0113 20:45:44.849513 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ac749c1-d7bd-4076-a7ec-1bdf1ba99d42-xtables-lock\") pod \"kube-proxy-kjcrx\" (UID: \"4ac749c1-d7bd-4076-a7ec-1bdf1ba99d42\") " pod="kube-system/kube-proxy-kjcrx"
Jan 13 20:45:44.849659 kubelet[2805]: I0113 20:45:44.849546 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/52b8b866-2166-4542-8189-edac4ae16748-var-lib-calico\") pod \"tigera-operator-c7ccbd65-xgqqk\" (UID: \"52b8b866-2166-4542-8189-edac4ae16748\") " pod="tigera-operator/tigera-operator-c7ccbd65-xgqqk"
Jan 13 20:45:44.849659 kubelet[2805]: I0113 20:45:44.849575 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gm9c\" (UniqueName: \"kubernetes.io/projected/4ac749c1-d7bd-4076-a7ec-1bdf1ba99d42-kube-api-access-7gm9c\") pod \"kube-proxy-kjcrx\" (UID: \"4ac749c1-d7bd-4076-a7ec-1bdf1ba99d42\") " pod="kube-system/kube-proxy-kjcrx"
Jan 13 20:45:44.849659 kubelet[2805]: I0113 20:45:44.849608 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ac749c1-d7bd-4076-a7ec-1bdf1ba99d42-lib-modules\") pod \"kube-proxy-kjcrx\" (UID: \"4ac749c1-d7bd-4076-a7ec-1bdf1ba99d42\") " pod="kube-system/kube-proxy-kjcrx"
Jan 13 20:45:44.849659 kubelet[2805]: I0113 20:45:44.849635 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pbpd\" (UniqueName: \"kubernetes.io/projected/52b8b866-2166-4542-8189-edac4ae16748-kube-api-access-6pbpd\") pod \"tigera-operator-c7ccbd65-xgqqk\" (UID: \"52b8b866-2166-4542-8189-edac4ae16748\") " pod="tigera-operator/tigera-operator-c7ccbd65-xgqqk"
Jan 13 20:45:45.003473 kubelet[2805]: E0113 20:45:45.003337 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:45:45.004144 containerd[1563]: time="2025-01-13T20:45:45.004083214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kjcrx,Uid:4ac749c1-d7bd-4076-a7ec-1bdf1ba99d42,Namespace:kube-system,Attempt:0,}"
Jan 13 20:45:45.028516 containerd[1563]: time="2025-01-13T20:45:45.028363247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:45:45.028619 containerd[1563]: time="2025-01-13T20:45:45.028519790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:45:45.029295 containerd[1563]: time="2025-01-13T20:45:45.028546298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:45:45.029295 containerd[1563]: time="2025-01-13T20:45:45.029239291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:45:45.029553 containerd[1563]: time="2025-01-13T20:45:45.029486000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-xgqqk,Uid:52b8b866-2166-4542-8189-edac4ae16748,Namespace:tigera-operator,Attempt:0,}"
Jan 13 20:45:45.055495 containerd[1563]: time="2025-01-13T20:45:45.055340382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:45:45.055495 containerd[1563]: time="2025-01-13T20:45:45.055446123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:45:45.055495 containerd[1563]: time="2025-01-13T20:45:45.055457047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:45:45.055696 containerd[1563]: time="2025-01-13T20:45:45.055536580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:45:45.067644 containerd[1563]: time="2025-01-13T20:45:45.066959431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kjcrx,Uid:4ac749c1-d7bd-4076-a7ec-1bdf1ba99d42,Namespace:kube-system,Attempt:0,} returns sandbox id \"c13df7f62514460a2634107adde5c52ecd596ea08fc061f1d2fa836db4779d34\""
Jan 13 20:45:45.068045 kubelet[2805]: E0113 20:45:45.068026 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:45:45.070026 containerd[1563]: time="2025-01-13T20:45:45.069989499Z" level=info msg="CreateContainer within sandbox \"c13df7f62514460a2634107adde5c52ecd596ea08fc061f1d2fa836db4779d34\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 20:45:45.086851 containerd[1563]: time="2025-01-13T20:45:45.086811302Z" level=info msg="CreateContainer within sandbox \"c13df7f62514460a2634107adde5c52ecd596ea08fc061f1d2fa836db4779d34\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d96d994290727b1f2f495a6cba84582f7d07c29f7e67e02ec2719e4d0b83a30e\""
Jan 13 20:45:45.087973 containerd[1563]: time="2025-01-13T20:45:45.087938254Z" level=info msg="StartContainer for \"d96d994290727b1f2f495a6cba84582f7d07c29f7e67e02ec2719e4d0b83a30e\""
Jan 13 20:45:45.110826 containerd[1563]: time="2025-01-13T20:45:45.110746813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-xgqqk,Uid:52b8b866-2166-4542-8189-edac4ae16748,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b08ec2e2724f19b1f9f63b93a7aa09a8ee36939135a0595ae0f02c0152a5be5f\""
Jan 13 20:45:45.112942 containerd[1563]: time="2025-01-13T20:45:45.112891328Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 13 20:45:45.238001 containerd[1563]: time="2025-01-13T20:45:45.237950898Z" level=info msg="StartContainer for \"d96d994290727b1f2f495a6cba84582f7d07c29f7e67e02ec2719e4d0b83a30e\" returns successfully"
Jan 13 20:45:45.421895 kubelet[2805]: E0113 20:45:45.421235 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:45:45.515114 kubelet[2805]: I0113 20:45:45.515074 2805 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-kjcrx" podStartSLOduration=1.5150280120000001 podStartE2EDuration="1.515028012s" podCreationTimestamp="2025-01-13 20:45:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:45:45.514760188 +0000 UTC m=+15.233608145" watchObservedRunningTime="2025-01-13 20:45:45.515028012 +0000 UTC m=+15.233875959"
Jan 13 20:45:48.231094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2121333406.mount: Deactivated successfully.
Jan 13 20:45:48.974737 containerd[1563]: time="2025-01-13T20:45:48.974654241Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:48.975523 containerd[1563]: time="2025-01-13T20:45:48.975373547Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21763713"
Jan 13 20:45:48.977031 containerd[1563]: time="2025-01-13T20:45:48.976992550Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:48.981506 containerd[1563]: time="2025-01-13T20:45:48.981453925Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:48.982554 containerd[1563]: time="2025-01-13T20:45:48.982524078Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.869598055s"
Jan 13 20:45:48.982607 containerd[1563]: time="2025-01-13T20:45:48.982560907Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Jan 13 20:45:48.984447 containerd[1563]: time="2025-01-13T20:45:48.984415646Z" level=info msg="CreateContainer within sandbox \"b08ec2e2724f19b1f9f63b93a7aa09a8ee36939135a0595ae0f02c0152a5be5f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 13 20:45:48.997530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount446133610.mount: Deactivated successfully.
Jan 13 20:45:49.000000 containerd[1563]: time="2025-01-13T20:45:48.999951702Z" level=info msg="CreateContainer within sandbox \"b08ec2e2724f19b1f9f63b93a7aa09a8ee36939135a0595ae0f02c0152a5be5f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6cede1d689e602c326b8921ee22b83548c8b38e7eb0e2b26ff519147fe971890\""
Jan 13 20:45:49.000544 containerd[1563]: time="2025-01-13T20:45:49.000520744Z" level=info msg="StartContainer for \"6cede1d689e602c326b8921ee22b83548c8b38e7eb0e2b26ff519147fe971890\""
Jan 13 20:45:49.058955 containerd[1563]: time="2025-01-13T20:45:49.058900138Z" level=info msg="StartContainer for \"6cede1d689e602c326b8921ee22b83548c8b38e7eb0e2b26ff519147fe971890\" returns successfully"
Jan 13 20:45:49.441980 kubelet[2805]: I0113 20:45:49.441905 2805 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-xgqqk" podStartSLOduration=1.5713256260000001 podStartE2EDuration="5.441855564s" podCreationTimestamp="2025-01-13 20:45:44 +0000 UTC" firstStartedPulling="2025-01-13 20:45:45.112346337 +0000 UTC m=+14.831194284" lastFinishedPulling="2025-01-13 20:45:48.982876285 +0000 UTC m=+18.701724222" observedRunningTime="2025-01-13 20:45:49.439853668 +0000 UTC m=+19.158701615" watchObservedRunningTime="2025-01-13 20:45:49.441855564 +0000 UTC m=+19.160703521"
Jan 13 20:45:52.067777 kubelet[2805]: I0113 20:45:52.067728 2805 topology_manager.go:215] "Topology Admit Handler" podUID="0dc2ea09-1b07-4b26-bcff-208b8ad2b860" podNamespace="calico-system" podName="calico-typha-ddd688c9d-hlqt8"
Jan 13 20:45:52.174732 kubelet[2805]: I0113 20:45:52.174679 2805 topology_manager.go:215] "Topology Admit Handler" podUID="45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4" podNamespace="calico-system" podName="calico-node-qwhcl"
Jan 13 20:45:52.204691 kubelet[2805]: I0113 20:45:52.204613 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0dc2ea09-1b07-4b26-bcff-208b8ad2b860-typha-certs\") pod \"calico-typha-ddd688c9d-hlqt8\" (UID: \"0dc2ea09-1b07-4b26-bcff-208b8ad2b860\") " pod="calico-system/calico-typha-ddd688c9d-hlqt8"
Jan 13 20:45:52.204691 kubelet[2805]: I0113 20:45:52.204676 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22r6z\" (UniqueName: \"kubernetes.io/projected/0dc2ea09-1b07-4b26-bcff-208b8ad2b860-kube-api-access-22r6z\") pod \"calico-typha-ddd688c9d-hlqt8\" (UID: \"0dc2ea09-1b07-4b26-bcff-208b8ad2b860\") " pod="calico-system/calico-typha-ddd688c9d-hlqt8"
Jan 13 20:45:52.204691 kubelet[2805]: I0113 20:45:52.204703 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0dc2ea09-1b07-4b26-bcff-208b8ad2b860-tigera-ca-bundle\") pod \"calico-typha-ddd688c9d-hlqt8\" (UID: \"0dc2ea09-1b07-4b26-bcff-208b8ad2b860\") " pod="calico-system/calico-typha-ddd688c9d-hlqt8"
Jan 13 20:45:52.289590 kubelet[2805]: I0113 20:45:52.289045 2805 topology_manager.go:215] "Topology Admit Handler" podUID="31e31ef7-4073-4fa6-8a57-6102fb32a16b" podNamespace="calico-system" podName="csi-node-driver-2wkvn"
Jan 13 20:45:52.292149 kubelet[2805]: E0113 20:45:52.290851 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2wkvn" podUID="31e31ef7-4073-4fa6-8a57-6102fb32a16b"
Jan 13 20:45:52.305183 kubelet[2805]: I0113 20:45:52.305134 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4-cni-log-dir\") pod \"calico-node-qwhcl\" (UID: \"45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4\") " pod="calico-system/calico-node-qwhcl"
Jan 13 20:45:52.305352 kubelet[2805]: I0113 20:45:52.305205 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4-node-certs\") pod \"calico-node-qwhcl\" (UID: \"45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4\") " pod="calico-system/calico-node-qwhcl"
Jan 13 20:45:52.305352 kubelet[2805]: I0113 20:45:52.305224 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4-var-lib-calico\") pod \"calico-node-qwhcl\" (UID: \"45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4\") " pod="calico-system/calico-node-qwhcl"
Jan 13 20:45:52.305352 kubelet[2805]: I0113 20:45:52.305241 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4-policysync\") pod \"calico-node-qwhcl\" (UID: \"45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4\") " pod="calico-system/calico-node-qwhcl"
Jan 13 20:45:52.305352 kubelet[2805]: I0113 20:45:52.305262 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4-tigera-ca-bundle\") pod \"calico-node-qwhcl\" (UID: \"45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4\") " pod="calico-system/calico-node-qwhcl"
Jan 13 20:45:52.305352 kubelet[2805]: I0113 20:45:52.305279 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4-var-run-calico\") pod \"calico-node-qwhcl\" (UID: \"45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4\") " pod="calico-system/calico-node-qwhcl"
Jan 13 20:45:52.306121 kubelet[2805]: I0113 20:45:52.305305 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4-cni-net-dir\") pod \"calico-node-qwhcl\" (UID: \"45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4\") " pod="calico-system/calico-node-qwhcl"
Jan 13 20:45:52.306121 kubelet[2805]: I0113 20:45:52.305324 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4-flexvol-driver-host\") pod \"calico-node-qwhcl\" (UID: \"45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4\") " pod="calico-system/calico-node-qwhcl"
Jan 13 20:45:52.306121 kubelet[2805]: I0113 20:45:52.305342 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2l4m\" (UniqueName: \"kubernetes.io/projected/45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4-kube-api-access-l2l4m\") pod \"calico-node-qwhcl\" (UID: \"45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4\") " pod="calico-system/calico-node-qwhcl"
Jan 13 20:45:52.306121 kubelet[2805]: I0113 20:45:52.305361 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4-xtables-lock\") pod \"calico-node-qwhcl\" (UID: \"45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4\") " pod="calico-system/calico-node-qwhcl"
Jan 13 20:45:52.306121 kubelet[2805]: I0113 20:45:52.305393 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4-lib-modules\") pod \"calico-node-qwhcl\" (UID: \"45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4\") " pod="calico-system/calico-node-qwhcl"
Jan 13 20:45:52.306340 kubelet[2805]: I0113 20:45:52.305412 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4-cni-bin-dir\") pod \"calico-node-qwhcl\" (UID: \"45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4\") " pod="calico-system/calico-node-qwhcl"
Jan 13 20:45:52.372955 kubelet[2805]: E0113 20:45:52.372802 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:45:52.373451 containerd[1563]: time="2025-01-13T20:45:52.373405809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-ddd688c9d-hlqt8,Uid:0dc2ea09-1b07-4b26-bcff-208b8ad2b860,Namespace:calico-system,Attempt:0,}"
Jan 13 20:45:52.403184 containerd[1563]: time="2025-01-13T20:45:52.403065116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:45:52.403429 containerd[1563]: time="2025-01-13T20:45:52.403219292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:45:52.403429 containerd[1563]: time="2025-01-13T20:45:52.403260299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:45:52.403678 containerd[1563]: time="2025-01-13T20:45:52.403475054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:45:52.409347 kubelet[2805]: I0113 20:45:52.409246 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/31e31ef7-4073-4fa6-8a57-6102fb32a16b-socket-dir\") pod \"csi-node-driver-2wkvn\" (UID: \"31e31ef7-4073-4fa6-8a57-6102fb32a16b\") " pod="calico-system/csi-node-driver-2wkvn"
Jan 13 20:45:52.409347 kubelet[2805]: I0113 20:45:52.409338 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/31e31ef7-4073-4fa6-8a57-6102fb32a16b-varrun\") pod \"csi-node-driver-2wkvn\" (UID: \"31e31ef7-4073-4fa6-8a57-6102fb32a16b\") " pod="calico-system/csi-node-driver-2wkvn"
Jan 13 20:45:52.411458 kubelet[2805]: I0113 20:45:52.409373 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8zhq\" (UniqueName: \"kubernetes.io/projected/31e31ef7-4073-4fa6-8a57-6102fb32a16b-kube-api-access-f8zhq\") pod \"csi-node-driver-2wkvn\" (UID: \"31e31ef7-4073-4fa6-8a57-6102fb32a16b\") " pod="calico-system/csi-node-driver-2wkvn"
Jan 13 20:45:52.411458 kubelet[2805]: I0113 20:45:52.409496 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/31e31ef7-4073-4fa6-8a57-6102fb32a16b-registration-dir\") pod \"csi-node-driver-2wkvn\" (UID: \"31e31ef7-4073-4fa6-8a57-6102fb32a16b\") " pod="calico-system/csi-node-driver-2wkvn"
Jan 13 20:45:52.411458 kubelet[2805]: I0113 20:45:52.409589 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31e31ef7-4073-4fa6-8a57-6102fb32a16b-kubelet-dir\") pod \"csi-node-driver-2wkvn\" (UID: \"31e31ef7-4073-4fa6-8a57-6102fb32a16b\") " pod="calico-system/csi-node-driver-2wkvn"
Jan 13 20:45:52.419194 kubelet[2805]: E0113 20:45:52.418802 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:45:52.419194 kubelet[2805]: W0113 20:45:52.418845 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:45:52.419194 kubelet[2805]: E0113 20:45:52.418885 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the three FlexVolume probe lines above repeat with fresh timestamps at 20:45:52.424]
Jan 13 20:45:52.483447 containerd[1563]: time="2025-01-13T20:45:52.483401439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-ddd688c9d-hlqt8,Uid:0dc2ea09-1b07-4b26-bcff-208b8ad2b860,Namespace:calico-system,Attempt:0,} returns sandbox id \"c9d9154bd09c3bb7d81021bda8efa0bbe1885ac75f1e6957ef673c36240c93c3\""
Jan 13 20:45:52.484258 kubelet[2805]: E0113 20:45:52.484237 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:45:52.485227 kubelet[2805]: E0113 20:45:52.484999 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:45:52.485680 containerd[1563]: time="2025-01-13T20:45:52.485446175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 13 20:45:52.485680 containerd[1563]: time="2025-01-13T20:45:52.485601463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qwhcl,Uid:45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4,Namespace:calico-system,Attempt:0,}"
[the same FlexVolume probe triplet then repeats in a burst, roughly two dozen more times, between 20:45:52.509 and 20:45:52.525]
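The repeating triplet is the kubelet probing the FlexVolume plugin directory that Calico populates (/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds) before the calico-node pod has dropped the `uds` driver binary in place: the exec fails, the driver's stdout is empty, and unmarshalling empty output yields exactly "unexpected end of JSON input". A minimal Go sketch of that failing call sequence; the `DriverStatus` shape here is an illustrative assumption, not the kubelet's actual struct:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus approximates the minimal JSON reply a FlexVolume driver is
// expected to print (assumed shape for illustration only).
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	// The kubelet invokes the driver binary with "init" when probing the
	// plugin directory, as in the driver-call.go lines above.
	out, err := exec.Command(
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
		"init").Output()
	if err != nil {
		// With the binary absent, out stays empty and err is non-nil
		// (the kubelet reports it as "executable file not found in $PATH").
		fmt.Println("driver call failed:", err)
	}

	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// Unmarshalling empty output reproduces the logged error verbatim:
		// "unexpected end of JSON input".
		fmt.Println("failed to unmarshal output:", err)
	}
}
```

The burst pattern is consistent with the kubelet re-probing the plugin directory on every filesystem event there, which is why the triplet recurs until the driver binary appears.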
Jan 13 20:45:52.528026 containerd[1563]: time="2025-01-13T20:45:52.527776956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:45:52.528026 containerd[1563]: time="2025-01-13T20:45:52.527931342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:45:52.528665 containerd[1563]: time="2025-01-13T20:45:52.528594967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:45:52.528787 containerd[1563]: time="2025-01-13T20:45:52.528744081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
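Each RunPodSandbox entry above is containerd answering a CRI gRPC call from the kubelet: the request carries the PodSandboxMetadata printed in the message, and the response carries the 64-hex sandbox id seen in the "returns sandbox id" lines. A sketch of what that call looks like from a Go client; the socket path is an assumption (containerd's default), and a real request needs far more of the sandbox config filled in than this bare minimum:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	pb "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default containerd CRI socket (assumed; matches common Flatcar setups).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := pb.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Metadata mirrors the fields printed in the RunPodSandbox log entries;
	// the kubelet fills them from the pod object.
	resp, err := client.RunPodSandbox(ctx, &pb.RunPodSandboxRequest{
		Config: &pb.PodSandboxConfig{
			Metadata: &pb.PodSandboxMetadata{
				Name:      "calico-node-qwhcl",
				Uid:       "45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4",
				Namespace: "calico-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId)
}
```

The per-sandbox "loading plugin io.containerd.*" lines are the runc v2 shim initializing its ttrpc services for that new sandbox, which is why the same four lines recur before every sandbox id is returned.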
Jan 13 20:45:52.583140 containerd[1563]: time="2025-01-13T20:45:52.583092834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qwhcl,Uid:45d7b2e9-ed2f-4ddc-b2c8-136aa03ccad4,Namespace:calico-system,Attempt:0,} returns sandbox id \"66a71c36a3a0d7e1ec3adcd026b9d7ae37c218f10c61200bb91cc41cc16124f2\""
Jan 13 20:45:52.583732 kubelet[2805]: E0113 20:45:52.583711 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:45:54.381717 kubelet[2805]: E0113 20:45:54.381662 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2wkvn" podUID="31e31ef7-4073-4fa6-8a57-6102fb32a16b"
Jan 13 20:45:55.528881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2548301231.mount: Deactivated successfully.
Jan 13 20:45:56.390224 kubelet[2805]: E0113 20:45:56.390131 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2wkvn" podUID="31e31ef7-4073-4fa6-8a57-6102fb32a16b"
Jan 13 20:45:56.562510 containerd[1563]: time="2025-01-13T20:45:56.562453963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:56.563054 containerd[1563]: time="2025-01-13T20:45:56.562997669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Jan 13 20:45:56.564485 containerd[1563]: time="2025-01-13T20:45:56.564414526Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:56.567883 containerd[1563]: time="2025-01-13T20:45:56.567848576Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:45:56.568595 containerd[1563]: time="2025-01-13T20:45:56.568566275Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 4.083089346s"
Jan 13 20:45:56.568636 containerd[1563]: time="2025-01-13T20:45:56.568600175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 13 20:45:56.570555 containerd[1563]: time="2025-01-13T20:45:56.570531838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 13 20:45:56.580001 containerd[1563]: time="2025-01-13T20:45:56.579949411Z" level=info msg="CreateContainer within sandbox \"c9d9154bd09c3bb7d81021bda8efa0bbe1885ac75f1e6957ef673c36240c93c3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 13 20:45:56.595541 containerd[1563]: time="2025-01-13T20:45:56.595486067Z" level=info msg="CreateContainer within sandbox \"c9d9154bd09c3bb7d81021bda8efa0bbe1885ac75f1e6957ef673c36240c93c3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"781d21f8f9fbd642d09a0bcf04684347cabd5c69b018fc07140995585ac8923d\""
Jan 13 20:45:56.596205 containerd[1563]: time="2025-01-13T20:45:56.596171819Z" level=info msg="StartContainer for \"781d21f8f9fbd642d09a0bcf04684347cabd5c69b018fc07140995585ac8923d\""
Jan 13 20:45:56.675606 containerd[1563]: time="2025-01-13T20:45:56.675549764Z" level=info msg="StartContainer for \"781d21f8f9fbd642d09a0bcf04684347cabd5c69b018fc07140995585ac8923d\" returns successfully"
Jan 13 20:45:57.446902 kubelet[2805]: E0113 20:45:57.446860 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:45:57.501435 kubelet[2805]: I0113 20:45:57.501164 2805 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-ddd688c9d-hlqt8" podStartSLOduration=1.417260823 podStartE2EDuration="5.501125677s" podCreationTimestamp="2025-01-13 20:45:52 +0000 UTC" firstStartedPulling="2025-01-13 20:45:52.485094742 +0000 UTC m=+22.203942689" lastFinishedPulling="2025-01-13 20:45:56.568959576 +0000 UTC m=+26.287807543" observedRunningTime="2025-01-13 20:45:57.501052905 +0000 UTC m=+27.219900852" watchObservedRunningTime="2025-01-13 20:45:57.501125677 +0000 UTC m=+27.219973624"
Jan 13 20:45:57.547252 kubelet[2805]: E0113 20:45:57.547202 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:45:57.547252 kubelet[2805]: W0113 20:45:57.547233 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:45:57.547252 kubelet[2805]: E0113 20:45:57.547257 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the FlexVolume probe triplet above repeats roughly fourteen more times between 20:45:57.547 and 20:45:57.550]
Jan 13 20:45:57.650759 kubelet[2805]: E0113 20:45:57.650716 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:45:57.650759 kubelet[2805]: W0113 20:45:57.650746 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:45:57.650930 kubelet[2805]: E0113 20:45:57.650773 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the FlexVolume probe triplet above repeats roughly seventeen more times between 20:45:57.651 and 20:45:57.655]
Jan 13 20:45:58.382244 kubelet[2805]: E0113 20:45:58.382194 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2wkvn" podUID="31e31ef7-4073-4fa6-8a57-6102fb32a16b"
Jan 13 20:45:58.448108 kubelet[2805]: I0113 20:45:58.448052 2805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 20:45:58.448800 kubelet[2805]: E0113 20:45:58.448774 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:45:58.456132 kubelet[2805]: E0113 20:45:58.456115 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:45:58.456132 kubelet[2805]: W0113 20:45:58.456129 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:45:58.456132 kubelet[2805]: E0113 20:45:58.456146 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:45:58.456331 kubelet[2805]: E0113 20:45:58.456316 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:45:58.456331 kubelet[2805]: W0113 20:45:58.456329 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:45:58.456389 kubelet[2805]: E0113 20:45:58.456342 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:45:58.456563 kubelet[2805]: E0113 20:45:58.456548 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:45:58.456563 kubelet[2805]: W0113 20:45:58.456560 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:45:58.456621 kubelet[2805]: E0113 20:45:58.456572 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 13 20:45:58.456767 kubelet[2805]: E0113 20:45:58.456752 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.456767 kubelet[2805]: W0113 20:45:58.456764 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.456817 kubelet[2805]: E0113 20:45:58.456777 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.456961 kubelet[2805]: E0113 20:45:58.456947 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.456961 kubelet[2805]: W0113 20:45:58.456959 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.457005 kubelet[2805]: E0113 20:45:58.456972 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.457151 kubelet[2805]: E0113 20:45:58.457136 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.457151 kubelet[2805]: W0113 20:45:58.457148 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.457194 kubelet[2805]: E0113 20:45:58.457160 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.457335 kubelet[2805]: E0113 20:45:58.457320 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.457335 kubelet[2805]: W0113 20:45:58.457333 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.457394 kubelet[2805]: E0113 20:45:58.457344 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.457621 kubelet[2805]: E0113 20:45:58.457607 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.457621 kubelet[2805]: W0113 20:45:58.457619 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.457680 kubelet[2805]: E0113 20:45:58.457632 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:45:58.457891 kubelet[2805]: E0113 20:45:58.457875 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.457891 kubelet[2805]: W0113 20:45:58.457889 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.457951 kubelet[2805]: E0113 20:45:58.457902 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.458115 kubelet[2805]: E0113 20:45:58.458101 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.458115 kubelet[2805]: W0113 20:45:58.458113 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.458166 kubelet[2805]: E0113 20:45:58.458126 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.458313 kubelet[2805]: E0113 20:45:58.458299 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.458313 kubelet[2805]: W0113 20:45:58.458311 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.458361 kubelet[2805]: E0113 20:45:58.458323 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.458562 kubelet[2805]: E0113 20:45:58.458538 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.458562 kubelet[2805]: W0113 20:45:58.458551 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.458562 kubelet[2805]: E0113 20:45:58.458563 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.458776 kubelet[2805]: E0113 20:45:58.458761 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.458776 kubelet[2805]: W0113 20:45:58.458773 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.458776 kubelet[2805]: E0113 20:45:58.458786 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:45:58.458965 kubelet[2805]: E0113 20:45:58.458950 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.458965 kubelet[2805]: W0113 20:45:58.458962 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.459025 kubelet[2805]: E0113 20:45:58.458974 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.459150 kubelet[2805]: E0113 20:45:58.459135 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.459150 kubelet[2805]: W0113 20:45:58.459147 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.459195 kubelet[2805]: E0113 20:45:58.459160 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.557226 kubelet[2805]: E0113 20:45:58.557185 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.557226 kubelet[2805]: W0113 20:45:58.557212 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.557375 kubelet[2805]: E0113 20:45:58.557240 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.557485 kubelet[2805]: E0113 20:45:58.557468 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.557485 kubelet[2805]: W0113 20:45:58.557482 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.557538 kubelet[2805]: E0113 20:45:58.557502 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.557718 kubelet[2805]: E0113 20:45:58.557701 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.557718 kubelet[2805]: W0113 20:45:58.557714 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.557804 kubelet[2805]: E0113 20:45:58.557734 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:45:58.558019 kubelet[2805]: E0113 20:45:58.557995 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.558019 kubelet[2805]: W0113 20:45:58.558009 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.558019 kubelet[2805]: E0113 20:45:58.558028 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.558263 kubelet[2805]: E0113 20:45:58.558248 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.558263 kubelet[2805]: W0113 20:45:58.558259 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.558331 kubelet[2805]: E0113 20:45:58.558276 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.558507 kubelet[2805]: E0113 20:45:58.558489 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.558507 kubelet[2805]: W0113 20:45:58.558500 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.558578 kubelet[2805]: E0113 20:45:58.558515 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.558760 kubelet[2805]: E0113 20:45:58.558733 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.558791 kubelet[2805]: W0113 20:45:58.558761 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.558791 kubelet[2805]: E0113 20:45:58.558782 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.559028 kubelet[2805]: E0113 20:45:58.559007 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.559028 kubelet[2805]: W0113 20:45:58.559023 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.559090 kubelet[2805]: E0113 20:45:58.559055 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:45:58.559220 kubelet[2805]: E0113 20:45:58.559202 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.559220 kubelet[2805]: W0113 20:45:58.559215 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.559279 kubelet[2805]: E0113 20:45:58.559259 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.559452 kubelet[2805]: E0113 20:45:58.559434 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.559452 kubelet[2805]: W0113 20:45:58.559447 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.559505 kubelet[2805]: E0113 20:45:58.559465 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.559670 kubelet[2805]: E0113 20:45:58.559652 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.559670 kubelet[2805]: W0113 20:45:58.559665 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.559724 kubelet[2805]: E0113 20:45:58.559681 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.559890 kubelet[2805]: E0113 20:45:58.559869 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.559919 kubelet[2805]: W0113 20:45:58.559893 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.559919 kubelet[2805]: E0113 20:45:58.559911 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.560111 kubelet[2805]: E0113 20:45:58.560092 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.560111 kubelet[2805]: W0113 20:45:58.560104 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.560173 kubelet[2805]: E0113 20:45:58.560120 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:45:58.560400 kubelet[2805]: E0113 20:45:58.560358 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.560462 kubelet[2805]: W0113 20:45:58.560399 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.560462 kubelet[2805]: E0113 20:45:58.560430 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.560681 kubelet[2805]: E0113 20:45:58.560665 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.560681 kubelet[2805]: W0113 20:45:58.560676 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.560771 kubelet[2805]: E0113 20:45:58.560687 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.560915 kubelet[2805]: E0113 20:45:58.560901 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.560915 kubelet[2805]: W0113 20:45:58.560912 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.560971 kubelet[2805]: E0113 20:45:58.560932 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.561314 kubelet[2805]: E0113 20:45:58.561298 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.561314 kubelet[2805]: W0113 20:45:58.561311 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.561447 kubelet[2805]: E0113 20:45:58.561331 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:45:58.561552 kubelet[2805]: E0113 20:45:58.561538 2805 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:45:58.561552 kubelet[2805]: W0113 20:45:58.561549 2805 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:45:58.561616 kubelet[2805]: E0113 20:45:58.561563 2805 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:45:59.462101 containerd[1563]: time="2025-01-13T20:45:59.462048717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:59.463072 containerd[1563]: time="2025-01-13T20:45:59.463034820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 13 20:45:59.464276 containerd[1563]: time="2025-01-13T20:45:59.464244184Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:59.466344 containerd[1563]: time="2025-01-13T20:45:59.466315675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:45:59.467258 containerd[1563]: time="2025-01-13T20:45:59.467185287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.896622002s" Jan 13 20:45:59.467258 containerd[1563]: time="2025-01-13T20:45:59.467242315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 13 20:45:59.472125 containerd[1563]: time="2025-01-13T20:45:59.471567614Z" level=info msg="CreateContainer within sandbox \"66a71c36a3a0d7e1ec3adcd026b9d7ae37c218f10c61200bb91cc41cc16124f2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 20:45:59.486601 containerd[1563]: time="2025-01-13T20:45:59.486541500Z" level=info msg="CreateContainer within sandbox \"66a71c36a3a0d7e1ec3adcd026b9d7ae37c218f10c61200bb91cc41cc16124f2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c8f746e42782b66beca54e10c4fd91eb8e45b1f4b29fc334d97b796e5ef120fd\"" Jan 13 20:45:59.487259 containerd[1563]: time="2025-01-13T20:45:59.487089646Z" level=info msg="StartContainer for \"c8f746e42782b66beca54e10c4fd91eb8e45b1f4b29fc334d97b796e5ef120fd\"" Jan 13 20:45:59.554781 containerd[1563]: time="2025-01-13T20:45:59.554717699Z" level=info msg="StartContainer for \"c8f746e42782b66beca54e10c4fd91eb8e45b1f4b29fc334d97b796e5ef120fd\" returns successfully" Jan 13 20:45:59.655582 containerd[1563]: time="2025-01-13T20:45:59.655514616Z" level=info msg="shim disconnected" id=c8f746e42782b66beca54e10c4fd91eb8e45b1f4b29fc334d97b796e5ef120fd namespace=k8s.io Jan 13 20:45:59.655582 containerd[1563]: time="2025-01-13T20:45:59.655576524Z" level=warning msg="cleaning up after shim disconnected" id=c8f746e42782b66beca54e10c4fd91eb8e45b1f4b29fc334d97b796e5ef120fd namespace=k8s.io Jan 13 20:45:59.655582 containerd[1563]: time="2025-01-13T20:45:59.655593369Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:46:00.076990 kubelet[2805]: I0113 20:46:00.076941 2805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:46:00.077924 kubelet[2805]: E0113 20:46:00.077748 2805 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:00.381540 kubelet[2805]: E0113 20:46:00.381446 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2wkvn" podUID="31e31ef7-4073-4fa6-8a57-6102fb32a16b" Jan 13 20:46:00.452858 kubelet[2805]: E0113 20:46:00.452827 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:00.453094 kubelet[2805]: E0113 20:46:00.453013 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:00.453413 containerd[1563]: time="2025-01-13T20:46:00.453357552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 20:46:00.483164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8f746e42782b66beca54e10c4fd91eb8e45b1f4b29fc334d97b796e5ef120fd-rootfs.mount: Deactivated successfully. Jan 13 20:46:01.001801 systemd[1]: Started sshd@7-10.0.0.149:22-10.0.0.1:37954.service - OpenSSH per-connection server daemon (10.0.0.1:37954). Jan 13 20:46:01.036037 sshd[3510]: Accepted publickey for core from 10.0.0.1 port 37954 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:46:01.037819 sshd-session[3510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:46:01.042436 systemd-logind[1542]: New session 8 of user core. Jan 13 20:46:01.051633 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:46:01.174000 sshd[3513]: Connection closed by 10.0.0.1 port 37954 Jan 13 20:46:01.174443 sshd-session[3510]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:01.177952 systemd[1]: sshd@7-10.0.0.149:22-10.0.0.1:37954.service: Deactivated successfully. Jan 13 20:46:01.180529 systemd-logind[1542]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:46:01.180538 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:46:01.181674 systemd-logind[1542]: Removed session 8. 
Jan 13 20:46:02.381491 kubelet[2805]: E0113 20:46:02.381442 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2wkvn" podUID="31e31ef7-4073-4fa6-8a57-6102fb32a16b" Jan 13 20:46:04.381908 kubelet[2805]: E0113 20:46:04.381848 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2wkvn" podUID="31e31ef7-4073-4fa6-8a57-6102fb32a16b" Jan 13 20:46:05.736770 containerd[1563]: time="2025-01-13T20:46:05.736711633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:05.737427 containerd[1563]: time="2025-01-13T20:46:05.737390693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 13 20:46:05.738599 containerd[1563]: time="2025-01-13T20:46:05.738569806Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:05.740986 containerd[1563]: time="2025-01-13T20:46:05.740953545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:05.741585 containerd[1563]: time="2025-01-13T20:46:05.741554665Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.288139194s" Jan 13 20:46:05.741585 containerd[1563]: time="2025-01-13T20:46:05.741583444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 13 20:46:05.743392 containerd[1563]: time="2025-01-13T20:46:05.743331592Z" level=info msg="CreateContainer within sandbox \"66a71c36a3a0d7e1ec3adcd026b9d7ae37c218f10c61200bb91cc41cc16124f2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 20:46:05.757078 containerd[1563]: time="2025-01-13T20:46:05.757024386Z" level=info msg="CreateContainer within sandbox \"66a71c36a3a0d7e1ec3adcd026b9d7ae37c218f10c61200bb91cc41cc16124f2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fa36d727b8c8aae353063f2937ab2382ae12974d087ac2bd4b862087c6cdb9b3\"" Jan 13 20:46:05.757601 containerd[1563]: time="2025-01-13T20:46:05.757565033Z" level=info msg="StartContainer for \"fa36d727b8c8aae353063f2937ab2382ae12974d087ac2bd4b862087c6cdb9b3\"" Jan 13 20:46:06.194697 systemd[1]: Started sshd@8-10.0.0.149:22-10.0.0.1:58440.service - OpenSSH per-connection server daemon (10.0.0.1:58440). 
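The recurring dns.go:153 records mean the node's /etc/resolv.conf lists more nameservers than the kubelet will pass through to pods: the kubelet caps the list (at three, per its resolv.conf validation) and applies only "1.1.1.1 1.0.0.1 8.8.8.8", warning about the rest. A rough sketch of that truncation under those assumptions, not the kubelet's actual dns.go:

    // trim_nameservers.go - rough sketch of the 3-nameserver cap behind the
    // "Nameserver limits exceeded" warning; not the kubelet's code.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // assumed kubelet limit on nameservers per pod

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("nameserver limits exceeded, omitting %d of %d\n",
                len(servers)-maxNameservers, len(servers))
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }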
Jan 13 20:46:06.205256 containerd[1563]: time="2025-01-13T20:46:06.205203980Z" level=info msg="StartContainer for \"fa36d727b8c8aae353063f2937ab2382ae12974d087ac2bd4b862087c6cdb9b3\" returns successfully" Jan 13 20:46:06.261574 sshd[3570]: Accepted publickey for core from 10.0.0.1 port 58440 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:46:06.263426 sshd-session[3570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:46:06.267743 systemd-logind[1542]: New session 9 of user core. Jan 13 20:46:06.276812 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:46:06.382790 kubelet[2805]: E0113 20:46:06.381996 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2wkvn" podUID="31e31ef7-4073-4fa6-8a57-6102fb32a16b" Jan 13 20:46:06.463886 kubelet[2805]: E0113 20:46:06.463751 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:06.585311 sshd[3573]: Connection closed by 10.0.0.1 port 58440 Jan 13 20:46:06.586027 sshd-session[3570]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:06.589651 systemd[1]: sshd@8-10.0.0.149:22-10.0.0.1:58440.service: Deactivated successfully. Jan 13 20:46:06.594767 systemd-logind[1542]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:46:06.595799 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:46:06.597843 systemd-logind[1542]: Removed session 9. Jan 13 20:46:07.187342 containerd[1563]: time="2025-01-13T20:46:07.187142423Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:46:07.209515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa36d727b8c8aae353063f2937ab2382ae12974d087ac2bd4b862087c6cdb9b3-rootfs.mount: Deactivated successfully. 
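The containerd error at 20:46:07.187 explains the persistent NetworkPluginNotReady records: a write to /etc/cni/net.d/calico-kubeconfig triggers a config reload, but the CNI loader only counts files with network-config extensions (assumed here to be *.conf, *.conflist, and *.json), and Calico's install-cni has not yet written its conflist. A hedged sketch of that directory scan, not containerd's actual loader:

    // cni_config_check.go - sketch of why a kubeconfig write in /etc/cni/net.d
    // does not satisfy the CNI loader; the extension list is an assumption.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/cni/net.d"
        var configs []string
        for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
            m, _ := filepath.Glob(filepath.Join(dir, pat))
            configs = append(configs, m...)
        }
        if len(configs) == 0 {
            // Matches the log: calico-kubeconfig is ignored, so the loader
            // still reports "no network config found in /etc/cni/net.d".
            fmt.Println("cni config load failed: no network config found in", dir)
            os.Exit(1)
        }
        fmt.Println("network configs:", configs)
    }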
Jan 13 20:46:07.214091 containerd[1563]: time="2025-01-13T20:46:07.214015795Z" level=info msg="shim disconnected" id=fa36d727b8c8aae353063f2937ab2382ae12974d087ac2bd4b862087c6cdb9b3 namespace=k8s.io Jan 13 20:46:07.214206 containerd[1563]: time="2025-01-13T20:46:07.214088934Z" level=warning msg="cleaning up after shim disconnected" id=fa36d727b8c8aae353063f2937ab2382ae12974d087ac2bd4b862087c6cdb9b3 namespace=k8s.io Jan 13 20:46:07.214206 containerd[1563]: time="2025-01-13T20:46:07.214104847Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:46:07.246755 kubelet[2805]: I0113 20:46:07.246730 2805 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:46:07.265530 kubelet[2805]: I0113 20:46:07.265445 2805 topology_manager.go:215] "Topology Admit Handler" podUID="fe5ec2be-450d-4cfd-bd06-ed7ea343b06b" podNamespace="kube-system" podName="coredns-76f75df574-qqtwl" Jan 13 20:46:07.273245 kubelet[2805]: I0113 20:46:07.273070 2805 topology_manager.go:215] "Topology Admit Handler" podUID="25c9c48c-a9ce-4e21-b742-444ac830dfed" podNamespace="calico-system" podName="calico-kube-controllers-dfc47ddd6-rhff2" Jan 13 20:46:07.275893 kubelet[2805]: I0113 20:46:07.275145 2805 topology_manager.go:215] "Topology Admit Handler" podUID="79a4d693-fc20-457c-9d67-0b17b1742b23" podNamespace="calico-apiserver" podName="calico-apiserver-5664ddbb6d-trmvh" Jan 13 20:46:07.275893 kubelet[2805]: I0113 20:46:07.275365 2805 topology_manager.go:215] "Topology Admit Handler" podUID="75c72d4a-2713-4d59-9a08-2119aee5935e" podNamespace="calico-apiserver" podName="calico-apiserver-5664ddbb6d-jj75b" Jan 13 20:46:07.276338 kubelet[2805]: I0113 20:46:07.276320 2805 topology_manager.go:215] "Topology Admit Handler" podUID="0607ab24-d1bd-4a5b-b65b-ec7237434967" podNamespace="kube-system" podName="coredns-76f75df574-5dksv" Jan 13 20:46:07.422033 kubelet[2805]: I0113 20:46:07.421993 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25c9c48c-a9ce-4e21-b742-444ac830dfed-tigera-ca-bundle\") pod \"calico-kube-controllers-dfc47ddd6-rhff2\" (UID: \"25c9c48c-a9ce-4e21-b742-444ac830dfed\") " pod="calico-system/calico-kube-controllers-dfc47ddd6-rhff2" Jan 13 20:46:07.422033 kubelet[2805]: I0113 20:46:07.422041 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe5ec2be-450d-4cfd-bd06-ed7ea343b06b-config-volume\") pod \"coredns-76f75df574-qqtwl\" (UID: \"fe5ec2be-450d-4cfd-bd06-ed7ea343b06b\") " pod="kube-system/coredns-76f75df574-qqtwl" Jan 13 20:46:07.422567 kubelet[2805]: I0113 20:46:07.422060 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8b4k\" (UniqueName: \"kubernetes.io/projected/fe5ec2be-450d-4cfd-bd06-ed7ea343b06b-kube-api-access-m8b4k\") pod \"coredns-76f75df574-qqtwl\" (UID: \"fe5ec2be-450d-4cfd-bd06-ed7ea343b06b\") " pod="kube-system/coredns-76f75df574-qqtwl" Jan 13 20:46:07.422567 kubelet[2805]: I0113 20:46:07.422086 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmchq\" (UniqueName: \"kubernetes.io/projected/75c72d4a-2713-4d59-9a08-2119aee5935e-kube-api-access-rmchq\") pod \"calico-apiserver-5664ddbb6d-jj75b\" (UID: \"75c72d4a-2713-4d59-9a08-2119aee5935e\") " pod="calico-apiserver/calico-apiserver-5664ddbb6d-jj75b" Jan 13 
20:46:07.422567 kubelet[2805]: I0113 20:46:07.422114 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0607ab24-d1bd-4a5b-b65b-ec7237434967-config-volume\") pod \"coredns-76f75df574-5dksv\" (UID: \"0607ab24-d1bd-4a5b-b65b-ec7237434967\") " pod="kube-system/coredns-76f75df574-5dksv" Jan 13 20:46:07.422567 kubelet[2805]: I0113 20:46:07.422269 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdx5w\" (UniqueName: \"kubernetes.io/projected/79a4d693-fc20-457c-9d67-0b17b1742b23-kube-api-access-hdx5w\") pod \"calico-apiserver-5664ddbb6d-trmvh\" (UID: \"79a4d693-fc20-457c-9d67-0b17b1742b23\") " pod="calico-apiserver/calico-apiserver-5664ddbb6d-trmvh" Jan 13 20:46:07.422567 kubelet[2805]: I0113 20:46:07.422308 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/75c72d4a-2713-4d59-9a08-2119aee5935e-calico-apiserver-certs\") pod \"calico-apiserver-5664ddbb6d-jj75b\" (UID: \"75c72d4a-2713-4d59-9a08-2119aee5935e\") " pod="calico-apiserver/calico-apiserver-5664ddbb6d-jj75b" Jan 13 20:46:07.422698 kubelet[2805]: I0113 20:46:07.422327 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/79a4d693-fc20-457c-9d67-0b17b1742b23-calico-apiserver-certs\") pod \"calico-apiserver-5664ddbb6d-trmvh\" (UID: \"79a4d693-fc20-457c-9d67-0b17b1742b23\") " pod="calico-apiserver/calico-apiserver-5664ddbb6d-trmvh" Jan 13 20:46:07.422698 kubelet[2805]: I0113 20:46:07.422371 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkpk7\" (UniqueName: \"kubernetes.io/projected/0607ab24-d1bd-4a5b-b65b-ec7237434967-kube-api-access-xkpk7\") pod \"coredns-76f75df574-5dksv\" (UID: \"0607ab24-d1bd-4a5b-b65b-ec7237434967\") " pod="kube-system/coredns-76f75df574-5dksv" Jan 13 20:46:07.422698 kubelet[2805]: I0113 20:46:07.422432 2805 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6vnx\" (UniqueName: \"kubernetes.io/projected/25c9c48c-a9ce-4e21-b742-444ac830dfed-kube-api-access-j6vnx\") pod \"calico-kube-controllers-dfc47ddd6-rhff2\" (UID: \"25c9c48c-a9ce-4e21-b742-444ac830dfed\") " pod="calico-system/calico-kube-controllers-dfc47ddd6-rhff2" Jan 13 20:46:07.467321 kubelet[2805]: E0113 20:46:07.467243 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:07.469641 containerd[1563]: time="2025-01-13T20:46:07.469597484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 20:46:07.570976 kubelet[2805]: E0113 20:46:07.570914 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:07.571793 containerd[1563]: time="2025-01-13T20:46:07.571717886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qqtwl,Uid:fe5ec2be-450d-4cfd-bd06-ed7ea343b06b,Namespace:kube-system,Attempt:0,}" Jan 13 20:46:07.578036 containerd[1563]: time="2025-01-13T20:46:07.577978139Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-dfc47ddd6-rhff2,Uid:25c9c48c-a9ce-4e21-b742-444ac830dfed,Namespace:calico-system,Attempt:0,}" Jan 13 20:46:07.582374 kubelet[2805]: E0113 20:46:07.582335 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:07.582657 containerd[1563]: time="2025-01-13T20:46:07.582630702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5dksv,Uid:0607ab24-d1bd-4a5b-b65b-ec7237434967,Namespace:kube-system,Attempt:0,}" Jan 13 20:46:07.582931 containerd[1563]: time="2025-01-13T20:46:07.582906955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664ddbb6d-jj75b,Uid:75c72d4a-2713-4d59-9a08-2119aee5935e,Namespace:calico-apiserver,Attempt:0,}" Jan 13 20:46:07.586362 containerd[1563]: time="2025-01-13T20:46:07.586181006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664ddbb6d-trmvh,Uid:79a4d693-fc20-457c-9d67-0b17b1742b23,Namespace:calico-apiserver,Attempt:0,}" Jan 13 20:46:07.687843 containerd[1563]: time="2025-01-13T20:46:07.687797911Z" level=error msg="Failed to destroy network for sandbox \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:07.688564 containerd[1563]: time="2025-01-13T20:46:07.688544334Z" level=error msg="encountered an error cleaning up failed sandbox \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:07.688705 containerd[1563]: time="2025-01-13T20:46:07.688662304Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dfc47ddd6-rhff2,Uid:25c9c48c-a9ce-4e21-b742-444ac830dfed,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:07.689072 kubelet[2805]: E0113 20:46:07.689021 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:07.689072 kubelet[2805]: E0113 20:46:07.689085 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dfc47ddd6-rhff2" Jan 13 20:46:07.689367 kubelet[2805]: E0113 20:46:07.689109 2805 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dfc47ddd6-rhff2" Jan 13 20:46:07.689367 kubelet[2805]: E0113 20:46:07.689171 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-dfc47ddd6-rhff2_calico-system(25c9c48c-a9ce-4e21-b742-444ac830dfed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-dfc47ddd6-rhff2_calico-system(25c9c48c-a9ce-4e21-b742-444ac830dfed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dfc47ddd6-rhff2" podUID="25c9c48c-a9ce-4e21-b742-444ac830dfed" Jan 13 20:46:07.721626 containerd[1563]: time="2025-01-13T20:46:07.721499024Z" level=error msg="Failed to destroy network for sandbox \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:07.722357 containerd[1563]: time="2025-01-13T20:46:07.722104750Z" level=error msg="encountered an error cleaning up failed sandbox \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:07.722357 containerd[1563]: time="2025-01-13T20:46:07.722171105Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qqtwl,Uid:fe5ec2be-450d-4cfd-bd06-ed7ea343b06b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:07.722548 kubelet[2805]: E0113 20:46:07.722434 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:07.722548 kubelet[2805]: E0113 20:46:07.722509 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-76f75df574-qqtwl" Jan 13 20:46:07.722548 kubelet[2805]: E0113 20:46:07.722535 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qqtwl" Jan 13 20:46:07.722871 kubelet[2805]: E0113 20:46:07.722853 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-qqtwl_kube-system(fe5ec2be-450d-4cfd-bd06-ed7ea343b06b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-qqtwl_kube-system(fe5ec2be-450d-4cfd-bd06-ed7ea343b06b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qqtwl" podUID="fe5ec2be-450d-4cfd-bd06-ed7ea343b06b" Jan 13 20:46:07.726322 containerd[1563]: time="2025-01-13T20:46:07.726173912Z" level=error msg="Failed to destroy network for sandbox \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:07.726787 containerd[1563]: time="2025-01-13T20:46:07.726760138Z" level=error msg="encountered an error cleaning up failed sandbox \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:07.726931 containerd[1563]: time="2025-01-13T20:46:07.726898851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664ddbb6d-trmvh,Uid:79a4d693-fc20-457c-9d67-0b17b1742b23,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:07.727419 kubelet[2805]: E0113 20:46:07.727301 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:07.727419 kubelet[2805]: E0113 20:46:07.727367 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5664ddbb6d-trmvh" Jan 13 20:46:07.727419 kubelet[2805]: E0113 20:46:07.727402 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5664ddbb6d-trmvh" Jan 13 20:46:07.727569 kubelet[2805]: E0113 20:46:07.727480 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5664ddbb6d-trmvh_calico-apiserver(79a4d693-fc20-457c-9d67-0b17b1742b23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5664ddbb6d-trmvh_calico-apiserver(79a4d693-fc20-457c-9d67-0b17b1742b23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5664ddbb6d-trmvh" podUID="79a4d693-fc20-457c-9d67-0b17b1742b23" Jan 13 20:46:07.729464 containerd[1563]: time="2025-01-13T20:46:07.729416227Z" level=error msg="Failed to destroy network for sandbox \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:07.729930 containerd[1563]: time="2025-01-13T20:46:07.729786583Z" level=error msg="encountered an error cleaning up failed sandbox \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:07.729930 containerd[1563]: time="2025-01-13T20:46:07.729843760Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5dksv,Uid:0607ab24-d1bd-4a5b-b65b-ec7237434967,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:07.730210 kubelet[2805]: E0113 20:46:07.730173 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:07.730290 kubelet[2805]: E0113 20:46:07.730238 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-5dksv" Jan 13 20:46:07.730290 kubelet[2805]: E0113 20:46:07.730266 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-5dksv" Jan 13 20:46:07.730523 kubelet[2805]: E0113 20:46:07.730343 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-5dksv_kube-system(0607ab24-d1bd-4a5b-b65b-ec7237434967)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-5dksv_kube-system(0607ab24-d1bd-4a5b-b65b-ec7237434967)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-5dksv" podUID="0607ab24-d1bd-4a5b-b65b-ec7237434967" Jan 13 20:46:07.732336 containerd[1563]: time="2025-01-13T20:46:07.732304290Z" level=error msg="Failed to destroy network for sandbox \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:07.732754 containerd[1563]: time="2025-01-13T20:46:07.732719909Z" level=error msg="encountered an error cleaning up failed sandbox \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:07.732814 containerd[1563]: time="2025-01-13T20:46:07.732772255Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664ddbb6d-jj75b,Uid:75c72d4a-2713-4d59-9a08-2119aee5935e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:07.732984 kubelet[2805]: E0113 20:46:07.732962 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:07.733061 kubelet[2805]: E0113 20:46:07.732999 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5664ddbb6d-jj75b" Jan 13 20:46:07.733061 kubelet[2805]: E0113 20:46:07.733017 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5664ddbb6d-jj75b" Jan 13 20:46:07.733120 kubelet[2805]: E0113 20:46:07.733068 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5664ddbb6d-jj75b_calico-apiserver(75c72d4a-2713-4d59-9a08-2119aee5935e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5664ddbb6d-jj75b_calico-apiserver(75c72d4a-2713-4d59-9a08-2119aee5935e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5664ddbb6d-jj75b" podUID="75c72d4a-2713-4d59-9a08-2119aee5935e" Jan 13 20:46:08.212685 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6-shm.mount: Deactivated successfully. Jan 13 20:46:08.385841 containerd[1563]: time="2025-01-13T20:46:08.385801801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2wkvn,Uid:31e31ef7-4073-4fa6-8a57-6102fb32a16b,Namespace:calico-system,Attempt:0,}" Jan 13 20:46:08.469915 kubelet[2805]: I0113 20:46:08.469782 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b" Jan 13 20:46:08.473668 containerd[1563]: time="2025-01-13T20:46:08.473611152Z" level=info msg="StopPodSandbox for \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\"" Jan 13 20:46:08.473830 containerd[1563]: time="2025-01-13T20:46:08.473815749Z" level=info msg="Ensure that sandbox 08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b in task-service has been cleanup successfully" Jan 13 20:46:08.474146 containerd[1563]: time="2025-01-13T20:46:08.474113847Z" level=info msg="TearDown network for sandbox \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\" successfully" Jan 13 20:46:08.474146 containerd[1563]: time="2025-01-13T20:46:08.474130741Z" level=info msg="StopPodSandbox for \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\" returns successfully" Jan 13 20:46:08.477016 systemd[1]: run-netns-cni\x2d1716363e\x2d5aeb\x2d4bd6\x2daaf5\x2d8950e866615d.mount: Deactivated successfully. 
Jan 13 20:46:08.486276 containerd[1563]: time="2025-01-13T20:46:08.485920873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664ddbb6d-jj75b,Uid:75c72d4a-2713-4d59-9a08-2119aee5935e,Namespace:calico-apiserver,Attempt:1,}" Jan 13 20:46:08.486544 kubelet[2805]: I0113 20:46:08.486496 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2" Jan 13 20:46:08.487563 containerd[1563]: time="2025-01-13T20:46:08.487520082Z" level=info msg="StopPodSandbox for \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\"" Jan 13 20:46:08.487766 containerd[1563]: time="2025-01-13T20:46:08.487739959Z" level=info msg="Ensure that sandbox 236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2 in task-service has been cleanup successfully" Jan 13 20:46:08.487989 kubelet[2805]: I0113 20:46:08.487963 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd" Jan 13 20:46:08.490256 containerd[1563]: time="2025-01-13T20:46:08.490200733Z" level=info msg="StopPodSandbox for \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\"" Jan 13 20:46:08.490296 systemd[1]: run-netns-cni\x2d05eefd6c\x2d793d\x2d7bce\x2d3693\x2df6e216ae2d55.mount: Deactivated successfully. Jan 13 20:46:08.490446 containerd[1563]: time="2025-01-13T20:46:08.490429088Z" level=info msg="Ensure that sandbox 76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd in task-service has been cleanup successfully" Jan 13 20:46:08.490477 containerd[1563]: time="2025-01-13T20:46:08.490456765Z" level=info msg="TearDown network for sandbox \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\" successfully" Jan 13 20:46:08.490501 containerd[1563]: time="2025-01-13T20:46:08.490473719Z" level=info msg="StopPodSandbox for \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\" returns successfully" Jan 13 20:46:08.490859 containerd[1563]: time="2025-01-13T20:46:08.490803241Z" level=info msg="TearDown network for sandbox \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\" successfully" Jan 13 20:46:08.490859 containerd[1563]: time="2025-01-13T20:46:08.490823392Z" level=info msg="StopPodSandbox for \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\" returns successfully" Jan 13 20:46:08.490952 kubelet[2805]: E0113 20:46:08.490867 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:08.491175 kubelet[2805]: I0113 20:46:08.491144 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6" Jan 13 20:46:08.491274 containerd[1563]: time="2025-01-13T20:46:08.491225340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dfc47ddd6-rhff2,Uid:25c9c48c-a9ce-4e21-b742-444ac830dfed,Namespace:calico-system,Attempt:1,}" Jan 13 20:46:08.491306 containerd[1563]: time="2025-01-13T20:46:08.491269912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5dksv,Uid:0607ab24-d1bd-4a5b-b65b-ec7237434967,Namespace:kube-system,Attempt:1,}" Jan 13 20:46:08.492200 containerd[1563]: time="2025-01-13T20:46:08.492173372Z" level=info msg="StopPodSandbox for 
\"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\"" Jan 13 20:46:08.492351 containerd[1563]: time="2025-01-13T20:46:08.492327155Z" level=info msg="Ensure that sandbox 03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6 in task-service has been cleanup successfully" Jan 13 20:46:08.492467 kubelet[2805]: I0113 20:46:08.492449 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6" Jan 13 20:46:08.492830 containerd[1563]: time="2025-01-13T20:46:08.492807103Z" level=info msg="StopPodSandbox for \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\"" Jan 13 20:46:08.493170 containerd[1563]: time="2025-01-13T20:46:08.492913902Z" level=info msg="TearDown network for sandbox \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\" successfully" Jan 13 20:46:08.493170 containerd[1563]: time="2025-01-13T20:46:08.492932770Z" level=info msg="StopPodSandbox for \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\" returns successfully" Jan 13 20:46:08.493170 containerd[1563]: time="2025-01-13T20:46:08.493042834Z" level=info msg="Ensure that sandbox b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6 in task-service has been cleanup successfully" Jan 13 20:46:08.493268 kubelet[2805]: E0113 20:46:08.493074 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:08.493298 containerd[1563]: time="2025-01-13T20:46:08.493263895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qqtwl,Uid:fe5ec2be-450d-4cfd-bd06-ed7ea343b06b,Namespace:kube-system,Attempt:1,}" Jan 13 20:46:08.493765 containerd[1563]: time="2025-01-13T20:46:08.493369841Z" level=info msg="TearDown network for sandbox \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\" successfully" Jan 13 20:46:08.493765 containerd[1563]: time="2025-01-13T20:46:08.493761308Z" level=info msg="StopPodSandbox for \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\" returns successfully" Jan 13 20:46:08.494160 containerd[1563]: time="2025-01-13T20:46:08.494138988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664ddbb6d-trmvh,Uid:79a4d693-fc20-457c-9d67-0b17b1742b23,Namespace:calico-apiserver,Attempt:1,}" Jan 13 20:46:08.494827 systemd[1]: run-netns-cni\x2d2913c9de\x2da486\x2deeb2\x2dda6e\x2d1cdf37774285.mount: Deactivated successfully. Jan 13 20:46:08.495011 systemd[1]: run-netns-cni\x2d878b77f6\x2d4bb2\x2d29ba\x2d60d5\x2d45ee764dc41b.mount: Deactivated successfully. Jan 13 20:46:08.497886 systemd[1]: run-netns-cni\x2d64b69694\x2ddd86\x2d5fbc\x2dd64b\x2dd60bc9ed21be.mount: Deactivated successfully. 
Jan 13 20:46:09.195493 containerd[1563]: time="2025-01-13T20:46:09.195438640Z" level=error msg="Failed to destroy network for sandbox \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.196133 containerd[1563]: time="2025-01-13T20:46:09.196105588Z" level=error msg="encountered an error cleaning up failed sandbox \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.196268 containerd[1563]: time="2025-01-13T20:46:09.196243198Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qqtwl,Uid:fe5ec2be-450d-4cfd-bd06-ed7ea343b06b,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.198470 kubelet[2805]: E0113 20:46:09.196661 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.198470 kubelet[2805]: E0113 20:46:09.196739 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qqtwl" Jan 13 20:46:09.198470 kubelet[2805]: E0113 20:46:09.196767 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qqtwl" Jan 13 20:46:09.198615 kubelet[2805]: E0113 20:46:09.196832 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-qqtwl_kube-system(fe5ec2be-450d-4cfd-bd06-ed7ea343b06b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-qqtwl_kube-system(fe5ec2be-450d-4cfd-bd06-ed7ea343b06b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-76f75df574-qqtwl" podUID="fe5ec2be-450d-4cfd-bd06-ed7ea343b06b" Jan 13 20:46:09.198683 containerd[1563]: time="2025-01-13T20:46:09.198523798Z" level=error msg="Failed to destroy network for sandbox \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.199931 containerd[1563]: time="2025-01-13T20:46:09.199905750Z" level=error msg="encountered an error cleaning up failed sandbox \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.200510 containerd[1563]: time="2025-01-13T20:46:09.200388783Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664ddbb6d-jj75b,Uid:75c72d4a-2713-4d59-9a08-2119aee5935e,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.201682 kubelet[2805]: E0113 20:46:09.201600 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.201682 kubelet[2805]: E0113 20:46:09.201667 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5664ddbb6d-jj75b" Jan 13 20:46:09.201805 kubelet[2805]: E0113 20:46:09.201696 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5664ddbb6d-jj75b" Jan 13 20:46:09.201805 kubelet[2805]: E0113 20:46:09.201761 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5664ddbb6d-jj75b_calico-apiserver(75c72d4a-2713-4d59-9a08-2119aee5935e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5664ddbb6d-jj75b_calico-apiserver(75c72d4a-2713-4d59-9a08-2119aee5935e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5664ddbb6d-jj75b" podUID="75c72d4a-2713-4d59-9a08-2119aee5935e" Jan 13 20:46:09.203460 containerd[1563]: time="2025-01-13T20:46:09.203420682Z" level=error msg="Failed to destroy network for sandbox \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.204460 containerd[1563]: time="2025-01-13T20:46:09.204425467Z" level=error msg="encountered an error cleaning up failed sandbox \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.204620 containerd[1563]: time="2025-01-13T20:46:09.204592937Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2wkvn,Uid:31e31ef7-4073-4fa6-8a57-6102fb32a16b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.205247 kubelet[2805]: E0113 20:46:09.205217 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.205308 kubelet[2805]: E0113 20:46:09.205297 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2wkvn" Jan 13 20:46:09.205351 kubelet[2805]: E0113 20:46:09.205320 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2wkvn" Jan 13 20:46:09.205436 kubelet[2805]: E0113 20:46:09.205424 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2wkvn_calico-system(31e31ef7-4073-4fa6-8a57-6102fb32a16b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2wkvn_calico-system(31e31ef7-4073-4fa6-8a57-6102fb32a16b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wkvn" podUID="31e31ef7-4073-4fa6-8a57-6102fb32a16b" Jan 13 20:46:09.214672 containerd[1563]: time="2025-01-13T20:46:09.214623929Z" level=error msg="Failed to destroy network for sandbox \"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.215127 containerd[1563]: time="2025-01-13T20:46:09.215104477Z" level=error msg="encountered an error cleaning up failed sandbox \"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.215251 containerd[1563]: time="2025-01-13T20:46:09.215170711Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664ddbb6d-trmvh,Uid:79a4d693-fc20-457c-9d67-0b17b1742b23,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.215627 kubelet[2805]: E0113 20:46:09.215596 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.215756 kubelet[2805]: E0113 20:46:09.215740 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5664ddbb6d-trmvh" Jan 13 20:46:09.215800 kubelet[2805]: E0113 20:46:09.215773 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5664ddbb6d-trmvh" Jan 13 20:46:09.216023 kubelet[2805]: E0113 20:46:09.216003 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5664ddbb6d-trmvh_calico-apiserver(79a4d693-fc20-457c-9d67-0b17b1742b23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-5664ddbb6d-trmvh_calico-apiserver(79a4d693-fc20-457c-9d67-0b17b1742b23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5664ddbb6d-trmvh" podUID="79a4d693-fc20-457c-9d67-0b17b1742b23" Jan 13 20:46:09.219778 containerd[1563]: time="2025-01-13T20:46:09.219711330Z" level=error msg="Failed to destroy network for sandbox \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.220165 containerd[1563]: time="2025-01-13T20:46:09.220137678Z" level=error msg="encountered an error cleaning up failed sandbox \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.220234 containerd[1563]: time="2025-01-13T20:46:09.220189313Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5dksv,Uid:0607ab24-d1bd-4a5b-b65b-ec7237434967,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.220530 kubelet[2805]: E0113 20:46:09.220508 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.220590 kubelet[2805]: E0113 20:46:09.220569 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-5dksv" Jan 13 20:46:09.220621 kubelet[2805]: E0113 20:46:09.220596 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-5dksv" Jan 13 20:46:09.220660 kubelet[2805]: E0113 20:46:09.220648 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-76f75df574-5dksv_kube-system(0607ab24-d1bd-4a5b-b65b-ec7237434967)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-5dksv_kube-system(0607ab24-d1bd-4a5b-b65b-ec7237434967)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-5dksv" podUID="0607ab24-d1bd-4a5b-b65b-ec7237434967" Jan 13 20:46:09.222185 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5-shm.mount: Deactivated successfully. Jan 13 20:46:09.225216 containerd[1563]: time="2025-01-13T20:46:09.225169456Z" level=error msg="Failed to destroy network for sandbox \"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.225667 containerd[1563]: time="2025-01-13T20:46:09.225633421Z" level=error msg="encountered an error cleaning up failed sandbox \"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.225729 containerd[1563]: time="2025-01-13T20:46:09.225702010Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dfc47ddd6-rhff2,Uid:25c9c48c-a9ce-4e21-b742-444ac830dfed,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.225845 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246-shm.mount: Deactivated successfully. 
Jan 13 20:46:09.226008 kubelet[2805]: E0113 20:46:09.225985 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.226099 kubelet[2805]: E0113 20:46:09.226081 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dfc47ddd6-rhff2" Jan 13 20:46:09.226140 kubelet[2805]: E0113 20:46:09.226109 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dfc47ddd6-rhff2" Jan 13 20:46:09.226187 kubelet[2805]: E0113 20:46:09.226160 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-dfc47ddd6-rhff2_calico-system(25c9c48c-a9ce-4e21-b742-444ac830dfed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-dfc47ddd6-rhff2_calico-system(25c9c48c-a9ce-4e21-b742-444ac830dfed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dfc47ddd6-rhff2" podUID="25c9c48c-a9ce-4e21-b742-444ac830dfed" Jan 13 20:46:09.229604 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c-shm.mount: Deactivated successfully. 
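Because /var/lib/calico/nodename is (re)written by calico-node's startup logic, the usual fix is to get the agent healthy rather than to create the file by hand. A sketch, again assuming the calico-system namespace from these logs:

kubectl -n calico-system rollout restart daemonset/calico-node
kubectl -n calico-system rollout status daemonset/calico-node
stat /var/lib/calico/nodename   # should appear once the local agent pod is Ready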
Jan 13 20:46:09.495603 kubelet[2805]: I0113 20:46:09.495433 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d" Jan 13 20:46:09.496070 containerd[1563]: time="2025-01-13T20:46:09.496038182Z" level=info msg="StopPodSandbox for \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\"" Jan 13 20:46:09.498290 containerd[1563]: time="2025-01-13T20:46:09.496268761Z" level=info msg="Ensure that sandbox 8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d in task-service has been cleanup successfully" Jan 13 20:46:09.498290 containerd[1563]: time="2025-01-13T20:46:09.496616889Z" level=info msg="TearDown network for sandbox \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\" successfully" Jan 13 20:46:09.498290 containerd[1563]: time="2025-01-13T20:46:09.496634415Z" level=info msg="StopPodSandbox for \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\" returns successfully" Jan 13 20:46:09.498290 containerd[1563]: time="2025-01-13T20:46:09.497605861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2wkvn,Uid:31e31ef7-4073-4fa6-8a57-6102fb32a16b,Namespace:calico-system,Attempt:1,}" Jan 13 20:46:09.498290 containerd[1563]: time="2025-01-13T20:46:09.497838315Z" level=info msg="StopPodSandbox for \"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\"" Jan 13 20:46:09.498290 containerd[1563]: time="2025-01-13T20:46:09.498033732Z" level=info msg="Ensure that sandbox bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5 in task-service has been cleanup successfully" Jan 13 20:46:09.498504 kubelet[2805]: I0113 20:46:09.496905 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5" Jan 13 20:46:09.498618 containerd[1563]: time="2025-01-13T20:46:09.498592659Z" level=info msg="TearDown network for sandbox \"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\" successfully" Jan 13 20:46:09.498698 containerd[1563]: time="2025-01-13T20:46:09.498679246Z" level=info msg="StopPodSandbox for \"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\" returns successfully" Jan 13 20:46:09.499215 containerd[1563]: time="2025-01-13T20:46:09.499194514Z" level=info msg="StopPodSandbox for \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\"" Jan 13 20:46:09.499551 containerd[1563]: time="2025-01-13T20:46:09.499435775Z" level=info msg="TearDown network for sandbox \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\" successfully" Jan 13 20:46:09.499551 containerd[1563]: time="2025-01-13T20:46:09.499451186Z" level=info msg="StopPodSandbox for \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\" returns successfully" Jan 13 20:46:09.500325 systemd[1]: run-netns-cni\x2d82a0be0e\x2de0a4\x2d24ce\x2de8bd\x2df4b453936882.mount: Deactivated successfully. 
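Each failed sandbox leaves behind a pause-container shm mount and a CNI network namespace, which is why systemd keeps reporting run-containerd-...-shm.mount and run-netns-cni-... units as "Deactivated successfully" after every cleanup pass. If containerd ever failed to reap them, the leftovers would be visible on the node (hypothetical inspection commands, not from this log):

mount | grep -E 'run/netns|sandboxes.*shm'
ip netns list   # CNI namespaces live under /var/run/netns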
Jan 13 20:46:09.501215 kubelet[2805]: I0113 20:46:09.501008 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf" Jan 13 20:46:09.502624 containerd[1563]: time="2025-01-13T20:46:09.502169106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664ddbb6d-trmvh,Uid:79a4d693-fc20-457c-9d67-0b17b1742b23,Namespace:calico-apiserver,Attempt:2,}" Jan 13 20:46:09.502931 containerd[1563]: time="2025-01-13T20:46:09.502904714Z" level=info msg="StopPodSandbox for \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\"" Jan 13 20:46:09.503111 containerd[1563]: time="2025-01-13T20:46:09.503089129Z" level=info msg="Ensure that sandbox 136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf in task-service has been cleanup successfully" Jan 13 20:46:09.503314 containerd[1563]: time="2025-01-13T20:46:09.503293074Z" level=info msg="TearDown network for sandbox \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\" successfully" Jan 13 20:46:09.503520 containerd[1563]: time="2025-01-13T20:46:09.503453280Z" level=info msg="StopPodSandbox for \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\" returns successfully" Jan 13 20:46:09.503947 containerd[1563]: time="2025-01-13T20:46:09.503923347Z" level=info msg="StopPodSandbox for \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\"" Jan 13 20:46:09.504034 containerd[1563]: time="2025-01-13T20:46:09.504016135Z" level=info msg="TearDown network for sandbox \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\" successfully" Jan 13 20:46:09.504075 containerd[1563]: time="2025-01-13T20:46:09.504032769Z" level=info msg="StopPodSandbox for \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\" returns successfully" Jan 13 20:46:09.504349 kubelet[2805]: I0113 20:46:09.504329 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246" Jan 13 20:46:09.505424 containerd[1563]: time="2025-01-13T20:46:09.504867257Z" level=info msg="StopPodSandbox for \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\"" Jan 13 20:46:09.505424 containerd[1563]: time="2025-01-13T20:46:09.505045871Z" level=info msg="Ensure that sandbox a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246 in task-service has been cleanup successfully" Jan 13 20:46:09.505424 containerd[1563]: time="2025-01-13T20:46:09.505253733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664ddbb6d-jj75b,Uid:75c72d4a-2713-4d59-9a08-2119aee5935e,Namespace:calico-apiserver,Attempt:2,}" Jan 13 20:46:09.505675 containerd[1563]: time="2025-01-13T20:46:09.505651183Z" level=info msg="TearDown network for sandbox \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\" successfully" Jan 13 20:46:09.505675 containerd[1563]: time="2025-01-13T20:46:09.505672225Z" level=info msg="StopPodSandbox for \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\" returns successfully" Jan 13 20:46:09.505932 containerd[1563]: time="2025-01-13T20:46:09.505910971Z" level=info msg="StopPodSandbox for \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\"" Jan 13 20:46:09.506021 containerd[1563]: time="2025-01-13T20:46:09.506002277Z" level=info msg="TearDown network for sandbox \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\" 
successfully" Jan 13 20:46:09.506021 containerd[1563]: time="2025-01-13T20:46:09.506016716Z" level=info msg="StopPodSandbox for \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\" returns successfully" Jan 13 20:46:09.506668 kubelet[2805]: E0113 20:46:09.506244 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:09.506756 containerd[1563]: time="2025-01-13T20:46:09.506430368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5dksv,Uid:0607ab24-d1bd-4a5b-b65b-ec7237434967,Namespace:kube-system,Attempt:2,}" Jan 13 20:46:09.509798 kubelet[2805]: I0113 20:46:09.508755 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e" Jan 13 20:46:09.509922 containerd[1563]: time="2025-01-13T20:46:09.509311881Z" level=info msg="StopPodSandbox for \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\"" Jan 13 20:46:09.509922 containerd[1563]: time="2025-01-13T20:46:09.509703197Z" level=info msg="Ensure that sandbox 62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e in task-service has been cleanup successfully" Jan 13 20:46:09.509922 containerd[1563]: time="2025-01-13T20:46:09.509914678Z" level=info msg="TearDown network for sandbox \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\" successfully" Jan 13 20:46:09.510046 containerd[1563]: time="2025-01-13T20:46:09.509930700Z" level=info msg="StopPodSandbox for \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\" returns successfully" Jan 13 20:46:09.510641 containerd[1563]: time="2025-01-13T20:46:09.510612728Z" level=info msg="StopPodSandbox for \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\"" Jan 13 20:46:09.511064 containerd[1563]: time="2025-01-13T20:46:09.510959704Z" level=info msg="TearDown network for sandbox \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\" successfully" Jan 13 20:46:09.511064 containerd[1563]: time="2025-01-13T20:46:09.511011249Z" level=info msg="StopPodSandbox for \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\" returns successfully" Jan 13 20:46:09.511502 kubelet[2805]: E0113 20:46:09.511321 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:09.511731 containerd[1563]: time="2025-01-13T20:46:09.511578172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qqtwl,Uid:fe5ec2be-450d-4cfd-bd06-ed7ea343b06b,Namespace:kube-system,Attempt:2,}" Jan 13 20:46:09.512188 kubelet[2805]: I0113 20:46:09.511865 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c" Jan 13 20:46:09.512283 containerd[1563]: time="2025-01-13T20:46:09.512259348Z" level=info msg="StopPodSandbox for \"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\"" Jan 13 20:46:09.512467 containerd[1563]: time="2025-01-13T20:46:09.512448945Z" level=info msg="Ensure that sandbox 607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c in task-service has been cleanup successfully" Jan 13 20:46:09.512710 containerd[1563]: time="2025-01-13T20:46:09.512689143Z" level=info msg="TearDown network for sandbox 
\"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\" successfully" Jan 13 20:46:09.512710 containerd[1563]: time="2025-01-13T20:46:09.512707861Z" level=info msg="StopPodSandbox for \"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\" returns successfully" Jan 13 20:46:09.512974 containerd[1563]: time="2025-01-13T20:46:09.512941426Z" level=info msg="StopPodSandbox for \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\"" Jan 13 20:46:09.513046 containerd[1563]: time="2025-01-13T20:46:09.513015347Z" level=info msg="TearDown network for sandbox \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\" successfully" Jan 13 20:46:09.513046 containerd[1563]: time="2025-01-13T20:46:09.513043123Z" level=info msg="StopPodSandbox for \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\" returns successfully" Jan 13 20:46:09.513397 containerd[1563]: time="2025-01-13T20:46:09.513355709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dfc47ddd6-rhff2,Uid:25c9c48c-a9ce-4e21-b742-444ac830dfed,Namespace:calico-system,Attempt:2,}" Jan 13 20:46:09.636757 containerd[1563]: time="2025-01-13T20:46:09.636209152Z" level=error msg="Failed to destroy network for sandbox \"4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.637283 containerd[1563]: time="2025-01-13T20:46:09.637259920Z" level=error msg="encountered an error cleaning up failed sandbox \"4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.637416 containerd[1563]: time="2025-01-13T20:46:09.637392821Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2wkvn,Uid:31e31ef7-4073-4fa6-8a57-6102fb32a16b,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.639005 kubelet[2805]: E0113 20:46:09.638880 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.639186 kubelet[2805]: E0113 20:46:09.639085 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2wkvn" Jan 13 20:46:09.639186 kubelet[2805]: E0113 20:46:09.639118 2805 kuberuntime_manager.go:1172] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2wkvn" Jan 13 20:46:09.640422 kubelet[2805]: E0113 20:46:09.639303 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2wkvn_calico-system(31e31ef7-4073-4fa6-8a57-6102fb32a16b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2wkvn_calico-system(31e31ef7-4073-4fa6-8a57-6102fb32a16b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wkvn" podUID="31e31ef7-4073-4fa6-8a57-6102fb32a16b" Jan 13 20:46:09.650301 containerd[1563]: time="2025-01-13T20:46:09.650151963Z" level=error msg="Failed to destroy network for sandbox \"625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.650785 containerd[1563]: time="2025-01-13T20:46:09.650758367Z" level=error msg="encountered an error cleaning up failed sandbox \"625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.650901 containerd[1563]: time="2025-01-13T20:46:09.650879623Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664ddbb6d-trmvh,Uid:79a4d693-fc20-457c-9d67-0b17b1742b23,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.651315 kubelet[2805]: E0113 20:46:09.651279 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.651440 kubelet[2805]: E0113 20:46:09.651359 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5664ddbb6d-trmvh" Jan 13 
20:46:09.651480 kubelet[2805]: E0113 20:46:09.651446 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5664ddbb6d-trmvh" Jan 13 20:46:09.651532 kubelet[2805]: E0113 20:46:09.651518 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5664ddbb6d-trmvh_calico-apiserver(79a4d693-fc20-457c-9d67-0b17b1742b23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5664ddbb6d-trmvh_calico-apiserver(79a4d693-fc20-457c-9d67-0b17b1742b23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5664ddbb6d-trmvh" podUID="79a4d693-fc20-457c-9d67-0b17b1742b23" Jan 13 20:46:09.664614 containerd[1563]: time="2025-01-13T20:46:09.664566955Z" level=error msg="Failed to destroy network for sandbox \"b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.665897 containerd[1563]: time="2025-01-13T20:46:09.665050388Z" level=error msg="encountered an error cleaning up failed sandbox \"b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.665897 containerd[1563]: time="2025-01-13T20:46:09.665104439Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5dksv,Uid:0607ab24-d1bd-4a5b-b65b-ec7237434967,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.666019 kubelet[2805]: E0113 20:46:09.665365 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.666019 kubelet[2805]: E0113 20:46:09.665439 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-5dksv" Jan 13 20:46:09.666019 kubelet[2805]: E0113 20:46:09.665466 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-5dksv" Jan 13 20:46:09.666191 containerd[1563]: time="2025-01-13T20:46:09.665951211Z" level=error msg="Failed to destroy network for sandbox \"77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.666222 kubelet[2805]: E0113 20:46:09.665519 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-5dksv_kube-system(0607ab24-d1bd-4a5b-b65b-ec7237434967)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-5dksv_kube-system(0607ab24-d1bd-4a5b-b65b-ec7237434967)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-5dksv" podUID="0607ab24-d1bd-4a5b-b65b-ec7237434967" Jan 13 20:46:09.667759 containerd[1563]: time="2025-01-13T20:46:09.667693426Z" level=error msg="encountered an error cleaning up failed sandbox \"77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.667759 containerd[1563]: time="2025-01-13T20:46:09.667752938Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664ddbb6d-jj75b,Uid:75c72d4a-2713-4d59-9a08-2119aee5935e,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.667978 containerd[1563]: time="2025-01-13T20:46:09.667704258Z" level=error msg="Failed to destroy network for sandbox \"e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.668120 kubelet[2805]: E0113 20:46:09.668048 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.668120 kubelet[2805]: E0113 20:46:09.668118 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5664ddbb6d-jj75b" Jan 13 20:46:09.668205 kubelet[2805]: E0113 20:46:09.668140 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5664ddbb6d-jj75b" Jan 13 20:46:09.668205 kubelet[2805]: E0113 20:46:09.668196 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5664ddbb6d-jj75b_calico-apiserver(75c72d4a-2713-4d59-9a08-2119aee5935e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5664ddbb6d-jj75b_calico-apiserver(75c72d4a-2713-4d59-9a08-2119aee5935e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5664ddbb6d-jj75b" podUID="75c72d4a-2713-4d59-9a08-2119aee5935e" Jan 13 20:46:09.668535 containerd[1563]: time="2025-01-13T20:46:09.668511020Z" level=error msg="encountered an error cleaning up failed sandbox \"e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.668634 containerd[1563]: time="2025-01-13T20:46:09.668616926Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dfc47ddd6-rhff2,Uid:25c9c48c-a9ce-4e21-b742-444ac830dfed,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.668824 kubelet[2805]: E0113 20:46:09.668805 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.668887 kubelet[2805]: E0113 20:46:09.668859 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dfc47ddd6-rhff2" Jan 13 20:46:09.668887 kubelet[2805]: E0113 20:46:09.668885 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dfc47ddd6-rhff2" Jan 13 20:46:09.668942 kubelet[2805]: E0113 20:46:09.668917 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-dfc47ddd6-rhff2_calico-system(25c9c48c-a9ce-4e21-b742-444ac830dfed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-dfc47ddd6-rhff2_calico-system(25c9c48c-a9ce-4e21-b742-444ac830dfed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dfc47ddd6-rhff2" podUID="25c9c48c-a9ce-4e21-b742-444ac830dfed" Jan 13 20:46:09.677486 containerd[1563]: time="2025-01-13T20:46:09.677430169Z" level=error msg="Failed to destroy network for sandbox \"4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.677869 containerd[1563]: time="2025-01-13T20:46:09.677844672Z" level=error msg="encountered an error cleaning up failed sandbox \"4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.677932 containerd[1563]: time="2025-01-13T20:46:09.677913071Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qqtwl,Uid:fe5ec2be-450d-4cfd-bd06-ed7ea343b06b,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.678261 kubelet[2805]: E0113 20:46:09.678240 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:09.678307 
kubelet[2805]: E0113 20:46:09.678293 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qqtwl" Jan 13 20:46:09.678349 kubelet[2805]: E0113 20:46:09.678315 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qqtwl" Jan 13 20:46:09.678393 kubelet[2805]: E0113 20:46:09.678367 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-qqtwl_kube-system(fe5ec2be-450d-4cfd-bd06-ed7ea343b06b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-qqtwl_kube-system(fe5ec2be-450d-4cfd-bd06-ed7ea343b06b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qqtwl" podUID="fe5ec2be-450d-4cfd-bd06-ed7ea343b06b" Jan 13 20:46:10.212075 systemd[1]: run-netns-cni\x2df3b62c19\x2d0f51\x2d7aa3\x2d08d9\x2d02eb7e5b64d7.mount: Deactivated successfully. Jan 13 20:46:10.212279 systemd[1]: run-netns-cni\x2d4c0cfa13\x2d7184\x2dba2c\x2d3996\x2d0b585ea3f770.mount: Deactivated successfully. Jan 13 20:46:10.212442 systemd[1]: run-netns-cni\x2d4c596e24\x2db7b8\x2d1c3d\x2d609f\x2dedf3cee645ca.mount: Deactivated successfully. Jan 13 20:46:10.212608 systemd[1]: run-netns-cni\x2d40cc9154\x2d9a0d\x2d4063\x2ddd94\x2d59dcd39d6770.mount: Deactivated successfully. Jan 13 20:46:10.212746 systemd[1]: run-netns-cni\x2dec6c08be\x2d617f\x2da758\x2d5eab\x2d3971a891afe0.mount: Deactivated successfully. 
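Every sandbox failure in the burst above shares one root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename, the file the calico/node container writes at startup to record this node's name. Because both the add and the delete paths resolve the node name first, cleanup of a failed sandbox fails the same way, which is why containerd marks each sandbox SANDBOX_UNKNOWN and systemd is left to unmount the orphaned /run/netns/cni-* namespaces. Below is a minimal Go sketch of that check, assuming a simplified reading of the plugin's behaviour implied by the logged error string (this is not the actual projectcalico source):

```go
// Minimal sketch (assumption: simplified, not the real cni-plugin code) of
// the check behind the repeated "stat /var/lib/calico/nodename" failures:
// the plugin resolves the node name from a file that calico/node writes
// once running, and refuses to add or delete pod networks until it exists.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

// determineNodename stats the file first, so a missing file yields the
// familiar "stat ...: no such file or directory", then reads the name.
func determineNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := determineNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, "cni add/delete would fail here:", err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}
```

Once calico/node is actually running and has mounted /var/lib/calico/, the same stat succeeds and the pending RunPodSandbox retries can proceed; until then every attempt, including teardown of earlier attempts, fails with the identical message.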
Jan 13 20:46:10.515142 kubelet[2805]: I0113 20:46:10.515029 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4" Jan 13 20:46:10.515988 containerd[1563]: time="2025-01-13T20:46:10.515891757Z" level=info msg="StopPodSandbox for \"77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4\"" Jan 13 20:46:10.518837 containerd[1563]: time="2025-01-13T20:46:10.516083086Z" level=info msg="Ensure that sandbox 77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4 in task-service has been cleanup successfully" Jan 13 20:46:10.518837 containerd[1563]: time="2025-01-13T20:46:10.517815396Z" level=info msg="TearDown network for sandbox \"77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4\" successfully" Jan 13 20:46:10.518837 containerd[1563]: time="2025-01-13T20:46:10.517828713Z" level=info msg="StopPodSandbox for \"77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4\" returns successfully" Jan 13 20:46:10.519081 systemd[1]: run-netns-cni\x2d41426afd\x2d9f9c\x2dd25c\x2d9700\x2d9ac1aa94c044.mount: Deactivated successfully. Jan 13 20:46:10.520694 containerd[1563]: time="2025-01-13T20:46:10.520532448Z" level=info msg="StopPodSandbox for \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\"" Jan 13 20:46:10.520694 containerd[1563]: time="2025-01-13T20:46:10.520646379Z" level=info msg="TearDown network for sandbox \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\" successfully" Jan 13 20:46:10.520694 containerd[1563]: time="2025-01-13T20:46:10.520657462Z" level=info msg="StopPodSandbox for \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\" returns successfully" Jan 13 20:46:10.522979 containerd[1563]: time="2025-01-13T20:46:10.522944650Z" level=info msg="StopPodSandbox for \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\"" Jan 13 20:46:10.523094 containerd[1563]: time="2025-01-13T20:46:10.523078503Z" level=info msg="TearDown network for sandbox \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\" successfully" Jan 13 20:46:10.523094 containerd[1563]: time="2025-01-13T20:46:10.523092411Z" level=info msg="StopPodSandbox for \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\" returns successfully" Jan 13 20:46:10.523668 kubelet[2805]: I0113 20:46:10.523636 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084" Jan 13 20:46:10.524024 containerd[1563]: time="2025-01-13T20:46:10.524003021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664ddbb6d-jj75b,Uid:75c72d4a-2713-4d59-9a08-2119aee5935e,Namespace:calico-apiserver,Attempt:3,}" Jan 13 20:46:10.524695 containerd[1563]: time="2025-01-13T20:46:10.524569823Z" level=info msg="StopPodSandbox for \"e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084\"" Jan 13 20:46:10.524830 containerd[1563]: time="2025-01-13T20:46:10.524809520Z" level=info msg="Ensure that sandbox e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084 in task-service has been cleanup successfully" Jan 13 20:46:10.525037 containerd[1563]: time="2025-01-13T20:46:10.525005960Z" level=info msg="TearDown network for sandbox \"e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084\" successfully" Jan 13 20:46:10.525037 containerd[1563]: time="2025-01-13T20:46:10.525022563Z" level=info 
msg="StopPodSandbox for \"e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084\" returns successfully" Jan 13 20:46:10.527076 containerd[1563]: time="2025-01-13T20:46:10.527043440Z" level=info msg="StopPodSandbox for \"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\"" Jan 13 20:46:10.527144 containerd[1563]: time="2025-01-13T20:46:10.527118262Z" level=info msg="TearDown network for sandbox \"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\" successfully" Jan 13 20:46:10.527144 containerd[1563]: time="2025-01-13T20:46:10.527131449Z" level=info msg="StopPodSandbox for \"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\" returns successfully" Jan 13 20:46:10.527189 systemd[1]: run-netns-cni\x2de2872ea1\x2d3186\x2d0b66\x2d33ba\x2d169b4f97dbba.mount: Deactivated successfully. Jan 13 20:46:10.527646 containerd[1563]: time="2025-01-13T20:46:10.527476901Z" level=info msg="StopPodSandbox for \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\"" Jan 13 20:46:10.527646 containerd[1563]: time="2025-01-13T20:46:10.527591353Z" level=info msg="TearDown network for sandbox \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\" successfully" Jan 13 20:46:10.527646 containerd[1563]: time="2025-01-13T20:46:10.527601875Z" level=info msg="StopPodSandbox for \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\" returns successfully" Jan 13 20:46:10.528142 containerd[1563]: time="2025-01-13T20:46:10.528116842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dfc47ddd6-rhff2,Uid:25c9c48c-a9ce-4e21-b742-444ac830dfed,Namespace:calico-system,Attempt:3,}" Jan 13 20:46:10.528826 kubelet[2805]: I0113 20:46:10.528422 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60" Jan 13 20:46:10.528950 containerd[1563]: time="2025-01-13T20:46:10.528922959Z" level=info msg="StopPodSandbox for \"b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60\"" Jan 13 20:46:10.529123 containerd[1563]: time="2025-01-13T20:46:10.529106282Z" level=info msg="Ensure that sandbox b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60 in task-service has been cleanup successfully" Jan 13 20:46:10.531598 containerd[1563]: time="2025-01-13T20:46:10.531226482Z" level=info msg="TearDown network for sandbox \"b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60\" successfully" Jan 13 20:46:10.531598 containerd[1563]: time="2025-01-13T20:46:10.531244908Z" level=info msg="StopPodSandbox for \"b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60\" returns successfully" Jan 13 20:46:10.532334 containerd[1563]: time="2025-01-13T20:46:10.531842693Z" level=info msg="StopPodSandbox for \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\"" Jan 13 20:46:10.532334 containerd[1563]: time="2025-01-13T20:46:10.531995965Z" level=info msg="TearDown network for sandbox \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\" successfully" Jan 13 20:46:10.532334 containerd[1563]: time="2025-01-13T20:46:10.532014362Z" level=info msg="StopPodSandbox for \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\" returns successfully" Jan 13 20:46:10.532496 containerd[1563]: time="2025-01-13T20:46:10.532419836Z" level=info msg="StopPodSandbox for \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\"" Jan 13 20:46:10.532706 
containerd[1563]: time="2025-01-13T20:46:10.532526694Z" level=info msg="TearDown network for sandbox \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\" successfully" Jan 13 20:46:10.532706 containerd[1563]: time="2025-01-13T20:46:10.532547786Z" level=info msg="StopPodSandbox for \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\" returns successfully" Jan 13 20:46:10.532934 kubelet[2805]: E0113 20:46:10.532841 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:10.533644 containerd[1563]: time="2025-01-13T20:46:10.533362051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5dksv,Uid:0607ab24-d1bd-4a5b-b65b-ec7237434967,Namespace:kube-system,Attempt:3,}" Jan 13 20:46:10.534422 kubelet[2805]: I0113 20:46:10.534367 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63" Jan 13 20:46:10.534748 systemd[1]: run-netns-cni\x2d0782399d\x2dd253\x2da4eb\x2d3025\x2dedb605a4f668.mount: Deactivated successfully. Jan 13 20:46:10.534969 containerd[1563]: time="2025-01-13T20:46:10.534936241Z" level=info msg="StopPodSandbox for \"4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63\"" Jan 13 20:46:10.535865 containerd[1563]: time="2025-01-13T20:46:10.535356745Z" level=info msg="Ensure that sandbox 4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63 in task-service has been cleanup successfully" Jan 13 20:46:10.535982 containerd[1563]: time="2025-01-13T20:46:10.535965272Z" level=info msg="TearDown network for sandbox \"4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63\" successfully" Jan 13 20:46:10.536028 containerd[1563]: time="2025-01-13T20:46:10.536017388Z" level=info msg="StopPodSandbox for \"4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63\" returns successfully" Jan 13 20:46:10.536482 containerd[1563]: time="2025-01-13T20:46:10.536461410Z" level=info msg="StopPodSandbox for \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\"" Jan 13 20:46:10.536869 containerd[1563]: time="2025-01-13T20:46:10.536827263Z" level=info msg="TearDown network for sandbox \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\" successfully" Jan 13 20:46:10.536869 containerd[1563]: time="2025-01-13T20:46:10.536847023Z" level=info msg="StopPodSandbox for \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\" returns successfully" Jan 13 20:46:10.537128 kubelet[2805]: I0113 20:46:10.537105 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6" Jan 13 20:46:10.537685 kubelet[2805]: E0113 20:46:10.537523 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:10.537723 containerd[1563]: time="2025-01-13T20:46:10.537212717Z" level=info msg="StopPodSandbox for \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\"" Jan 13 20:46:10.537723 containerd[1563]: time="2025-01-13T20:46:10.537289122Z" level=info msg="TearDown network for sandbox \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\" successfully" Jan 13 20:46:10.537723 containerd[1563]: time="2025-01-13T20:46:10.537314414Z" 
level=info msg="StopPodSandbox for \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\" returns successfully" Jan 13 20:46:10.537723 containerd[1563]: time="2025-01-13T20:46:10.537660857Z" level=info msg="StopPodSandbox for \"4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6\"" Jan 13 20:46:10.537795 systemd[1]: run-netns-cni\x2d5f3ba42e\x2dadd4\x2de66b\x2d392a\x2d8111dcec45bc.mount: Deactivated successfully. Jan 13 20:46:10.537936 containerd[1563]: time="2025-01-13T20:46:10.537913641Z" level=info msg="Ensure that sandbox 4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6 in task-service has been cleanup successfully" Jan 13 20:46:10.538604 containerd[1563]: time="2025-01-13T20:46:10.538543712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qqtwl,Uid:fe5ec2be-450d-4cfd-bd06-ed7ea343b06b,Namespace:kube-system,Attempt:3,}" Jan 13 20:46:10.538899 containerd[1563]: time="2025-01-13T20:46:10.538821457Z" level=info msg="TearDown network for sandbox \"4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6\" successfully" Jan 13 20:46:10.538899 containerd[1563]: time="2025-01-13T20:46:10.538843381Z" level=info msg="StopPodSandbox for \"4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6\" returns successfully" Jan 13 20:46:10.539328 containerd[1563]: time="2025-01-13T20:46:10.539245808Z" level=info msg="StopPodSandbox for \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\"" Jan 13 20:46:10.539328 containerd[1563]: time="2025-01-13T20:46:10.539325541Z" level=info msg="TearDown network for sandbox \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\" successfully" Jan 13 20:46:10.539443 containerd[1563]: time="2025-01-13T20:46:10.539335532Z" level=info msg="StopPodSandbox for \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\" returns successfully" Jan 13 20:46:10.539747 containerd[1563]: time="2025-01-13T20:46:10.539724722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2wkvn,Uid:31e31ef7-4073-4fa6-8a57-6102fb32a16b,Namespace:calico-system,Attempt:2,}" Jan 13 20:46:10.540074 kubelet[2805]: I0113 20:46:10.540042 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e" Jan 13 20:46:10.540662 containerd[1563]: time="2025-01-13T20:46:10.540631114Z" level=info msg="StopPodSandbox for \"625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e\"" Jan 13 20:46:10.540829 containerd[1563]: time="2025-01-13T20:46:10.540792152Z" level=info msg="Ensure that sandbox 625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e in task-service has been cleanup successfully" Jan 13 20:46:10.541040 containerd[1563]: time="2025-01-13T20:46:10.541020907Z" level=info msg="TearDown network for sandbox \"625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e\" successfully" Jan 13 20:46:10.541040 containerd[1563]: time="2025-01-13T20:46:10.541035045Z" level=info msg="StopPodSandbox for \"625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e\" returns successfully" Jan 13 20:46:10.541354 containerd[1563]: time="2025-01-13T20:46:10.541322409Z" level=info msg="StopPodSandbox for \"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\"" Jan 13 20:46:10.541471 containerd[1563]: time="2025-01-13T20:46:10.541448335Z" level=info msg="TearDown network for sandbox 
\"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\" successfully" Jan 13 20:46:10.541471 containerd[1563]: time="2025-01-13T20:46:10.541468676Z" level=info msg="StopPodSandbox for \"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\" returns successfully" Jan 13 20:46:10.541750 containerd[1563]: time="2025-01-13T20:46:10.541726530Z" level=info msg="StopPodSandbox for \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\"" Jan 13 20:46:10.541818 containerd[1563]: time="2025-01-13T20:46:10.541808807Z" level=info msg="TearDown network for sandbox \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\" successfully" Jan 13 20:46:10.541845 containerd[1563]: time="2025-01-13T20:46:10.541819469Z" level=info msg="StopPodSandbox for \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\" returns successfully" Jan 13 20:46:10.542198 containerd[1563]: time="2025-01-13T20:46:10.542178940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664ddbb6d-trmvh,Uid:79a4d693-fc20-457c-9d67-0b17b1742b23,Namespace:calico-apiserver,Attempt:3,}" Jan 13 20:46:11.209961 systemd[1]: run-netns-cni\x2d8d11b9f8\x2d2549\x2dc6b6\x2d731f\x2dde24cb543479.mount: Deactivated successfully. Jan 13 20:46:11.210166 systemd[1]: run-netns-cni\x2dde756ab9\x2d50b7\x2dd69a\x2ddd0d\x2dae94e97e90ae.mount: Deactivated successfully. Jan 13 20:46:11.594631 systemd[1]: Started sshd@9-10.0.0.149:22-10.0.0.1:58456.service - OpenSSH per-connection server daemon (10.0.0.1:58456). Jan 13 20:46:11.632322 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 58456 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:46:11.644102 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:46:11.650205 systemd-logind[1542]: New session 10 of user core. Jan 13 20:46:11.655648 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:46:11.767764 sshd[4253]: Connection closed by 10.0.0.1 port 58456 Jan 13 20:46:11.768116 sshd-session[4250]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:11.771624 systemd[1]: sshd@9-10.0.0.149:22-10.0.0.1:58456.service: Deactivated successfully. Jan 13 20:46:11.774182 systemd-logind[1542]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:46:11.774308 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:46:11.775454 systemd-logind[1542]: Removed session 10. 
Jan 13 20:46:13.201281 containerd[1563]: time="2025-01-13T20:46:13.201231469Z" level=error msg="Failed to destroy network for sandbox \"2ae7eebb744563574d319c34b5b4bc8b67a4011d0ffa583e06e321e5ecfab7e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:13.202535 containerd[1563]: time="2025-01-13T20:46:13.202481000Z" level=error msg="encountered an error cleaning up failed sandbox \"2ae7eebb744563574d319c34b5b4bc8b67a4011d0ffa583e06e321e5ecfab7e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:13.202582 containerd[1563]: time="2025-01-13T20:46:13.202553377Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5dksv,Uid:0607ab24-d1bd-4a5b-b65b-ec7237434967,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"2ae7eebb744563574d319c34b5b4bc8b67a4011d0ffa583e06e321e5ecfab7e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:13.203316 kubelet[2805]: E0113 20:46:13.202862 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ae7eebb744563574d319c34b5b4bc8b67a4011d0ffa583e06e321e5ecfab7e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:13.203316 kubelet[2805]: E0113 20:46:13.202926 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ae7eebb744563574d319c34b5b4bc8b67a4011d0ffa583e06e321e5ecfab7e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-5dksv" Jan 13 20:46:13.203316 kubelet[2805]: E0113 20:46:13.202949 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ae7eebb744563574d319c34b5b4bc8b67a4011d0ffa583e06e321e5ecfab7e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-5dksv" Jan 13 20:46:13.203709 kubelet[2805]: E0113 20:46:13.203019 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-5dksv_kube-system(0607ab24-d1bd-4a5b-b65b-ec7237434967)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-5dksv_kube-system(0607ab24-d1bd-4a5b-b65b-ec7237434967)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ae7eebb744563574d319c34b5b4bc8b67a4011d0ffa583e06e321e5ecfab7e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-76f75df574-5dksv" podUID="0607ab24-d1bd-4a5b-b65b-ec7237434967" Jan 13 20:46:13.205444 containerd[1563]: time="2025-01-13T20:46:13.204680817Z" level=error msg="Failed to destroy network for sandbox \"0477bd1d4063ed14d9101c0f0488c742a54bd08b8c80bac5c66e223c49e15bff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:13.206399 containerd[1563]: time="2025-01-13T20:46:13.205725162Z" level=error msg="encountered an error cleaning up failed sandbox \"0477bd1d4063ed14d9101c0f0488c742a54bd08b8c80bac5c66e223c49e15bff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:13.206399 containerd[1563]: time="2025-01-13T20:46:13.205793480Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664ddbb6d-jj75b,Uid:75c72d4a-2713-4d59-9a08-2119aee5935e,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0477bd1d4063ed14d9101c0f0488c742a54bd08b8c80bac5c66e223c49e15bff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:13.206558 kubelet[2805]: E0113 20:46:13.206533 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0477bd1d4063ed14d9101c0f0488c742a54bd08b8c80bac5c66e223c49e15bff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:13.206621 kubelet[2805]: E0113 20:46:13.206594 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0477bd1d4063ed14d9101c0f0488c742a54bd08b8c80bac5c66e223c49e15bff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5664ddbb6d-jj75b" Jan 13 20:46:13.206658 kubelet[2805]: E0113 20:46:13.206622 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0477bd1d4063ed14d9101c0f0488c742a54bd08b8c80bac5c66e223c49e15bff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5664ddbb6d-jj75b" Jan 13 20:46:13.208282 kubelet[2805]: E0113 20:46:13.208250 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5664ddbb6d-jj75b_calico-apiserver(75c72d4a-2713-4d59-9a08-2119aee5935e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5664ddbb6d-jj75b_calico-apiserver(75c72d4a-2713-4d59-9a08-2119aee5935e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0477bd1d4063ed14d9101c0f0488c742a54bd08b8c80bac5c66e223c49e15bff\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5664ddbb6d-jj75b" podUID="75c72d4a-2713-4d59-9a08-2119aee5935e" Jan 13 20:46:13.231893 containerd[1563]: time="2025-01-13T20:46:13.231821985Z" level=error msg="Failed to destroy network for sandbox \"cf6044a73a4a11e7cba11e9f900528dfdc07c46bd3452f873ebaa75aef7b0f39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:13.232688 containerd[1563]: time="2025-01-13T20:46:13.232655633Z" level=error msg="encountered an error cleaning up failed sandbox \"cf6044a73a4a11e7cba11e9f900528dfdc07c46bd3452f873ebaa75aef7b0f39\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:13.232766 containerd[1563]: time="2025-01-13T20:46:13.232737239Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qqtwl,Uid:fe5ec2be-450d-4cfd-bd06-ed7ea343b06b,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"cf6044a73a4a11e7cba11e9f900528dfdc07c46bd3452f873ebaa75aef7b0f39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:13.233165 kubelet[2805]: E0113 20:46:13.233137 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf6044a73a4a11e7cba11e9f900528dfdc07c46bd3452f873ebaa75aef7b0f39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:13.233905 kubelet[2805]: E0113 20:46:13.233332 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf6044a73a4a11e7cba11e9f900528dfdc07c46bd3452f873ebaa75aef7b0f39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qqtwl" Jan 13 20:46:13.233905 kubelet[2805]: E0113 20:46:13.233373 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf6044a73a4a11e7cba11e9f900528dfdc07c46bd3452f873ebaa75aef7b0f39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qqtwl" Jan 13 20:46:13.233905 kubelet[2805]: E0113 20:46:13.233478 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-qqtwl_kube-system(fe5ec2be-450d-4cfd-bd06-ed7ea343b06b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-qqtwl_kube-system(fe5ec2be-450d-4cfd-bd06-ed7ea343b06b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"cf6044a73a4a11e7cba11e9f900528dfdc07c46bd3452f873ebaa75aef7b0f39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qqtwl" podUID="fe5ec2be-450d-4cfd-bd06-ed7ea343b06b" Jan 13 20:46:13.235150 containerd[1563]: time="2025-01-13T20:46:13.235105586Z" level=error msg="Failed to destroy network for sandbox \"3e41efa58f020ae5f8d789bef1a4010a9b389b6ec8fba88939ba3ae7dd54a479\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:13.235984 containerd[1563]: time="2025-01-13T20:46:13.235951570Z" level=error msg="encountered an error cleaning up failed sandbox \"3e41efa58f020ae5f8d789bef1a4010a9b389b6ec8fba88939ba3ae7dd54a479\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:13.236285 containerd[1563]: time="2025-01-13T20:46:13.236164401Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664ddbb6d-trmvh,Uid:79a4d693-fc20-457c-9d67-0b17b1742b23,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"3e41efa58f020ae5f8d789bef1a4010a9b389b6ec8fba88939ba3ae7dd54a479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:13.236671 kubelet[2805]: E0113 20:46:13.236634 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e41efa58f020ae5f8d789bef1a4010a9b389b6ec8fba88939ba3ae7dd54a479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:13.236742 kubelet[2805]: E0113 20:46:13.236710 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e41efa58f020ae5f8d789bef1a4010a9b389b6ec8fba88939ba3ae7dd54a479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5664ddbb6d-trmvh" Jan 13 20:46:13.236742 kubelet[2805]: E0113 20:46:13.236742 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e41efa58f020ae5f8d789bef1a4010a9b389b6ec8fba88939ba3ae7dd54a479\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5664ddbb6d-trmvh" Jan 13 20:46:13.237266 kubelet[2805]: E0113 20:46:13.237238 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5664ddbb6d-trmvh_calico-apiserver(79a4d693-fc20-457c-9d67-0b17b1742b23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-5664ddbb6d-trmvh_calico-apiserver(79a4d693-fc20-457c-9d67-0b17b1742b23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e41efa58f020ae5f8d789bef1a4010a9b389b6ec8fba88939ba3ae7dd54a479\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5664ddbb6d-trmvh" podUID="79a4d693-fc20-457c-9d67-0b17b1742b23" Jan 13 20:46:13.253459 containerd[1563]: time="2025-01-13T20:46:13.253007670Z" level=error msg="Failed to destroy network for sandbox \"6810cabbf0169a1078e4d75283af2f4af9ef6e567522f6f5ffdd0a5c639a44e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:13.253666 containerd[1563]: time="2025-01-13T20:46:13.253553094Z" level=error msg="encountered an error cleaning up failed sandbox \"6810cabbf0169a1078e4d75283af2f4af9ef6e567522f6f5ffdd0a5c639a44e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:13.253666 containerd[1563]: time="2025-01-13T20:46:13.253632255Z" level=error msg="Failed to destroy network for sandbox \"b3acd73dbbd1ad8b5ae9837060550cd750d587c9bd141b8524d10501474768db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:13.253779 containerd[1563]: time="2025-01-13T20:46:13.253692757Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dfc47ddd6-rhff2,Uid:25c9c48c-a9ce-4e21-b742-444ac830dfed,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6810cabbf0169a1078e4d75283af2f4af9ef6e567522f6f5ffdd0a5c639a44e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:13.254128 containerd[1563]: time="2025-01-13T20:46:13.254036152Z" level=error msg="encountered an error cleaning up failed sandbox \"b3acd73dbbd1ad8b5ae9837060550cd750d587c9bd141b8524d10501474768db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:13.254128 containerd[1563]: time="2025-01-13T20:46:13.254098850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2wkvn,Uid:31e31ef7-4073-4fa6-8a57-6102fb32a16b,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"b3acd73dbbd1ad8b5ae9837060550cd750d587c9bd141b8524d10501474768db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:13.254396 kubelet[2805]: E0113 20:46:13.254129 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6810cabbf0169a1078e4d75283af2f4af9ef6e567522f6f5ffdd0a5c639a44e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:13.254396 kubelet[2805]: E0113 20:46:13.254197 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6810cabbf0169a1078e4d75283af2f4af9ef6e567522f6f5ffdd0a5c639a44e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dfc47ddd6-rhff2" Jan 13 20:46:13.254396 kubelet[2805]: E0113 20:46:13.254232 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6810cabbf0169a1078e4d75283af2f4af9ef6e567522f6f5ffdd0a5c639a44e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dfc47ddd6-rhff2" Jan 13 20:46:13.254561 kubelet[2805]: E0113 20:46:13.254306 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-dfc47ddd6-rhff2_calico-system(25c9c48c-a9ce-4e21-b742-444ac830dfed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-dfc47ddd6-rhff2_calico-system(25c9c48c-a9ce-4e21-b742-444ac830dfed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6810cabbf0169a1078e4d75283af2f4af9ef6e567522f6f5ffdd0a5c639a44e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dfc47ddd6-rhff2" podUID="25c9c48c-a9ce-4e21-b742-444ac830dfed" Jan 13 20:46:13.255056 kubelet[2805]: E0113 20:46:13.254705 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3acd73dbbd1ad8b5ae9837060550cd750d587c9bd141b8524d10501474768db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:13.255056 kubelet[2805]: E0113 20:46:13.254814 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3acd73dbbd1ad8b5ae9837060550cd750d587c9bd141b8524d10501474768db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2wkvn" Jan 13 20:46:13.255056 kubelet[2805]: E0113 20:46:13.254892 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3acd73dbbd1ad8b5ae9837060550cd750d587c9bd141b8524d10501474768db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2wkvn" Jan 13 
20:46:13.255272 kubelet[2805]: E0113 20:46:13.255019 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2wkvn_calico-system(31e31ef7-4073-4fa6-8a57-6102fb32a16b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2wkvn_calico-system(31e31ef7-4073-4fa6-8a57-6102fb32a16b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3acd73dbbd1ad8b5ae9837060550cd750d587c9bd141b8524d10501474768db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wkvn" podUID="31e31ef7-4073-4fa6-8a57-6102fb32a16b" Jan 13 20:46:13.549635 kubelet[2805]: I0113 20:46:13.547927 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6810cabbf0169a1078e4d75283af2f4af9ef6e567522f6f5ffdd0a5c639a44e0" Jan 13 20:46:13.549773 containerd[1563]: time="2025-01-13T20:46:13.548838713Z" level=info msg="StopPodSandbox for \"6810cabbf0169a1078e4d75283af2f4af9ef6e567522f6f5ffdd0a5c639a44e0\"" Jan 13 20:46:13.549773 containerd[1563]: time="2025-01-13T20:46:13.549074681Z" level=info msg="Ensure that sandbox 6810cabbf0169a1078e4d75283af2f4af9ef6e567522f6f5ffdd0a5c639a44e0 in task-service has been cleanup successfully" Jan 13 20:46:13.549773 containerd[1563]: time="2025-01-13T20:46:13.549343014Z" level=info msg="TearDown network for sandbox \"6810cabbf0169a1078e4d75283af2f4af9ef6e567522f6f5ffdd0a5c639a44e0\" successfully" Jan 13 20:46:13.549773 containerd[1563]: time="2025-01-13T20:46:13.549358777Z" level=info msg="StopPodSandbox for \"6810cabbf0169a1078e4d75283af2f4af9ef6e567522f6f5ffdd0a5c639a44e0\" returns successfully" Jan 13 20:46:13.549971 kubelet[2805]: I0113 20:46:13.549838 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3acd73dbbd1ad8b5ae9837060550cd750d587c9bd141b8524d10501474768db" Jan 13 20:46:13.550816 containerd[1563]: time="2025-01-13T20:46:13.550703070Z" level=info msg="StopPodSandbox for \"e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084\"" Jan 13 20:46:13.550816 containerd[1563]: time="2025-01-13T20:46:13.550786368Z" level=info msg="TearDown network for sandbox \"e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084\" successfully" Jan 13 20:46:13.550816 containerd[1563]: time="2025-01-13T20:46:13.550796308Z" level=info msg="StopPodSandbox for \"e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084\" returns successfully" Jan 13 20:46:13.551779 containerd[1563]: time="2025-01-13T20:46:13.551751403Z" level=info msg="StopPodSandbox for \"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\"" Jan 13 20:46:13.551845 containerd[1563]: time="2025-01-13T20:46:13.551828499Z" level=info msg="TearDown network for sandbox \"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\" successfully" Jan 13 20:46:13.551845 containerd[1563]: time="2025-01-13T20:46:13.551842919Z" level=info msg="StopPodSandbox for \"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\" returns successfully" Jan 13 20:46:13.551914 containerd[1563]: time="2025-01-13T20:46:13.551901587Z" level=info msg="StopPodSandbox for \"b3acd73dbbd1ad8b5ae9837060550cd750d587c9bd141b8524d10501474768db\"" Jan 13 20:46:13.552042 containerd[1563]: time="2025-01-13T20:46:13.552025979Z" level=info msg="Ensure that sandbox 
b3acd73dbbd1ad8b5ae9837060550cd750d587c9bd141b8524d10501474768db in task-service has been cleanup successfully" Jan 13 20:46:13.552762 containerd[1563]: time="2025-01-13T20:46:13.552715516Z" level=info msg="StopPodSandbox for \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\"" Jan 13 20:46:13.552826 containerd[1563]: time="2025-01-13T20:46:13.552790137Z" level=info msg="TearDown network for sandbox \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\" successfully" Jan 13 20:46:13.552826 containerd[1563]: time="2025-01-13T20:46:13.552799436Z" level=info msg="StopPodSandbox for \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\" returns successfully" Jan 13 20:46:13.553127 containerd[1563]: time="2025-01-13T20:46:13.552948278Z" level=info msg="TearDown network for sandbox \"b3acd73dbbd1ad8b5ae9837060550cd750d587c9bd141b8524d10501474768db\" successfully" Jan 13 20:46:13.553127 containerd[1563]: time="2025-01-13T20:46:13.552964460Z" level=info msg="StopPodSandbox for \"b3acd73dbbd1ad8b5ae9837060550cd750d587c9bd141b8524d10501474768db\" returns successfully" Jan 13 20:46:13.553743 containerd[1563]: time="2025-01-13T20:46:13.553613295Z" level=info msg="StopPodSandbox for \"4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6\"" Jan 13 20:46:13.553743 containerd[1563]: time="2025-01-13T20:46:13.553686122Z" level=info msg="TearDown network for sandbox \"4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6\" successfully" Jan 13 20:46:13.553743 containerd[1563]: time="2025-01-13T20:46:13.553694238Z" level=info msg="StopPodSandbox for \"4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6\" returns successfully" Jan 13 20:46:13.554658 containerd[1563]: time="2025-01-13T20:46:13.554067766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dfc47ddd6-rhff2,Uid:25c9c48c-a9ce-4e21-b742-444ac830dfed,Namespace:calico-system,Attempt:4,}" Jan 13 20:46:13.554658 containerd[1563]: time="2025-01-13T20:46:13.554187348Z" level=info msg="StopPodSandbox for \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\"" Jan 13 20:46:13.554658 containerd[1563]: time="2025-01-13T20:46:13.554505653Z" level=info msg="TearDown network for sandbox \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\" successfully" Jan 13 20:46:13.554658 containerd[1563]: time="2025-01-13T20:46:13.554519982Z" level=info msg="StopPodSandbox for \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\" returns successfully" Jan 13 20:46:13.555281 containerd[1563]: time="2025-01-13T20:46:13.555254769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2wkvn,Uid:31e31ef7-4073-4fa6-8a57-6102fb32a16b,Namespace:calico-system,Attempt:3,}" Jan 13 20:46:13.555541 kubelet[2805]: I0113 20:46:13.555514 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e41efa58f020ae5f8d789bef1a4010a9b389b6ec8fba88939ba3ae7dd54a479" Jan 13 20:46:13.556405 containerd[1563]: time="2025-01-13T20:46:13.556100583Z" level=info msg="StopPodSandbox for \"3e41efa58f020ae5f8d789bef1a4010a9b389b6ec8fba88939ba3ae7dd54a479\"" Jan 13 20:46:13.556537 containerd[1563]: time="2025-01-13T20:46:13.556512969Z" level=info msg="Ensure that sandbox 3e41efa58f020ae5f8d789bef1a4010a9b389b6ec8fba88939ba3ae7dd54a479 in task-service has been cleanup successfully" Jan 13 20:46:13.556852 containerd[1563]: time="2025-01-13T20:46:13.556715719Z" level=info msg="TearDown network for 
sandbox \"3e41efa58f020ae5f8d789bef1a4010a9b389b6ec8fba88939ba3ae7dd54a479\" successfully" Jan 13 20:46:13.556852 containerd[1563]: time="2025-01-13T20:46:13.556731091Z" level=info msg="StopPodSandbox for \"3e41efa58f020ae5f8d789bef1a4010a9b389b6ec8fba88939ba3ae7dd54a479\" returns successfully" Jan 13 20:46:13.557147 containerd[1563]: time="2025-01-13T20:46:13.557114146Z" level=info msg="StopPodSandbox for \"625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e\"" Jan 13 20:46:13.557450 containerd[1563]: time="2025-01-13T20:46:13.557428543Z" level=info msg="TearDown network for sandbox \"625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e\" successfully" Jan 13 20:46:13.557450 containerd[1563]: time="2025-01-13T20:46:13.557444165Z" level=info msg="StopPodSandbox for \"625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e\" returns successfully" Jan 13 20:46:13.557785 containerd[1563]: time="2025-01-13T20:46:13.557758040Z" level=info msg="StopPodSandbox for \"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\"" Jan 13 20:46:13.557876 containerd[1563]: time="2025-01-13T20:46:13.557856239Z" level=info msg="TearDown network for sandbox \"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\" successfully" Jan 13 20:46:13.557913 containerd[1563]: time="2025-01-13T20:46:13.557874927Z" level=info msg="StopPodSandbox for \"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\" returns successfully" Jan 13 20:46:13.558334 containerd[1563]: time="2025-01-13T20:46:13.558303725Z" level=info msg="StopPodSandbox for \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\"" Jan 13 20:46:13.558450 containerd[1563]: time="2025-01-13T20:46:13.558425723Z" level=info msg="TearDown network for sandbox \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\" successfully" Jan 13 20:46:13.558450 containerd[1563]: time="2025-01-13T20:46:13.558445613Z" level=info msg="StopPodSandbox for \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\" returns successfully" Jan 13 20:46:13.559116 kubelet[2805]: I0113 20:46:13.558713 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0477bd1d4063ed14d9101c0f0488c742a54bd08b8c80bac5c66e223c49e15bff" Jan 13 20:46:13.559315 containerd[1563]: time="2025-01-13T20:46:13.559287358Z" level=info msg="StopPodSandbox for \"0477bd1d4063ed14d9101c0f0488c742a54bd08b8c80bac5c66e223c49e15bff\"" Jan 13 20:46:13.559584 containerd[1563]: time="2025-01-13T20:46:13.559538346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664ddbb6d-trmvh,Uid:79a4d693-fc20-457c-9d67-0b17b1742b23,Namespace:calico-apiserver,Attempt:4,}" Jan 13 20:46:13.561499 kubelet[2805]: I0113 20:46:13.561442 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ae7eebb744563574d319c34b5b4bc8b67a4011d0ffa583e06e321e5ecfab7e5" Jan 13 20:46:13.563512 containerd[1563]: time="2025-01-13T20:46:13.563472024Z" level=info msg="Ensure that sandbox 0477bd1d4063ed14d9101c0f0488c742a54bd08b8c80bac5c66e223c49e15bff in task-service has been cleanup successfully" Jan 13 20:46:13.563715 containerd[1563]: time="2025-01-13T20:46:13.563688173Z" level=info msg="TearDown network for sandbox \"0477bd1d4063ed14d9101c0f0488c742a54bd08b8c80bac5c66e223c49e15bff\" successfully" Jan 13 20:46:13.563715 containerd[1563]: time="2025-01-13T20:46:13.563711239Z" level=info msg="StopPodSandbox for 
\"0477bd1d4063ed14d9101c0f0488c742a54bd08b8c80bac5c66e223c49e15bff\" returns successfully" Jan 13 20:46:13.564293 containerd[1563]: time="2025-01-13T20:46:13.563916785Z" level=info msg="StopPodSandbox for \"2ae7eebb744563574d319c34b5b4bc8b67a4011d0ffa583e06e321e5ecfab7e5\"" Jan 13 20:46:13.564293 containerd[1563]: time="2025-01-13T20:46:13.564148194Z" level=info msg="Ensure that sandbox 2ae7eebb744563574d319c34b5b4bc8b67a4011d0ffa583e06e321e5ecfab7e5 in task-service has been cleanup successfully" Jan 13 20:46:13.564430 containerd[1563]: time="2025-01-13T20:46:13.564403282Z" level=info msg="StopPodSandbox for \"77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4\"" Jan 13 20:46:13.564641 containerd[1563]: time="2025-01-13T20:46:13.564496791Z" level=info msg="TearDown network for sandbox \"77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4\" successfully" Jan 13 20:46:13.564641 containerd[1563]: time="2025-01-13T20:46:13.564515910Z" level=info msg="StopPodSandbox for \"77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4\" returns successfully" Jan 13 20:46:13.564641 containerd[1563]: time="2025-01-13T20:46:13.564406718Z" level=info msg="TearDown network for sandbox \"2ae7eebb744563574d319c34b5b4bc8b67a4011d0ffa583e06e321e5ecfab7e5\" successfully" Jan 13 20:46:13.564641 containerd[1563]: time="2025-01-13T20:46:13.564557604Z" level=info msg="StopPodSandbox for \"2ae7eebb744563574d319c34b5b4bc8b67a4011d0ffa583e06e321e5ecfab7e5\" returns successfully" Jan 13 20:46:13.564948 containerd[1563]: time="2025-01-13T20:46:13.564919336Z" level=info msg="StopPodSandbox for \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\"" Jan 13 20:46:13.565039 containerd[1563]: time="2025-01-13T20:46:13.565011072Z" level=info msg="TearDown network for sandbox \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\" successfully" Jan 13 20:46:13.565039 containerd[1563]: time="2025-01-13T20:46:13.565029670Z" level=info msg="StopPodSandbox for \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\" returns successfully" Jan 13 20:46:13.565300 containerd[1563]: time="2025-01-13T20:46:13.565168321Z" level=info msg="StopPodSandbox for \"b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60\"" Jan 13 20:46:13.565300 containerd[1563]: time="2025-01-13T20:46:13.565250126Z" level=info msg="TearDown network for sandbox \"b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60\" successfully" Jan 13 20:46:13.565300 containerd[1563]: time="2025-01-13T20:46:13.565260939Z" level=info msg="StopPodSandbox for \"b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60\" returns successfully" Jan 13 20:46:13.565658 containerd[1563]: time="2025-01-13T20:46:13.565636489Z" level=info msg="StopPodSandbox for \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\"" Jan 13 20:46:13.565738 containerd[1563]: time="2025-01-13T20:46:13.565703845Z" level=info msg="StopPodSandbox for \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\"" Jan 13 20:46:13.565879 containerd[1563]: time="2025-01-13T20:46:13.565854741Z" level=info msg="TearDown network for sandbox \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\" successfully" Jan 13 20:46:13.565935 containerd[1563]: time="2025-01-13T20:46:13.565876866Z" level=info msg="StopPodSandbox for \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\" returns successfully" Jan 13 20:46:13.566197 kubelet[2805]: I0113 20:46:13.566173 2805 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf6044a73a4a11e7cba11e9f900528dfdc07c46bd3452f873ebaa75aef7b0f39" Jan 13 20:46:13.566270 containerd[1563]: time="2025-01-13T20:46:13.566219310Z" level=info msg="TearDown network for sandbox \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\" successfully" Jan 13 20:46:13.566270 containerd[1563]: time="2025-01-13T20:46:13.566235883Z" level=info msg="StopPodSandbox for \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\" returns successfully" Jan 13 20:46:13.567484 containerd[1563]: time="2025-01-13T20:46:13.567289928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664ddbb6d-jj75b,Uid:75c72d4a-2713-4d59-9a08-2119aee5935e,Namespace:calico-apiserver,Attempt:4,}" Jan 13 20:46:13.567484 containerd[1563]: time="2025-01-13T20:46:13.567329168Z" level=info msg="StopPodSandbox for \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\"" Jan 13 20:46:13.567484 containerd[1563]: time="2025-01-13T20:46:13.567422817Z" level=info msg="TearDown network for sandbox \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\" successfully" Jan 13 20:46:13.567484 containerd[1563]: time="2025-01-13T20:46:13.567432598Z" level=info msg="StopPodSandbox for \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\" returns successfully" Jan 13 20:46:13.567944 kubelet[2805]: E0113 20:46:13.567763 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:13.567988 containerd[1563]: time="2025-01-13T20:46:13.567777406Z" level=info msg="StopPodSandbox for \"cf6044a73a4a11e7cba11e9f900528dfdc07c46bd3452f873ebaa75aef7b0f39\"" Jan 13 20:46:13.567988 containerd[1563]: time="2025-01-13T20:46:13.567939033Z" level=info msg="Ensure that sandbox cf6044a73a4a11e7cba11e9f900528dfdc07c46bd3452f873ebaa75aef7b0f39 in task-service has been cleanup successfully" Jan 13 20:46:13.568046 containerd[1563]: time="2025-01-13T20:46:13.568003534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5dksv,Uid:0607ab24-d1bd-4a5b-b65b-ec7237434967,Namespace:kube-system,Attempt:4,}" Jan 13 20:46:13.569486 containerd[1563]: time="2025-01-13T20:46:13.569453972Z" level=info msg="TearDown network for sandbox \"cf6044a73a4a11e7cba11e9f900528dfdc07c46bd3452f873ebaa75aef7b0f39\" successfully" Jan 13 20:46:13.569601 containerd[1563]: time="2025-01-13T20:46:13.569484304Z" level=info msg="StopPodSandbox for \"cf6044a73a4a11e7cba11e9f900528dfdc07c46bd3452f873ebaa75aef7b0f39\" returns successfully" Jan 13 20:46:13.569921 containerd[1563]: time="2025-01-13T20:46:13.569881057Z" level=info msg="StopPodSandbox for \"4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63\"" Jan 13 20:46:13.570038 containerd[1563]: time="2025-01-13T20:46:13.570005840Z" level=info msg="TearDown network for sandbox \"4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63\" successfully" Jan 13 20:46:13.570038 containerd[1563]: time="2025-01-13T20:46:13.570025500Z" level=info msg="StopPodSandbox for \"4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63\" returns successfully" Jan 13 20:46:13.570350 containerd[1563]: time="2025-01-13T20:46:13.570325066Z" level=info msg="StopPodSandbox for \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\"" Jan 13 20:46:13.570481 containerd[1563]: 
time="2025-01-13T20:46:13.570454679Z" level=info msg="TearDown network for sandbox \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\" successfully" Jan 13 20:46:13.570481 containerd[1563]: time="2025-01-13T20:46:13.570477285Z" level=info msg="StopPodSandbox for \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\" returns successfully" Jan 13 20:46:13.570756 containerd[1563]: time="2025-01-13T20:46:13.570713093Z" level=info msg="StopPodSandbox for \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\"" Jan 13 20:46:13.570840 containerd[1563]: time="2025-01-13T20:46:13.570819188Z" level=info msg="TearDown network for sandbox \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\" successfully" Jan 13 20:46:13.570881 containerd[1563]: time="2025-01-13T20:46:13.570840431Z" level=info msg="StopPodSandbox for \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\" returns successfully" Jan 13 20:46:13.571040 kubelet[2805]: E0113 20:46:13.571022 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:13.571367 containerd[1563]: time="2025-01-13T20:46:13.571334882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qqtwl,Uid:fe5ec2be-450d-4cfd-bd06-ed7ea343b06b,Namespace:kube-system,Attempt:4,}" Jan 13 20:46:14.028496 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cf6044a73a4a11e7cba11e9f900528dfdc07c46bd3452f873ebaa75aef7b0f39-shm.mount: Deactivated successfully. Jan 13 20:46:14.029259 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2ae7eebb744563574d319c34b5b4bc8b67a4011d0ffa583e06e321e5ecfab7e5-shm.mount: Deactivated successfully. Jan 13 20:46:14.029461 systemd[1]: run-netns-cni\x2d2f206aff\x2d1a56\x2d9651\x2d5964\x2d0b6a56785e4b.mount: Deactivated successfully. Jan 13 20:46:14.029633 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e41efa58f020ae5f8d789bef1a4010a9b389b6ec8fba88939ba3ae7dd54a479-shm.mount: Deactivated successfully. Jan 13 20:46:14.029818 systemd[1]: run-netns-cni\x2d938b6974\x2dfb89\x2de768\x2dbd3f\x2d74e3b8c50ea3.mount: Deactivated successfully. Jan 13 20:46:14.029991 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6810cabbf0169a1078e4d75283af2f4af9ef6e567522f6f5ffdd0a5c639a44e0-shm.mount: Deactivated successfully. Jan 13 20:46:14.030121 systemd[1]: run-netns-cni\x2d2865e4ee\x2db295\x2d9323\x2da4e7\x2df0ee083b2eea.mount: Deactivated successfully. Jan 13 20:46:14.030250 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0477bd1d4063ed14d9101c0f0488c742a54bd08b8c80bac5c66e223c49e15bff-shm.mount: Deactivated successfully. Jan 13 20:46:15.230166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount810735725.mount: Deactivated successfully. 
Jan 13 20:46:15.725176 containerd[1563]: time="2025-01-13T20:46:15.725110703Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:15.733516 containerd[1563]: time="2025-01-13T20:46:15.733447499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 20:46:15.735093 containerd[1563]: time="2025-01-13T20:46:15.735060019Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:15.741250 containerd[1563]: time="2025-01-13T20:46:15.741213451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:15.742164 containerd[1563]: time="2025-01-13T20:46:15.741767250Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.272127461s" Jan 13 20:46:15.742277 containerd[1563]: time="2025-01-13T20:46:15.742261709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 20:46:15.756412 containerd[1563]: time="2025-01-13T20:46:15.755835264Z" level=info msg="CreateContainer within sandbox \"66a71c36a3a0d7e1ec3adcd026b9d7ae37c218f10c61200bb91cc41cc16124f2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 20:46:15.777309 containerd[1563]: time="2025-01-13T20:46:15.777242937Z" level=info msg="CreateContainer within sandbox \"66a71c36a3a0d7e1ec3adcd026b9d7ae37c218f10c61200bb91cc41cc16124f2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d5cbb2aa29546a995b6e29ff01b4aeefd8bb8f8f022ce7df93a1738ed075afef\"" Jan 13 20:46:15.780359 containerd[1563]: time="2025-01-13T20:46:15.778236316Z" level=info msg="StartContainer for \"d5cbb2aa29546a995b6e29ff01b4aeefd8bb8f8f022ce7df93a1738ed075afef\"" Jan 13 20:46:15.828225 containerd[1563]: time="2025-01-13T20:46:15.828078589Z" level=error msg="Failed to destroy network for sandbox \"b435372828617d3628cfb6e316b6a4749d81b223ff185af1e969759091cae797\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.828648 containerd[1563]: time="2025-01-13T20:46:15.828624473Z" level=error msg="encountered an error cleaning up failed sandbox \"b435372828617d3628cfb6e316b6a4749d81b223ff185af1e969759091cae797\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.829362 containerd[1563]: time="2025-01-13T20:46:15.828735117Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dfc47ddd6-rhff2,Uid:25c9c48c-a9ce-4e21-b742-444ac830dfed,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for 
sandbox \"b435372828617d3628cfb6e316b6a4749d81b223ff185af1e969759091cae797\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.829468 kubelet[2805]: E0113 20:46:15.828977 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b435372828617d3628cfb6e316b6a4749d81b223ff185af1e969759091cae797\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.829468 kubelet[2805]: E0113 20:46:15.829038 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b435372828617d3628cfb6e316b6a4749d81b223ff185af1e969759091cae797\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dfc47ddd6-rhff2" Jan 13 20:46:15.829468 kubelet[2805]: E0113 20:46:15.829060 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b435372828617d3628cfb6e316b6a4749d81b223ff185af1e969759091cae797\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dfc47ddd6-rhff2" Jan 13 20:46:15.829891 kubelet[2805]: E0113 20:46:15.829111 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-dfc47ddd6-rhff2_calico-system(25c9c48c-a9ce-4e21-b742-444ac830dfed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-dfc47ddd6-rhff2_calico-system(25c9c48c-a9ce-4e21-b742-444ac830dfed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b435372828617d3628cfb6e316b6a4749d81b223ff185af1e969759091cae797\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dfc47ddd6-rhff2" podUID="25c9c48c-a9ce-4e21-b742-444ac830dfed" Jan 13 20:46:15.838899 containerd[1563]: time="2025-01-13T20:46:15.837427592Z" level=error msg="Failed to destroy network for sandbox \"9f7d35e2c8f4dc06a81e4f8126194da5fc1d9231f303b8a85e8d9d25ecaafa3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.838899 containerd[1563]: time="2025-01-13T20:46:15.837980941Z" level=error msg="encountered an error cleaning up failed sandbox \"9f7d35e2c8f4dc06a81e4f8126194da5fc1d9231f303b8a85e8d9d25ecaafa3d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.838899 containerd[1563]: time="2025-01-13T20:46:15.838040982Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-2wkvn,Uid:31e31ef7-4073-4fa6-8a57-6102fb32a16b,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9f7d35e2c8f4dc06a81e4f8126194da5fc1d9231f303b8a85e8d9d25ecaafa3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.840731 kubelet[2805]: E0113 20:46:15.838772 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f7d35e2c8f4dc06a81e4f8126194da5fc1d9231f303b8a85e8d9d25ecaafa3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.840731 kubelet[2805]: E0113 20:46:15.838843 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f7d35e2c8f4dc06a81e4f8126194da5fc1d9231f303b8a85e8d9d25ecaafa3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2wkvn" Jan 13 20:46:15.840731 kubelet[2805]: E0113 20:46:15.838868 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f7d35e2c8f4dc06a81e4f8126194da5fc1d9231f303b8a85e8d9d25ecaafa3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2wkvn" Jan 13 20:46:15.840843 containerd[1563]: time="2025-01-13T20:46:15.839680096Z" level=error msg="Failed to destroy network for sandbox \"fcccad5796a3f95d91d76d619b057d5870548397362400a48acccd8e92c373e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.840843 containerd[1563]: time="2025-01-13T20:46:15.840082960Z" level=error msg="encountered an error cleaning up failed sandbox \"fcccad5796a3f95d91d76d619b057d5870548397362400a48acccd8e92c373e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.840843 containerd[1563]: time="2025-01-13T20:46:15.840144815Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qqtwl,Uid:fe5ec2be-450d-4cfd-bd06-ed7ea343b06b,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"fcccad5796a3f95d91d76d619b057d5870548397362400a48acccd8e92c373e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.840931 kubelet[2805]: E0113 20:46:15.838930 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2wkvn_calico-system(31e31ef7-4073-4fa6-8a57-6102fb32a16b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-2wkvn_calico-system(31e31ef7-4073-4fa6-8a57-6102fb32a16b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f7d35e2c8f4dc06a81e4f8126194da5fc1d9231f303b8a85e8d9d25ecaafa3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2wkvn" podUID="31e31ef7-4073-4fa6-8a57-6102fb32a16b" Jan 13 20:46:15.840931 kubelet[2805]: E0113 20:46:15.840359 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcccad5796a3f95d91d76d619b057d5870548397362400a48acccd8e92c373e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.840931 kubelet[2805]: E0113 20:46:15.840495 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcccad5796a3f95d91d76d619b057d5870548397362400a48acccd8e92c373e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qqtwl" Jan 13 20:46:15.841063 kubelet[2805]: E0113 20:46:15.840520 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcccad5796a3f95d91d76d619b057d5870548397362400a48acccd8e92c373e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qqtwl" Jan 13 20:46:15.841063 kubelet[2805]: E0113 20:46:15.840571 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-qqtwl_kube-system(fe5ec2be-450d-4cfd-bd06-ed7ea343b06b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-qqtwl_kube-system(fe5ec2be-450d-4cfd-bd06-ed7ea343b06b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fcccad5796a3f95d91d76d619b057d5870548397362400a48acccd8e92c373e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qqtwl" podUID="fe5ec2be-450d-4cfd-bd06-ed7ea343b06b" Jan 13 20:46:15.855448 containerd[1563]: time="2025-01-13T20:46:15.855388426Z" level=error msg="Failed to destroy network for sandbox \"2ac965fda89b040d63fde4b741c9e49bdc584b17ba34d488593e501101e37a31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.855846 containerd[1563]: time="2025-01-13T20:46:15.855773494Z" level=error msg="encountered an error cleaning up failed sandbox \"2ac965fda89b040d63fde4b741c9e49bdc584b17ba34d488593e501101e37a31\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 13 20:46:15.855898 containerd[1563]: time="2025-01-13T20:46:15.855879299Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5dksv,Uid:0607ab24-d1bd-4a5b-b65b-ec7237434967,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"2ac965fda89b040d63fde4b741c9e49bdc584b17ba34d488593e501101e37a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.856180 kubelet[2805]: E0113 20:46:15.856141 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ac965fda89b040d63fde4b741c9e49bdc584b17ba34d488593e501101e37a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.856237 kubelet[2805]: E0113 20:46:15.856209 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ac965fda89b040d63fde4b741c9e49bdc584b17ba34d488593e501101e37a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-5dksv" Jan 13 20:46:15.856237 kubelet[2805]: E0113 20:46:15.856236 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ac965fda89b040d63fde4b741c9e49bdc584b17ba34d488593e501101e37a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-5dksv" Jan 13 20:46:15.856308 kubelet[2805]: E0113 20:46:15.856288 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-5dksv_kube-system(0607ab24-d1bd-4a5b-b65b-ec7237434967)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-5dksv_kube-system(0607ab24-d1bd-4a5b-b65b-ec7237434967)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ac965fda89b040d63fde4b741c9e49bdc584b17ba34d488593e501101e37a31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-5dksv" podUID="0607ab24-d1bd-4a5b-b65b-ec7237434967" Jan 13 20:46:15.858314 containerd[1563]: time="2025-01-13T20:46:15.858265503Z" level=error msg="Failed to destroy network for sandbox \"caf9370ad8c7c3ee841dc5c6936d37d70447da60f45df554c0fcf85e7e9795b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.858603 containerd[1563]: time="2025-01-13T20:46:15.858580520Z" level=error msg="encountered an error cleaning up failed sandbox \"caf9370ad8c7c3ee841dc5c6936d37d70447da60f45df554c0fcf85e7e9795b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.858658 containerd[1563]: time="2025-01-13T20:46:15.858620250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664ddbb6d-trmvh,Uid:79a4d693-fc20-457c-9d67-0b17b1742b23,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"caf9370ad8c7c3ee841dc5c6936d37d70447da60f45df554c0fcf85e7e9795b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.858820 kubelet[2805]: E0113 20:46:15.858804 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caf9370ad8c7c3ee841dc5c6936d37d70447da60f45df554c0fcf85e7e9795b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.858851 kubelet[2805]: E0113 20:46:15.858833 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caf9370ad8c7c3ee841dc5c6936d37d70447da60f45df554c0fcf85e7e9795b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5664ddbb6d-trmvh" Jan 13 20:46:15.858851 kubelet[2805]: E0113 20:46:15.858850 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caf9370ad8c7c3ee841dc5c6936d37d70447da60f45df554c0fcf85e7e9795b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5664ddbb6d-trmvh" Jan 13 20:46:15.858927 kubelet[2805]: E0113 20:46:15.858886 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5664ddbb6d-trmvh_calico-apiserver(79a4d693-fc20-457c-9d67-0b17b1742b23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5664ddbb6d-trmvh_calico-apiserver(79a4d693-fc20-457c-9d67-0b17b1742b23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"caf9370ad8c7c3ee841dc5c6936d37d70447da60f45df554c0fcf85e7e9795b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5664ddbb6d-trmvh" podUID="79a4d693-fc20-457c-9d67-0b17b1742b23" Jan 13 20:46:15.870502 containerd[1563]: time="2025-01-13T20:46:15.869673658Z" level=error msg="Failed to destroy network for sandbox \"cb9c6220f5b41b6479c33d58013e6191f647ac92d9e9c239add284e0ccbb2954\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.870502 containerd[1563]: time="2025-01-13T20:46:15.870076724Z" level=error msg="encountered an error cleaning up failed sandbox \"cb9c6220f5b41b6479c33d58013e6191f647ac92d9e9c239add284e0ccbb2954\", marking 
sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.870502 containerd[1563]: time="2025-01-13T20:46:15.870144260Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664ddbb6d-jj75b,Uid:75c72d4a-2713-4d59-9a08-2119aee5935e,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"cb9c6220f5b41b6479c33d58013e6191f647ac92d9e9c239add284e0ccbb2954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.870722 kubelet[2805]: E0113 20:46:15.870318 2805 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb9c6220f5b41b6479c33d58013e6191f647ac92d9e9c239add284e0ccbb2954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:46:15.870722 kubelet[2805]: E0113 20:46:15.870365 2805 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb9c6220f5b41b6479c33d58013e6191f647ac92d9e9c239add284e0ccbb2954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5664ddbb6d-jj75b" Jan 13 20:46:15.870722 kubelet[2805]: E0113 20:46:15.870400 2805 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb9c6220f5b41b6479c33d58013e6191f647ac92d9e9c239add284e0ccbb2954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5664ddbb6d-jj75b" Jan 13 20:46:15.870817 kubelet[2805]: E0113 20:46:15.870456 2805 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5664ddbb6d-jj75b_calico-apiserver(75c72d4a-2713-4d59-9a08-2119aee5935e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5664ddbb6d-jj75b_calico-apiserver(75c72d4a-2713-4d59-9a08-2119aee5935e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb9c6220f5b41b6479c33d58013e6191f647ac92d9e9c239add284e0ccbb2954\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5664ddbb6d-jj75b" podUID="75c72d4a-2713-4d59-9a08-2119aee5935e" Jan 13 20:46:16.002257 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 20:46:16.003065 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 13 20:46:16.051550 containerd[1563]: time="2025-01-13T20:46:16.051493075Z" level=info msg="StartContainer for \"d5cbb2aa29546a995b6e29ff01b4aeefd8bb8f8f022ce7df93a1738ed075afef\" returns successfully" Jan 13 20:46:16.233978 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-caf9370ad8c7c3ee841dc5c6936d37d70447da60f45df554c0fcf85e7e9795b4-shm.mount: Deactivated successfully. Jan 13 20:46:16.234205 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9f7d35e2c8f4dc06a81e4f8126194da5fc1d9231f303b8a85e8d9d25ecaafa3d-shm.mount: Deactivated successfully. Jan 13 20:46:16.234418 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2ac965fda89b040d63fde4b741c9e49bdc584b17ba34d488593e501101e37a31-shm.mount: Deactivated successfully. Jan 13 20:46:16.234610 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b435372828617d3628cfb6e316b6a4749d81b223ff185af1e969759091cae797-shm.mount: Deactivated successfully. Jan 13 20:46:16.573139 kubelet[2805]: I0113 20:46:16.573110 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ac965fda89b040d63fde4b741c9e49bdc584b17ba34d488593e501101e37a31" Jan 13 20:46:16.573808 containerd[1563]: time="2025-01-13T20:46:16.573764168Z" level=info msg="StopPodSandbox for \"2ac965fda89b040d63fde4b741c9e49bdc584b17ba34d488593e501101e37a31\"" Jan 13 20:46:16.574034 containerd[1563]: time="2025-01-13T20:46:16.574001598Z" level=info msg="Ensure that sandbox 2ac965fda89b040d63fde4b741c9e49bdc584b17ba34d488593e501101e37a31 in task-service has been cleanup successfully" Jan 13 20:46:16.575096 kubelet[2805]: I0113 20:46:16.575072 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcccad5796a3f95d91d76d619b057d5870548397362400a48acccd8e92c373e8" Jan 13 20:46:16.575333 containerd[1563]: time="2025-01-13T20:46:16.575309841Z" level=info msg="TearDown network for sandbox \"2ac965fda89b040d63fde4b741c9e49bdc584b17ba34d488593e501101e37a31\" successfully" Jan 13 20:46:16.575440 containerd[1563]: time="2025-01-13T20:46:16.575420815Z" level=info msg="StopPodSandbox for \"2ac965fda89b040d63fde4b741c9e49bdc584b17ba34d488593e501101e37a31\" returns successfully" Jan 13 20:46:16.576217 containerd[1563]: time="2025-01-13T20:46:16.575937138Z" level=info msg="StopPodSandbox for \"fcccad5796a3f95d91d76d619b057d5870548397362400a48acccd8e92c373e8\"" Jan 13 20:46:16.576217 containerd[1563]: time="2025-01-13T20:46:16.576094917Z" level=info msg="Ensure that sandbox fcccad5796a3f95d91d76d619b057d5870548397362400a48acccd8e92c373e8 in task-service has been cleanup successfully" Jan 13 20:46:16.576438 containerd[1563]: time="2025-01-13T20:46:16.576365713Z" level=info msg="TearDown network for sandbox \"fcccad5796a3f95d91d76d619b057d5870548397362400a48acccd8e92c373e8\" successfully" Jan 13 20:46:16.576438 containerd[1563]: time="2025-01-13T20:46:16.576392949Z" level=info msg="StopPodSandbox for \"fcccad5796a3f95d91d76d619b057d5870548397362400a48acccd8e92c373e8\" returns successfully" Jan 13 20:46:16.576571 containerd[1563]: time="2025-01-13T20:46:16.576504133Z" level=info msg="StopPodSandbox for \"2ae7eebb744563574d319c34b5b4bc8b67a4011d0ffa583e06e321e5ecfab7e5\"" Jan 13 20:46:16.576616 containerd[1563]: time="2025-01-13T20:46:16.576595638Z" level=info msg="TearDown network for sandbox \"2ae7eebb744563574d319c34b5b4bc8b67a4011d0ffa583e06e321e5ecfab7e5\" successfully" Jan 13 20:46:16.576641 containerd[1563]: time="2025-01-13T20:46:16.576616791Z" level=info msg="StopPodSandbox for 
\"2ae7eebb744563574d319c34b5b4bc8b67a4011d0ffa583e06e321e5ecfab7e5\" returns successfully" Jan 13 20:46:16.578139 containerd[1563]: time="2025-01-13T20:46:16.576989304Z" level=info msg="StopPodSandbox for \"b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60\"" Jan 13 20:46:16.578139 containerd[1563]: time="2025-01-13T20:46:16.577068984Z" level=info msg="TearDown network for sandbox \"b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60\" successfully" Jan 13 20:46:16.578139 containerd[1563]: time="2025-01-13T20:46:16.577079486Z" level=info msg="StopPodSandbox for \"b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60\" returns successfully" Jan 13 20:46:16.578139 containerd[1563]: time="2025-01-13T20:46:16.577115108Z" level=info msg="StopPodSandbox for \"cf6044a73a4a11e7cba11e9f900528dfdc07c46bd3452f873ebaa75aef7b0f39\"" Jan 13 20:46:16.578139 containerd[1563]: time="2025-01-13T20:46:16.577175199Z" level=info msg="TearDown network for sandbox \"cf6044a73a4a11e7cba11e9f900528dfdc07c46bd3452f873ebaa75aef7b0f39\" successfully" Jan 13 20:46:16.578139 containerd[1563]: time="2025-01-13T20:46:16.577183446Z" level=info msg="StopPodSandbox for \"cf6044a73a4a11e7cba11e9f900528dfdc07c46bd3452f873ebaa75aef7b0f39\" returns successfully" Jan 13 20:46:16.578139 containerd[1563]: time="2025-01-13T20:46:16.577399692Z" level=info msg="StopPodSandbox for \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\"" Jan 13 20:46:16.578139 containerd[1563]: time="2025-01-13T20:46:16.577461698Z" level=info msg="TearDown network for sandbox \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\" successfully" Jan 13 20:46:16.578139 containerd[1563]: time="2025-01-13T20:46:16.577470886Z" level=info msg="StopPodSandbox for \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\" returns successfully" Jan 13 20:46:16.578139 containerd[1563]: time="2025-01-13T20:46:16.577511788Z" level=info msg="StopPodSandbox for \"4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63\"" Jan 13 20:46:16.578139 containerd[1563]: time="2025-01-13T20:46:16.577570437Z" level=info msg="TearDown network for sandbox \"4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63\" successfully" Jan 13 20:46:16.578139 containerd[1563]: time="2025-01-13T20:46:16.577578594Z" level=info msg="StopPodSandbox for \"4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63\" returns successfully" Jan 13 20:46:16.578139 containerd[1563]: time="2025-01-13T20:46:16.577755341Z" level=info msg="StopPodSandbox for \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\"" Jan 13 20:46:16.578139 containerd[1563]: time="2025-01-13T20:46:16.577834901Z" level=info msg="TearDown network for sandbox \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\" successfully" Jan 13 20:46:16.578139 containerd[1563]: time="2025-01-13T20:46:16.577847086Z" level=info msg="StopPodSandbox for \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\" returns successfully" Jan 13 20:46:16.578139 containerd[1563]: time="2025-01-13T20:46:16.577893330Z" level=info msg="StopPodSandbox for \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\"" Jan 13 20:46:16.578139 containerd[1563]: time="2025-01-13T20:46:16.577959763Z" level=info msg="TearDown network for sandbox \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\" successfully" Jan 13 20:46:16.578139 containerd[1563]: time="2025-01-13T20:46:16.577975496Z" level=info 
msg="StopPodSandbox for \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\" returns successfully" Jan 13 20:46:16.578638 kubelet[2805]: E0113 20:46:16.578168 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:16.578819 containerd[1563]: time="2025-01-13T20:46:16.578801133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5dksv,Uid:0607ab24-d1bd-4a5b-b65b-ec7237434967,Namespace:kube-system,Attempt:5,}" Jan 13 20:46:16.579178 containerd[1563]: time="2025-01-13T20:46:16.579151391Z" level=info msg="StopPodSandbox for \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\"" Jan 13 20:46:16.579272 containerd[1563]: time="2025-01-13T20:46:16.579260451Z" level=info msg="TearDown network for sandbox \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\" successfully" Jan 13 20:46:16.579303 containerd[1563]: time="2025-01-13T20:46:16.579274249Z" level=info msg="StopPodSandbox for \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\" returns successfully" Jan 13 20:46:16.579425 systemd[1]: run-netns-cni\x2d18bb9d22\x2de69c\x2d706c\x2d9aa7\x2d00e1dfec2eee.mount: Deactivated successfully. Jan 13 20:46:16.579807 systemd[1]: run-netns-cni\x2d7257c5c9\x2de71e\x2dcd65\x2d35bf\x2dbe9b9a7b941a.mount: Deactivated successfully. Jan 13 20:46:16.579925 kubelet[2805]: E0113 20:46:16.579909 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:16.580516 containerd[1563]: time="2025-01-13T20:46:16.580140800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qqtwl,Uid:fe5ec2be-450d-4cfd-bd06-ed7ea343b06b,Namespace:kube-system,Attempt:5,}" Jan 13 20:46:16.581610 kubelet[2805]: E0113 20:46:16.581591 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:16.585513 kubelet[2805]: I0113 20:46:16.585490 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caf9370ad8c7c3ee841dc5c6936d37d70447da60f45df554c0fcf85e7e9795b4" Jan 13 20:46:16.586391 containerd[1563]: time="2025-01-13T20:46:16.586350033Z" level=info msg="StopPodSandbox for \"caf9370ad8c7c3ee841dc5c6936d37d70447da60f45df554c0fcf85e7e9795b4\"" Jan 13 20:46:16.586624 containerd[1563]: time="2025-01-13T20:46:16.586597503Z" level=info msg="Ensure that sandbox caf9370ad8c7c3ee841dc5c6936d37d70447da60f45df554c0fcf85e7e9795b4 in task-service has been cleanup successfully" Jan 13 20:46:16.586842 containerd[1563]: time="2025-01-13T20:46:16.586821526Z" level=info msg="TearDown network for sandbox \"caf9370ad8c7c3ee841dc5c6936d37d70447da60f45df554c0fcf85e7e9795b4\" successfully" Jan 13 20:46:16.586931 containerd[1563]: time="2025-01-13T20:46:16.586911678Z" level=info msg="StopPodSandbox for \"caf9370ad8c7c3ee841dc5c6936d37d70447da60f45df554c0fcf85e7e9795b4\" returns successfully" Jan 13 20:46:16.587534 containerd[1563]: time="2025-01-13T20:46:16.587511951Z" level=info msg="StopPodSandbox for \"3e41efa58f020ae5f8d789bef1a4010a9b389b6ec8fba88939ba3ae7dd54a479\"" Jan 13 20:46:16.591162 systemd[1]: run-netns-cni\x2d2c7fe10d\x2d241e\x2d297e\x2d3089\x2d88990cbbb434.mount: Deactivated successfully. 
Jan 13 20:46:16.592130 kubelet[2805]: I0113 20:46:16.591973 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb9c6220f5b41b6479c33d58013e6191f647ac92d9e9c239add284e0ccbb2954" Jan 13 20:46:16.592506 containerd[1563]: time="2025-01-13T20:46:16.592480188Z" level=info msg="StopPodSandbox for \"cb9c6220f5b41b6479c33d58013e6191f647ac92d9e9c239add284e0ccbb2954\"" Jan 13 20:46:16.592691 containerd[1563]: time="2025-01-13T20:46:16.592673408Z" level=info msg="Ensure that sandbox cb9c6220f5b41b6479c33d58013e6191f647ac92d9e9c239add284e0ccbb2954 in task-service has been cleanup successfully" Jan 13 20:46:16.595449 containerd[1563]: time="2025-01-13T20:46:16.595427611Z" level=info msg="TearDown network for sandbox \"cb9c6220f5b41b6479c33d58013e6191f647ac92d9e9c239add284e0ccbb2954\" successfully" Jan 13 20:46:16.595515 containerd[1563]: time="2025-01-13T20:46:16.595502422Z" level=info msg="StopPodSandbox for \"cb9c6220f5b41b6479c33d58013e6191f647ac92d9e9c239add284e0ccbb2954\" returns successfully" Jan 13 20:46:16.596035 containerd[1563]: time="2025-01-13T20:46:16.596017754Z" level=info msg="StopPodSandbox for \"0477bd1d4063ed14d9101c0f0488c742a54bd08b8c80bac5c66e223c49e15bff\"" Jan 13 20:46:16.596171 containerd[1563]: time="2025-01-13T20:46:16.596157726Z" level=info msg="TearDown network for sandbox \"0477bd1d4063ed14d9101c0f0488c742a54bd08b8c80bac5c66e223c49e15bff\" successfully" Jan 13 20:46:16.597245 containerd[1563]: time="2025-01-13T20:46:16.596214641Z" level=info msg="StopPodSandbox for \"0477bd1d4063ed14d9101c0f0488c742a54bd08b8c80bac5c66e223c49e15bff\" returns successfully" Jan 13 20:46:16.597467 containerd[1563]: time="2025-01-13T20:46:16.597439115Z" level=info msg="StopPodSandbox for \"77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4\"" Jan 13 20:46:16.597567 containerd[1563]: time="2025-01-13T20:46:16.597545089Z" level=info msg="TearDown network for sandbox \"77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4\" successfully" Jan 13 20:46:16.597567 containerd[1563]: time="2025-01-13T20:46:16.597561011Z" level=info msg="StopPodSandbox for \"77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4\" returns successfully" Jan 13 20:46:16.598703 systemd[1]: run-netns-cni\x2dd89e757e\x2dd524\x2dda87\x2d3623\x2da6b9cd3bf9a9.mount: Deactivated successfully. 
Jan 13 20:46:16.670781 containerd[1563]: time="2025-01-13T20:46:16.670728660Z" level=info msg="TearDown network for sandbox \"3e41efa58f020ae5f8d789bef1a4010a9b389b6ec8fba88939ba3ae7dd54a479\" successfully" Jan 13 20:46:16.671409 containerd[1563]: time="2025-01-13T20:46:16.670955468Z" level=info msg="StopPodSandbox for \"3e41efa58f020ae5f8d789bef1a4010a9b389b6ec8fba88939ba3ae7dd54a479\" returns successfully" Jan 13 20:46:16.672337 containerd[1563]: time="2025-01-13T20:46:16.672311107Z" level=info msg="StopPodSandbox for \"625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e\"" Jan 13 20:46:16.672621 containerd[1563]: time="2025-01-13T20:46:16.672585191Z" level=info msg="TearDown network for sandbox \"625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e\" successfully" Jan 13 20:46:16.672727 containerd[1563]: time="2025-01-13T20:46:16.672710254Z" level=info msg="StopPodSandbox for \"625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e\" returns successfully" Jan 13 20:46:16.672891 containerd[1563]: time="2025-01-13T20:46:16.672873544Z" level=info msg="StopPodSandbox for \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\"" Jan 13 20:46:16.673091 containerd[1563]: time="2025-01-13T20:46:16.673074189Z" level=info msg="TearDown network for sandbox \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\" successfully" Jan 13 20:46:16.673200 containerd[1563]: time="2025-01-13T20:46:16.673180133Z" level=info msg="StopPodSandbox for \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\" returns successfully" Jan 13 20:46:16.673915 containerd[1563]: time="2025-01-13T20:46:16.673892652Z" level=info msg="StopPodSandbox for \"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\"" Jan 13 20:46:16.674244 containerd[1563]: time="2025-01-13T20:46:16.674131625Z" level=info msg="TearDown network for sandbox \"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\" successfully" Jan 13 20:46:16.674244 containerd[1563]: time="2025-01-13T20:46:16.674176256Z" level=info msg="StopPodSandbox for \"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\" returns successfully" Jan 13 20:46:16.674434 containerd[1563]: time="2025-01-13T20:46:16.674415979Z" level=info msg="StopPodSandbox for \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\"" Jan 13 20:46:16.674651 containerd[1563]: time="2025-01-13T20:46:16.674614360Z" level=info msg="TearDown network for sandbox \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\" successfully" Jan 13 20:46:16.674778 containerd[1563]: time="2025-01-13T20:46:16.674712198Z" level=info msg="StopPodSandbox for \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\" returns successfully" Jan 13 20:46:16.676464 containerd[1563]: time="2025-01-13T20:46:16.676430208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664ddbb6d-jj75b,Uid:75c72d4a-2713-4d59-9a08-2119aee5935e,Namespace:calico-apiserver,Attempt:5,}" Jan 13 20:46:16.678247 kubelet[2805]: I0113 20:46:16.678164 2805 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b435372828617d3628cfb6e316b6a4749d81b223ff185af1e969759091cae797" Jan 13 20:46:16.679695 containerd[1563]: time="2025-01-13T20:46:16.679464729Z" level=info msg="StopPodSandbox for \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\"" Jan 13 20:46:16.680684 containerd[1563]: time="2025-01-13T20:46:16.679788643Z" level=info msg="TearDown 
network for sandbox \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\" successfully" Jan 13 20:46:16.680684 containerd[1563]: time="2025-01-13T20:46:16.679814966Z" level=info msg="StopPodSandbox for \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\" returns successfully" Jan 13 20:46:16.680684 containerd[1563]: time="2025-01-13T20:46:16.679988176Z" level=info msg="StopPodSandbox for \"b435372828617d3628cfb6e316b6a4749d81b223ff185af1e969759091cae797\"" Jan 13 20:46:16.680684 containerd[1563]: time="2025-01-13T20:46:16.680303392Z" level=info msg="Ensure that sandbox b435372828617d3628cfb6e316b6a4749d81b223ff185af1e969759091cae797 in task-service has been cleanup successfully" Jan 13 20:46:16.680684 containerd[1563]: time="2025-01-13T20:46:16.680584951Z" level=info msg="TearDown network for sandbox \"b435372828617d3628cfb6e316b6a4749d81b223ff185af1e969759091cae797\" successfully" Jan 13 20:46:16.680684 containerd[1563]: time="2025-01-13T20:46:16.680603269Z" level=info msg="StopPodSandbox for \"b435372828617d3628cfb6e316b6a4749d81b223ff185af1e969759091cae797\" returns successfully" Jan 13 20:46:16.680851 containerd[1563]: time="2025-01-13T20:46:16.680796369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664ddbb6d-trmvh,Uid:79a4d693-fc20-457c-9d67-0b17b1742b23,Namespace:calico-apiserver,Attempt:5,}" Jan 13 20:46:16.681076 containerd[1563]: time="2025-01-13T20:46:16.681039279Z" level=info msg="StopPodSandbox for \"6810cabbf0169a1078e4d75283af2f4af9ef6e567522f6f5ffdd0a5c639a44e0\"" Jan 13 20:46:16.681311 containerd[1563]: time="2025-01-13T20:46:16.681126406Z" level=info msg="TearDown network for sandbox \"6810cabbf0169a1078e4d75283af2f4af9ef6e567522f6f5ffdd0a5c639a44e0\" successfully" Jan 13 20:46:16.681311 containerd[1563]: time="2025-01-13T20:46:16.681148430Z" level=info msg="StopPodSandbox for \"6810cabbf0169a1078e4d75283af2f4af9ef6e567522f6f5ffdd0a5c639a44e0\" returns successfully" Jan 13 20:46:16.683005 containerd[1563]: time="2025-01-13T20:46:16.682982766Z" level=info msg="StopPodSandbox for \"e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084\"" Jan 13 20:46:16.683241 containerd[1563]: time="2025-01-13T20:46:16.683209654Z" level=info msg="TearDown network for sandbox \"e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084\" successfully" Jan 13 20:46:16.683322 containerd[1563]: time="2025-01-13T20:46:16.683309686Z" level=info msg="StopPodSandbox for \"e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084\" returns successfully" Jan 13 20:46:16.683923 containerd[1563]: time="2025-01-13T20:46:16.683904919Z" level=info msg="StopPodSandbox for \"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\"" Jan 13 20:46:16.684095 containerd[1563]: time="2025-01-13T20:46:16.684080713Z" level=info msg="TearDown network for sandbox \"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\" successfully" Jan 13 20:46:16.684426 containerd[1563]: time="2025-01-13T20:46:16.684139362Z" level=info msg="StopPodSandbox for \"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\" returns successfully" Jan 13 20:46:16.684761 containerd[1563]: time="2025-01-13T20:46:16.684731829Z" level=info msg="StopPodSandbox for \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\"" Jan 13 20:46:16.684808 kubelet[2805]: I0113 20:46:16.684785 2805 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="9f7d35e2c8f4dc06a81e4f8126194da5fc1d9231f303b8a85e8d9d25ecaafa3d" Jan 13 20:46:16.684866 containerd[1563]: time="2025-01-13T20:46:16.684842933Z" level=info msg="TearDown network for sandbox \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\" successfully" Jan 13 20:46:16.684893 containerd[1563]: time="2025-01-13T20:46:16.684864297Z" level=info msg="StopPodSandbox for \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\" returns successfully" Jan 13 20:46:16.685294 containerd[1563]: time="2025-01-13T20:46:16.685252711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dfc47ddd6-rhff2,Uid:25c9c48c-a9ce-4e21-b742-444ac830dfed,Namespace:calico-system,Attempt:5,}" Jan 13 20:46:16.685739 containerd[1563]: time="2025-01-13T20:46:16.685716048Z" level=info msg="StopPodSandbox for \"9f7d35e2c8f4dc06a81e4f8126194da5fc1d9231f303b8a85e8d9d25ecaafa3d\"" Jan 13 20:46:16.686035 containerd[1563]: time="2025-01-13T20:46:16.686015061Z" level=info msg="Ensure that sandbox 9f7d35e2c8f4dc06a81e4f8126194da5fc1d9231f303b8a85e8d9d25ecaafa3d in task-service has been cleanup successfully" Jan 13 20:46:16.686422 containerd[1563]: time="2025-01-13T20:46:16.686277662Z" level=info msg="TearDown network for sandbox \"9f7d35e2c8f4dc06a81e4f8126194da5fc1d9231f303b8a85e8d9d25ecaafa3d\" successfully" Jan 13 20:46:16.686422 containerd[1563]: time="2025-01-13T20:46:16.686304787Z" level=info msg="StopPodSandbox for \"9f7d35e2c8f4dc06a81e4f8126194da5fc1d9231f303b8a85e8d9d25ecaafa3d\" returns successfully" Jan 13 20:46:16.686599 containerd[1563]: time="2025-01-13T20:46:16.686574000Z" level=info msg="StopPodSandbox for \"b3acd73dbbd1ad8b5ae9837060550cd750d587c9bd141b8524d10501474768db\"" Jan 13 20:46:16.686715 containerd[1563]: time="2025-01-13T20:46:16.686694443Z" level=info msg="TearDown network for sandbox \"b3acd73dbbd1ad8b5ae9837060550cd750d587c9bd141b8524d10501474768db\" successfully" Jan 13 20:46:16.686752 containerd[1563]: time="2025-01-13T20:46:16.686714594Z" level=info msg="StopPodSandbox for \"b3acd73dbbd1ad8b5ae9837060550cd750d587c9bd141b8524d10501474768db\" returns successfully" Jan 13 20:46:16.687150 containerd[1563]: time="2025-01-13T20:46:16.686995221Z" level=info msg="StopPodSandbox for \"4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6\"" Jan 13 20:46:16.687150 containerd[1563]: time="2025-01-13T20:46:16.687084021Z" level=info msg="TearDown network for sandbox \"4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6\" successfully" Jan 13 20:46:16.687150 containerd[1563]: time="2025-01-13T20:46:16.687093910Z" level=info msg="StopPodSandbox for \"4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6\" returns successfully" Jan 13 20:46:16.692345 containerd[1563]: time="2025-01-13T20:46:16.689780899Z" level=info msg="StopPodSandbox for \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\"" Jan 13 20:46:16.692544 containerd[1563]: time="2025-01-13T20:46:16.692485151Z" level=info msg="TearDown network for sandbox \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\" successfully" Jan 13 20:46:16.692544 containerd[1563]: time="2025-01-13T20:46:16.692529461Z" level=info msg="StopPodSandbox for \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\" returns successfully" Jan 13 20:46:16.692887 containerd[1563]: time="2025-01-13T20:46:16.692864327Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-2wkvn,Uid:31e31ef7-4073-4fa6-8a57-6102fb32a16b,Namespace:calico-system,Attempt:4,}" Jan 13 20:46:16.782639 systemd[1]: Started sshd@10-10.0.0.149:22-10.0.0.1:53184.service - OpenSSH per-connection server daemon (10.0.0.1:53184). Jan 13 20:46:17.213700 sshd[4790]: Accepted publickey for core from 10.0.0.1 port 53184 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:46:17.216755 sshd-session[4790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:46:17.220982 systemd-logind[1542]: New session 11 of user core. Jan 13 20:46:17.227886 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:46:17.231243 systemd[1]: run-netns-cni\x2dfe744e0b\x2d618c\x2dd32d\x2dc7b6\x2d71a0bbf37324.mount: Deactivated successfully. Jan 13 20:46:17.231458 systemd[1]: run-netns-cni\x2d14d14929\x2de8bf\x2d50d7\x2d5180\x2d4ffe9aa6489a.mount: Deactivated successfully. Jan 13 20:46:17.688081 kubelet[2805]: E0113 20:46:17.688043 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:17.713277 systemd[1]: run-containerd-runc-k8s.io-d5cbb2aa29546a995b6e29ff01b4aeefd8bb8f8f022ce7df93a1738ed075afef-runc.uYWBbH.mount: Deactivated successfully. Jan 13 20:46:17.783322 sshd[4793]: Connection closed by 10.0.0.1 port 53184 Jan 13 20:46:17.783771 sshd-session[4790]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:17.791651 systemd[1]: Started sshd@11-10.0.0.149:22-10.0.0.1:53190.service - OpenSSH per-connection server daemon (10.0.0.1:53190). Jan 13 20:46:17.792140 systemd[1]: sshd@10-10.0.0.149:22-10.0.0.1:53184.service: Deactivated successfully. Jan 13 20:46:17.795320 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:46:17.795617 systemd-logind[1542]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:46:17.796747 systemd-logind[1542]: Removed session 11. Jan 13 20:46:17.823488 sshd[4840]: Accepted publickey for core from 10.0.0.1 port 53190 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:46:17.824837 sshd-session[4840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:46:17.828775 systemd-logind[1542]: New session 12 of user core. Jan 13 20:46:17.840652 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:46:18.786462 kernel: bpftool[4963]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 20:46:18.842898 sshd[4846]: Connection closed by 10.0.0.1 port 53190 Jan 13 20:46:18.850762 systemd[1]: Started sshd@12-10.0.0.149:22-10.0.0.1:53206.service - OpenSSH per-connection server daemon (10.0.0.1:53206). Jan 13 20:46:18.893211 sshd-session[4840]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:18.897734 systemd[1]: sshd@11-10.0.0.149:22-10.0.0.1:53190.service: Deactivated successfully. Jan 13 20:46:18.901048 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:46:18.902459 systemd-logind[1542]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:46:18.903789 systemd-logind[1542]: Removed session 12. 
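(Editorial aside: the sshd and systemd-logind entries above open and close numbered sessions in matched pairs: "Accepted publickey", "New session N", then "Connection closed" and "Removed session N". Below is a minimal Go sketch of pairing those lines to measure session lifetimes. It assumes one journal entry per input line and exactly the phrasing seen in this log; all names are illustrative, not part of any real tool.)

```go
// Pair sshd "Accepted publickey" / "Connection closed" journal lines,
// keyed by client port, and print how long each session lasted.
// Sketch only: assumes one entry per line and the phrasing in this log.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	// e.g. "Jan 13 20:46:17.213700 sshd[4790]: Accepted publickey for core from 10.0.0.1 port 53184 ssh2: ..."
	openRe  = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) sshd\[\d+\]: Accepted publickey for \S+ from \S+ port (\d+)`)
	closeRe = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) sshd\[\d+\]: Connection closed by \S+ port (\d+)`)
)

// parseStamp parses the short journal timestamp; the year is absent from
// the log, so 2025 is supplied from context.
func parseStamp(s string) (time.Time, error) {
	return time.Parse("Jan 2 15:04:05.000000 2006", s+" 2025")
}

func main() {
	opened := map[string]time.Time{} // client port -> accept time
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		if m := openRe.FindStringSubmatch(line); m != nil {
			if t, err := parseStamp(m[1]); err == nil {
				opened[m[2]] = t
			}
		} else if m := closeRe.FindStringSubmatch(line); m != nil {
			if t, err := parseStamp(m[1]); err == nil {
				if start, ok := opened[m[2]]; ok {
					fmt.Printf("port %s: session lasted %s\n", m[2], t.Sub(start))
					delete(opened, m[2])
				}
			}
		}
	}
}
```

Fed the three sessions above, it would report lifetimes of roughly 0.57 s (port 53184), 1.02 s (port 53190), and 0.25 s (port 53206).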
Jan 13 20:46:18.931712 sshd[4966]: Accepted publickey for core from 10.0.0.1 port 53206 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:46:18.933776 sshd-session[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:46:18.939408 systemd-logind[1542]: New session 13 of user core. Jan 13 20:46:18.952701 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:46:19.068694 systemd-networkd[1243]: vxlan.calico: Link UP Jan 13 20:46:19.068703 systemd-networkd[1243]: vxlan.calico: Gained carrier Jan 13 20:46:19.179676 sshd[4989]: Connection closed by 10.0.0.1 port 53206 Jan 13 20:46:19.180097 sshd-session[4966]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:19.186256 systemd[1]: sshd@12-10.0.0.149:22-10.0.0.1:53206.service: Deactivated successfully. Jan 13 20:46:19.186655 systemd-logind[1542]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:46:19.189459 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:46:19.190631 systemd-logind[1542]: Removed session 13. Jan 13 20:46:20.558502 systemd-networkd[1243]: vxlan.calico: Gained IPv6LL Jan 13 20:46:21.517088 systemd-networkd[1243]: cali1c2a9a3727f: Link UP Jan 13 20:46:21.517352 systemd-networkd[1243]: cali1c2a9a3727f: Gained carrier Jan 13 20:46:21.610658 kubelet[2805]: I0113 20:46:21.607905 2805 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-qwhcl" podStartSLOduration=6.449140829 podStartE2EDuration="29.607809949s" podCreationTimestamp="2025-01-13 20:45:52 +0000 UTC" firstStartedPulling="2025-01-13 20:45:52.584212723 +0000 UTC m=+22.303060670" lastFinishedPulling="2025-01-13 20:46:15.742881853 +0000 UTC m=+45.461729790" observedRunningTime="2025-01-13 20:46:16.901342574 +0000 UTC m=+46.620190521" watchObservedRunningTime="2025-01-13 20:46:21.607809949 +0000 UTC m=+51.326657896" Jan 13 20:46:21.614056 containerd[1563]: 2025-01-13 20:46:21.219 [INFO][5075] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5664ddbb6d--jj75b-eth0 calico-apiserver-5664ddbb6d- calico-apiserver 75c72d4a-2713-4d59-9a08-2119aee5935e 769 0 2025-01-13 20:45:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5664ddbb6d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5664ddbb6d-jj75b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1c2a9a3727f [] []}} ContainerID="ca5bf9696ac1d11fb5b850724f36dbfd07329bec7e13049454eeb1e02d9d6576" Namespace="calico-apiserver" Pod="calico-apiserver-5664ddbb6d-jj75b" WorkloadEndpoint="localhost-k8s-calico--apiserver--5664ddbb6d--jj75b-" Jan 13 20:46:21.614056 containerd[1563]: 2025-01-13 20:46:21.220 [INFO][5075] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ca5bf9696ac1d11fb5b850724f36dbfd07329bec7e13049454eeb1e02d9d6576" Namespace="calico-apiserver" Pod="calico-apiserver-5664ddbb6d-jj75b" WorkloadEndpoint="localhost-k8s-calico--apiserver--5664ddbb6d--jj75b-eth0" Jan 13 20:46:21.614056 containerd[1563]: 2025-01-13 20:46:21.359 [INFO][5088] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca5bf9696ac1d11fb5b850724f36dbfd07329bec7e13049454eeb1e02d9d6576" 
HandleID="k8s-pod-network.ca5bf9696ac1d11fb5b850724f36dbfd07329bec7e13049454eeb1e02d9d6576" Workload="localhost-k8s-calico--apiserver--5664ddbb6d--jj75b-eth0" Jan 13 20:46:21.614056 containerd[1563]: 2025-01-13 20:46:21.402 [INFO][5088] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ca5bf9696ac1d11fb5b850724f36dbfd07329bec7e13049454eeb1e02d9d6576" HandleID="k8s-pod-network.ca5bf9696ac1d11fb5b850724f36dbfd07329bec7e13049454eeb1e02d9d6576" Workload="localhost-k8s-calico--apiserver--5664ddbb6d--jj75b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035db40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5664ddbb6d-jj75b", "timestamp":"2025-01-13 20:46:21.359006862 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:46:21.614056 containerd[1563]: 2025-01-13 20:46:21.402 [INFO][5088] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:46:21.614056 containerd[1563]: 2025-01-13 20:46:21.403 [INFO][5088] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:46:21.614056 containerd[1563]: 2025-01-13 20:46:21.403 [INFO][5088] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:46:21.614056 containerd[1563]: 2025-01-13 20:46:21.408 [INFO][5088] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ca5bf9696ac1d11fb5b850724f36dbfd07329bec7e13049454eeb1e02d9d6576" host="localhost" Jan 13 20:46:21.614056 containerd[1563]: 2025-01-13 20:46:21.423 [INFO][5088] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:46:21.614056 containerd[1563]: 2025-01-13 20:46:21.427 [INFO][5088] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:46:21.614056 containerd[1563]: 2025-01-13 20:46:21.429 [INFO][5088] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:21.614056 containerd[1563]: 2025-01-13 20:46:21.431 [INFO][5088] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:21.614056 containerd[1563]: 2025-01-13 20:46:21.431 [INFO][5088] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ca5bf9696ac1d11fb5b850724f36dbfd07329bec7e13049454eeb1e02d9d6576" host="localhost" Jan 13 20:46:21.614056 containerd[1563]: 2025-01-13 20:46:21.432 [INFO][5088] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ca5bf9696ac1d11fb5b850724f36dbfd07329bec7e13049454eeb1e02d9d6576 Jan 13 20:46:21.614056 containerd[1563]: 2025-01-13 20:46:21.474 [INFO][5088] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ca5bf9696ac1d11fb5b850724f36dbfd07329bec7e13049454eeb1e02d9d6576" host="localhost" Jan 13 20:46:21.614056 containerd[1563]: 2025-01-13 20:46:21.501 [INFO][5088] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ca5bf9696ac1d11fb5b850724f36dbfd07329bec7e13049454eeb1e02d9d6576" host="localhost" Jan 13 20:46:21.614056 containerd[1563]: 2025-01-13 20:46:21.501 [INFO][5088] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] 
handle="k8s-pod-network.ca5bf9696ac1d11fb5b850724f36dbfd07329bec7e13049454eeb1e02d9d6576" host="localhost" Jan 13 20:46:21.614056 containerd[1563]: 2025-01-13 20:46:21.501 [INFO][5088] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:46:21.614056 containerd[1563]: 2025-01-13 20:46:21.501 [INFO][5088] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ca5bf9696ac1d11fb5b850724f36dbfd07329bec7e13049454eeb1e02d9d6576" HandleID="k8s-pod-network.ca5bf9696ac1d11fb5b850724f36dbfd07329bec7e13049454eeb1e02d9d6576" Workload="localhost-k8s-calico--apiserver--5664ddbb6d--jj75b-eth0" Jan 13 20:46:21.615013 containerd[1563]: 2025-01-13 20:46:21.507 [INFO][5075] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ca5bf9696ac1d11fb5b850724f36dbfd07329bec7e13049454eeb1e02d9d6576" Namespace="calico-apiserver" Pod="calico-apiserver-5664ddbb6d-jj75b" WorkloadEndpoint="localhost-k8s-calico--apiserver--5664ddbb6d--jj75b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5664ddbb6d--jj75b-eth0", GenerateName:"calico-apiserver-5664ddbb6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"75c72d4a-2713-4d59-9a08-2119aee5935e", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5664ddbb6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5664ddbb6d-jj75b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1c2a9a3727f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:21.615013 containerd[1563]: 2025-01-13 20:46:21.507 [INFO][5075] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ca5bf9696ac1d11fb5b850724f36dbfd07329bec7e13049454eeb1e02d9d6576" Namespace="calico-apiserver" Pod="calico-apiserver-5664ddbb6d-jj75b" WorkloadEndpoint="localhost-k8s-calico--apiserver--5664ddbb6d--jj75b-eth0" Jan 13 20:46:21.615013 containerd[1563]: 2025-01-13 20:46:21.507 [INFO][5075] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c2a9a3727f ContainerID="ca5bf9696ac1d11fb5b850724f36dbfd07329bec7e13049454eeb1e02d9d6576" Namespace="calico-apiserver" Pod="calico-apiserver-5664ddbb6d-jj75b" WorkloadEndpoint="localhost-k8s-calico--apiserver--5664ddbb6d--jj75b-eth0" Jan 13 20:46:21.615013 containerd[1563]: 2025-01-13 20:46:21.517 [INFO][5075] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca5bf9696ac1d11fb5b850724f36dbfd07329bec7e13049454eeb1e02d9d6576" Namespace="calico-apiserver" Pod="calico-apiserver-5664ddbb6d-jj75b" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5664ddbb6d--jj75b-eth0" Jan 13 20:46:21.615013 containerd[1563]: 2025-01-13 20:46:21.517 [INFO][5075] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ca5bf9696ac1d11fb5b850724f36dbfd07329bec7e13049454eeb1e02d9d6576" Namespace="calico-apiserver" Pod="calico-apiserver-5664ddbb6d-jj75b" WorkloadEndpoint="localhost-k8s-calico--apiserver--5664ddbb6d--jj75b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5664ddbb6d--jj75b-eth0", GenerateName:"calico-apiserver-5664ddbb6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"75c72d4a-2713-4d59-9a08-2119aee5935e", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5664ddbb6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca5bf9696ac1d11fb5b850724f36dbfd07329bec7e13049454eeb1e02d9d6576", Pod:"calico-apiserver-5664ddbb6d-jj75b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1c2a9a3727f", MAC:"22:1e:90:4d:76:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:21.615013 containerd[1563]: 2025-01-13 20:46:21.607 [INFO][5075] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ca5bf9696ac1d11fb5b850724f36dbfd07329bec7e13049454eeb1e02d9d6576" Namespace="calico-apiserver" Pod="calico-apiserver-5664ddbb6d-jj75b" WorkloadEndpoint="localhost-k8s-calico--apiserver--5664ddbb6d--jj75b-eth0" Jan 13 20:46:21.701196 containerd[1563]: time="2025-01-13T20:46:21.701032983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:21.701196 containerd[1563]: time="2025-01-13T20:46:21.701101230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:21.701196 containerd[1563]: time="2025-01-13T20:46:21.701122062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:21.701996 containerd[1563]: time="2025-01-13T20:46:21.701923807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:21.735103 systemd-networkd[1243]: calid438e25ddc1: Link UP Jan 13 20:46:21.735743 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:46:21.736341 systemd-networkd[1243]: calid438e25ddc1: Gained carrier Jan 13 20:46:21.760486 containerd[1563]: 2025-01-13 20:46:21.345 [INFO][5093] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--5dksv-eth0 coredns-76f75df574- kube-system 0607ab24-d1bd-4a5b-b65b-ec7237434967 771 0 2025-01-13 20:45:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-5dksv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid438e25ddc1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b5fa38612e394a27cb8a59c8780b56f3a917ce9755a66008bb933ab3a82b6ab0" Namespace="kube-system" Pod="coredns-76f75df574-5dksv" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--5dksv-" Jan 13 20:46:21.760486 containerd[1563]: 2025-01-13 20:46:21.345 [INFO][5093] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b5fa38612e394a27cb8a59c8780b56f3a917ce9755a66008bb933ab3a82b6ab0" Namespace="kube-system" Pod="coredns-76f75df574-5dksv" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--5dksv-eth0" Jan 13 20:46:21.760486 containerd[1563]: 2025-01-13 20:46:21.405 [INFO][5107] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b5fa38612e394a27cb8a59c8780b56f3a917ce9755a66008bb933ab3a82b6ab0" HandleID="k8s-pod-network.b5fa38612e394a27cb8a59c8780b56f3a917ce9755a66008bb933ab3a82b6ab0" Workload="localhost-k8s-coredns--76f75df574--5dksv-eth0" Jan 13 20:46:21.760486 containerd[1563]: 2025-01-13 20:46:21.414 [INFO][5107] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b5fa38612e394a27cb8a59c8780b56f3a917ce9755a66008bb933ab3a82b6ab0" HandleID="k8s-pod-network.b5fa38612e394a27cb8a59c8780b56f3a917ce9755a66008bb933ab3a82b6ab0" Workload="localhost-k8s-coredns--76f75df574--5dksv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df6a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-5dksv", "timestamp":"2025-01-13 20:46:21.405355355 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:46:21.760486 containerd[1563]: 2025-01-13 20:46:21.414 [INFO][5107] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:46:21.760486 containerd[1563]: 2025-01-13 20:46:21.501 [INFO][5107] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
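(Editorial aside: each IPAM pass above checks affinity for the block 192.168.88.128/26 before claiming an address from it. The containment test is easy to reproduce with Go's standard net/netip package; this is an illustration, not Calico's actual code.)

```go
// Check which candidate addresses fall inside the affine block
// 192.168.88.128/26 that the IPAM entries keep referring to.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	for _, s := range []string{"192.168.88.129", "192.168.88.130", "192.168.88.200"} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
	}
	// A /26 spans 64 addresses, .128 through .191 here.
	fmt.Println("block size:", 1<<(32-block.Bits()))
}
```

The first two prints are true (those are the addresses actually claimed in this log) and the third is false, since a /26 rooted at .128 ends at .191.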
Jan 13 20:46:21.760486 containerd[1563]: 2025-01-13 20:46:21.502 [INFO][5107] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:46:21.760486 containerd[1563]: 2025-01-13 20:46:21.568 [INFO][5107] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b5fa38612e394a27cb8a59c8780b56f3a917ce9755a66008bb933ab3a82b6ab0" host="localhost" Jan 13 20:46:21.760486 containerd[1563]: 2025-01-13 20:46:21.619 [INFO][5107] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:46:21.760486 containerd[1563]: 2025-01-13 20:46:21.624 [INFO][5107] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:46:21.760486 containerd[1563]: 2025-01-13 20:46:21.629 [INFO][5107] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:21.760486 containerd[1563]: 2025-01-13 20:46:21.632 [INFO][5107] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:21.760486 containerd[1563]: 2025-01-13 20:46:21.632 [INFO][5107] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b5fa38612e394a27cb8a59c8780b56f3a917ce9755a66008bb933ab3a82b6ab0" host="localhost" Jan 13 20:46:21.760486 containerd[1563]: 2025-01-13 20:46:21.636 [INFO][5107] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b5fa38612e394a27cb8a59c8780b56f3a917ce9755a66008bb933ab3a82b6ab0 Jan 13 20:46:21.760486 containerd[1563]: 2025-01-13 20:46:21.652 [INFO][5107] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b5fa38612e394a27cb8a59c8780b56f3a917ce9755a66008bb933ab3a82b6ab0" host="localhost" Jan 13 20:46:21.760486 containerd[1563]: 2025-01-13 20:46:21.723 [INFO][5107] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b5fa38612e394a27cb8a59c8780b56f3a917ce9755a66008bb933ab3a82b6ab0" host="localhost" Jan 13 20:46:21.760486 containerd[1563]: 2025-01-13 20:46:21.724 [INFO][5107] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b5fa38612e394a27cb8a59c8780b56f3a917ce9755a66008bb933ab3a82b6ab0" host="localhost" Jan 13 20:46:21.760486 containerd[1563]: 2025-01-13 20:46:21.724 [INFO][5107] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
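(Editorial aside: every claim above is bracketed by "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock", which is why the addresses come out strictly in sequence: .129, then .130, and so on. Below is a toy allocator that mimics that serialization; the type and method names are invented for illustration and say nothing about Calico's internals.)

```go
// Toy serialized allocator mirroring the lock-bracketed assignments in
// the log: one caller at a time claims the next free address in a /26.
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type blockAllocator struct {
	mu    sync.Mutex // stands in for the "host-wide IPAM lock"
	block netip.Prefix
	next  netip.Addr
}

func newBlockAllocator(p netip.Prefix) *blockAllocator {
	// Skip the network address itself, as the log's first claim (.129) suggests.
	return &blockAllocator{block: p, next: p.Addr().Next()}
}

func (a *blockAllocator) claim() (netip.Addr, bool) {
	a.mu.Lock()
	defer a.mu.Unlock()
	if !a.block.Contains(a.next) {
		return netip.Addr{}, false // block exhausted
	}
	addr := a.next
	a.next = a.next.Next()
	return addr, true
}

func main() {
	alloc := newBlockAllocator(netip.MustParsePrefix("192.168.88.128/26"))
	for i := 0; i < 4; i++ {
		if addr, ok := alloc.claim(); ok {
			fmt.Println("claimed", addr) // .129, .130, .131, .132
		}
	}
}
```

The mutex plays the role of the host-wide lock: even concurrent callers of claim would observe one assignment at a time, matching the ordering visible in the log.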
Jan 13 20:46:21.760486 containerd[1563]: 2025-01-13 20:46:21.724 [INFO][5107] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b5fa38612e394a27cb8a59c8780b56f3a917ce9755a66008bb933ab3a82b6ab0" HandleID="k8s-pod-network.b5fa38612e394a27cb8a59c8780b56f3a917ce9755a66008bb933ab3a82b6ab0" Workload="localhost-k8s-coredns--76f75df574--5dksv-eth0" Jan 13 20:46:21.761295 containerd[1563]: 2025-01-13 20:46:21.728 [INFO][5093] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b5fa38612e394a27cb8a59c8780b56f3a917ce9755a66008bb933ab3a82b6ab0" Namespace="kube-system" Pod="coredns-76f75df574-5dksv" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--5dksv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--5dksv-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0607ab24-d1bd-4a5b-b65b-ec7237434967", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 45, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-5dksv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid438e25ddc1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:21.761295 containerd[1563]: 2025-01-13 20:46:21.729 [INFO][5093] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b5fa38612e394a27cb8a59c8780b56f3a917ce9755a66008bb933ab3a82b6ab0" Namespace="kube-system" Pod="coredns-76f75df574-5dksv" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--5dksv-eth0" Jan 13 20:46:21.761295 containerd[1563]: 2025-01-13 20:46:21.729 [INFO][5093] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid438e25ddc1 ContainerID="b5fa38612e394a27cb8a59c8780b56f3a917ce9755a66008bb933ab3a82b6ab0" Namespace="kube-system" Pod="coredns-76f75df574-5dksv" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--5dksv-eth0" Jan 13 20:46:21.761295 containerd[1563]: 2025-01-13 20:46:21.736 [INFO][5093] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b5fa38612e394a27cb8a59c8780b56f3a917ce9755a66008bb933ab3a82b6ab0" Namespace="kube-system" Pod="coredns-76f75df574-5dksv" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--5dksv-eth0" Jan 13 20:46:21.761295 containerd[1563]: 2025-01-13 20:46:21.737 
[INFO][5093] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b5fa38612e394a27cb8a59c8780b56f3a917ce9755a66008bb933ab3a82b6ab0" Namespace="kube-system" Pod="coredns-76f75df574-5dksv" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--5dksv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--5dksv-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0607ab24-d1bd-4a5b-b65b-ec7237434967", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 45, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b5fa38612e394a27cb8a59c8780b56f3a917ce9755a66008bb933ab3a82b6ab0", Pod:"coredns-76f75df574-5dksv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid438e25ddc1", MAC:"8a:b1:ce:c4:40:0c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:21.761295 containerd[1563]: 2025-01-13 20:46:21.753 [INFO][5093] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b5fa38612e394a27cb8a59c8780b56f3a917ce9755a66008bb933ab3a82b6ab0" Namespace="kube-system" Pod="coredns-76f75df574-5dksv" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--5dksv-eth0" Jan 13 20:46:21.775877 containerd[1563]: time="2025-01-13T20:46:21.775775505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664ddbb6d-jj75b,Uid:75c72d4a-2713-4d59-9a08-2119aee5935e,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"ca5bf9696ac1d11fb5b850724f36dbfd07329bec7e13049454eeb1e02d9d6576\"" Jan 13 20:46:21.778243 containerd[1563]: time="2025-01-13T20:46:21.778206720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 20:46:21.799913 containerd[1563]: time="2025-01-13T20:46:21.799798014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:21.799913 containerd[1563]: time="2025-01-13T20:46:21.799867424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:21.799913 containerd[1563]: time="2025-01-13T20:46:21.799881743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:21.800088 containerd[1563]: time="2025-01-13T20:46:21.800010863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:21.828676 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:46:21.853861 containerd[1563]: time="2025-01-13T20:46:21.853805264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5dksv,Uid:0607ab24-d1bd-4a5b-b65b-ec7237434967,Namespace:kube-system,Attempt:5,} returns sandbox id \"b5fa38612e394a27cb8a59c8780b56f3a917ce9755a66008bb933ab3a82b6ab0\"" Jan 13 20:46:21.854513 kubelet[2805]: E0113 20:46:21.854492 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:21.856444 containerd[1563]: time="2025-01-13T20:46:21.856418254Z" level=info msg="CreateContainer within sandbox \"b5fa38612e394a27cb8a59c8780b56f3a917ce9755a66008bb933ab3a82b6ab0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:46:21.889444 systemd-networkd[1243]: cali1420473c4ea: Link UP Jan 13 20:46:21.889665 systemd-networkd[1243]: cali1420473c4ea: Gained carrier Jan 13 20:46:21.916845 containerd[1563]: 2025-01-13 20:46:21.476 [INFO][5130] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5664ddbb6d--trmvh-eth0 calico-apiserver-5664ddbb6d- calico-apiserver 79a4d693-fc20-457c-9d67-0b17b1742b23 768 0 2025-01-13 20:45:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5664ddbb6d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5664ddbb6d-trmvh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1420473c4ea [] []}} ContainerID="346c72ae138cde808979a68db083cbb38325a015a054c9575f6d6bf8609c8220" Namespace="calico-apiserver" Pod="calico-apiserver-5664ddbb6d-trmvh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5664ddbb6d--trmvh-" Jan 13 20:46:21.916845 containerd[1563]: 2025-01-13 20:46:21.476 [INFO][5130] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="346c72ae138cde808979a68db083cbb38325a015a054c9575f6d6bf8609c8220" Namespace="calico-apiserver" Pod="calico-apiserver-5664ddbb6d-trmvh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5664ddbb6d--trmvh-eth0" Jan 13 20:46:21.916845 containerd[1563]: 2025-01-13 20:46:21.538 [INFO][5175] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="346c72ae138cde808979a68db083cbb38325a015a054c9575f6d6bf8609c8220" HandleID="k8s-pod-network.346c72ae138cde808979a68db083cbb38325a015a054c9575f6d6bf8609c8220" Workload="localhost-k8s-calico--apiserver--5664ddbb6d--trmvh-eth0" Jan 13 20:46:21.916845 containerd[1563]: 2025-01-13 20:46:21.620 [INFO][5175] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="346c72ae138cde808979a68db083cbb38325a015a054c9575f6d6bf8609c8220" HandleID="k8s-pod-network.346c72ae138cde808979a68db083cbb38325a015a054c9575f6d6bf8609c8220" Workload="localhost-k8s-calico--apiserver--5664ddbb6d--trmvh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000504160), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5664ddbb6d-trmvh", "timestamp":"2025-01-13 20:46:21.538140109 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:46:21.916845 containerd[1563]: 2025-01-13 20:46:21.620 [INFO][5175] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:46:21.916845 containerd[1563]: 2025-01-13 20:46:21.724 [INFO][5175] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:46:21.916845 containerd[1563]: 2025-01-13 20:46:21.724 [INFO][5175] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:46:21.916845 containerd[1563]: 2025-01-13 20:46:21.728 [INFO][5175] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.346c72ae138cde808979a68db083cbb38325a015a054c9575f6d6bf8609c8220" host="localhost" Jan 13 20:46:21.916845 containerd[1563]: 2025-01-13 20:46:21.738 [INFO][5175] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:46:21.916845 containerd[1563]: 2025-01-13 20:46:21.743 [INFO][5175] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:46:21.916845 containerd[1563]: 2025-01-13 20:46:21.753 [INFO][5175] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:21.916845 containerd[1563]: 2025-01-13 20:46:21.756 [INFO][5175] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:21.916845 containerd[1563]: 2025-01-13 20:46:21.756 [INFO][5175] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.346c72ae138cde808979a68db083cbb38325a015a054c9575f6d6bf8609c8220" host="localhost" Jan 13 20:46:21.916845 containerd[1563]: 2025-01-13 20:46:21.758 [INFO][5175] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.346c72ae138cde808979a68db083cbb38325a015a054c9575f6d6bf8609c8220 Jan 13 20:46:21.916845 containerd[1563]: 2025-01-13 20:46:21.787 [INFO][5175] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.346c72ae138cde808979a68db083cbb38325a015a054c9575f6d6bf8609c8220" host="localhost" Jan 13 20:46:21.916845 containerd[1563]: 2025-01-13 20:46:21.884 [INFO][5175] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.346c72ae138cde808979a68db083cbb38325a015a054c9575f6d6bf8609c8220" host="localhost" Jan 13 20:46:21.916845 containerd[1563]: 2025-01-13 20:46:21.884 [INFO][5175] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.346c72ae138cde808979a68db083cbb38325a015a054c9575f6d6bf8609c8220" host="localhost" Jan 13 20:46:21.916845 containerd[1563]: 2025-01-13 20:46:21.884 [INFO][5175] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
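(Editorial aside: the containerd entries in this log are logfmt-style key=value lines, with inner quotes escaped inside msg="...". Here is a quick field extractor under that assumption; real logfmt parsers handle escaping more rigorously than this regex does.)

```go
// Pull key=value and key="quoted value" pairs out of a containerd-style
// logfmt line. The quoted branch tolerates the \" escapes seen above.
package main

import (
	"fmt"
	"regexp"
)

var fieldRe = regexp.MustCompile(`(\w+)=("(?:[^"\\]|\\.)*"|\S+)`)

func main() {
	line := `time="2025-01-13T20:46:22.057122850Z" level=info msg="RunPodSandbox returns sandbox id \"346c72ae...\""`
	for _, m := range fieldRe.FindAllStringSubmatch(line, -1) {
		fmt.Printf("%-6s => %s\n", m[1], m[2])
	}
}
```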
Jan 13 20:46:21.916845 containerd[1563]: 2025-01-13 20:46:21.884 [INFO][5175] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="346c72ae138cde808979a68db083cbb38325a015a054c9575f6d6bf8609c8220" HandleID="k8s-pod-network.346c72ae138cde808979a68db083cbb38325a015a054c9575f6d6bf8609c8220" Workload="localhost-k8s-calico--apiserver--5664ddbb6d--trmvh-eth0" Jan 13 20:46:21.917372 containerd[1563]: 2025-01-13 20:46:21.887 [INFO][5130] cni-plugin/k8s.go 386: Populated endpoint ContainerID="346c72ae138cde808979a68db083cbb38325a015a054c9575f6d6bf8609c8220" Namespace="calico-apiserver" Pod="calico-apiserver-5664ddbb6d-trmvh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5664ddbb6d--trmvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5664ddbb6d--trmvh-eth0", GenerateName:"calico-apiserver-5664ddbb6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"79a4d693-fc20-457c-9d67-0b17b1742b23", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5664ddbb6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5664ddbb6d-trmvh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1420473c4ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:21.917372 containerd[1563]: 2025-01-13 20:46:21.887 [INFO][5130] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="346c72ae138cde808979a68db083cbb38325a015a054c9575f6d6bf8609c8220" Namespace="calico-apiserver" Pod="calico-apiserver-5664ddbb6d-trmvh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5664ddbb6d--trmvh-eth0" Jan 13 20:46:21.917372 containerd[1563]: 2025-01-13 20:46:21.887 [INFO][5130] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1420473c4ea ContainerID="346c72ae138cde808979a68db083cbb38325a015a054c9575f6d6bf8609c8220" Namespace="calico-apiserver" Pod="calico-apiserver-5664ddbb6d-trmvh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5664ddbb6d--trmvh-eth0" Jan 13 20:46:21.917372 containerd[1563]: 2025-01-13 20:46:21.890 [INFO][5130] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="346c72ae138cde808979a68db083cbb38325a015a054c9575f6d6bf8609c8220" Namespace="calico-apiserver" Pod="calico-apiserver-5664ddbb6d-trmvh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5664ddbb6d--trmvh-eth0" Jan 13 20:46:21.917372 containerd[1563]: 2025-01-13 20:46:21.890 [INFO][5130] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="346c72ae138cde808979a68db083cbb38325a015a054c9575f6d6bf8609c8220" Namespace="calico-apiserver" Pod="calico-apiserver-5664ddbb6d-trmvh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5664ddbb6d--trmvh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5664ddbb6d--trmvh-eth0", GenerateName:"calico-apiserver-5664ddbb6d-", Namespace:"calico-apiserver", SelfLink:"", UID:"79a4d693-fc20-457c-9d67-0b17b1742b23", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5664ddbb6d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"346c72ae138cde808979a68db083cbb38325a015a054c9575f6d6bf8609c8220", Pod:"calico-apiserver-5664ddbb6d-trmvh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1420473c4ea", MAC:"5a:94:01:34:13:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:21.917372 containerd[1563]: 2025-01-13 20:46:21.912 [INFO][5130] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="346c72ae138cde808979a68db083cbb38325a015a054c9575f6d6bf8609c8220" Namespace="calico-apiserver" Pod="calico-apiserver-5664ddbb6d-trmvh" WorkloadEndpoint="localhost-k8s-calico--apiserver--5664ddbb6d--trmvh-eth0" Jan 13 20:46:21.971603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1907986892.mount: Deactivated successfully. Jan 13 20:46:21.974771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2160934113.mount: Deactivated successfully. Jan 13 20:46:21.986046 systemd-networkd[1243]: cali594bc42b9a0: Link UP Jan 13 20:46:21.986253 systemd-networkd[1243]: cali594bc42b9a0: Gained carrier Jan 13 20:46:21.997033 containerd[1563]: time="2025-01-13T20:46:21.996934332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:21.997756 containerd[1563]: time="2025-01-13T20:46:21.997687358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:21.997756 containerd[1563]: time="2025-01-13T20:46:21.997734915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:21.997978 containerd[1563]: time="2025-01-13T20:46:21.997916831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:22.030287 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:46:22.057167 containerd[1563]: time="2025-01-13T20:46:22.057122850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5664ddbb6d-trmvh,Uid:79a4d693-fc20-457c-9d67-0b17b1742b23,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"346c72ae138cde808979a68db083cbb38325a015a054c9575f6d6bf8609c8220\"" Jan 13 20:46:22.079494 containerd[1563]: 2025-01-13 20:46:21.610 [INFO][5156] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--2wkvn-eth0 csi-node-driver- calico-system 31e31ef7-4073-4fa6-8a57-6102fb32a16b 607 0 2025-01-13 20:45:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-2wkvn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali594bc42b9a0 [] []}} ContainerID="3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d" Namespace="calico-system" Pod="csi-node-driver-2wkvn" WorkloadEndpoint="localhost-k8s-csi--node--driver--2wkvn-" Jan 13 20:46:22.079494 containerd[1563]: 2025-01-13 20:46:21.610 [INFO][5156] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d" Namespace="calico-system" Pod="csi-node-driver-2wkvn" WorkloadEndpoint="localhost-k8s-csi--node--driver--2wkvn-eth0" Jan 13 20:46:22.079494 containerd[1563]: 2025-01-13 20:46:21.647 [INFO][5193] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d" HandleID="k8s-pod-network.3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d" Workload="localhost-k8s-csi--node--driver--2wkvn-eth0" Jan 13 20:46:22.079494 containerd[1563]: 2025-01-13 20:46:21.656 [INFO][5193] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d" HandleID="k8s-pod-network.3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d" Workload="localhost-k8s-csi--node--driver--2wkvn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005a9e40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-2wkvn", "timestamp":"2025-01-13 20:46:21.647426009 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:46:22.079494 containerd[1563]: 2025-01-13 20:46:21.656 [INFO][5193] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:46:22.079494 containerd[1563]: 2025-01-13 20:46:21.884 [INFO][5193] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
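(Editorial aside: the timestamps embedded in the IPAM Attrs, e.g. "2025-01-13 20:46:21.647426009 +0000 UTC", use Go's default time.Time formatting, so reported durations are easy to re-derive. For instance, the podStartE2EDuration of 29.607809949s in the kubelet entry further above is exactly observedRunningTime minus podCreationTimestamp; a sketch:)

```go
// Re-derive the "podStartE2EDuration" reported by the kubelet entry:
// observedRunningTime minus podCreationTimestamp. Both timestamps are
// copied from the log; the layout matches their "+0000 UTC" suffix.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func main() {
	created, _ := time.Parse(layout, "2025-01-13 20:45:52 +0000 UTC")
	running, _ := time.Parse(layout, "2025-01-13 20:46:21.607809949 +0000 UTC")
	fmt.Println(running.Sub(created)) // 29.607809949s, matching podStartE2EDuration
}
```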
Jan 13 20:46:22.079494 containerd[1563]: 2025-01-13 20:46:21.884 [INFO][5193] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:46:22.079494 containerd[1563]: 2025-01-13 20:46:21.913 [INFO][5193] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d" host="localhost" Jan 13 20:46:22.079494 containerd[1563]: 2025-01-13 20:46:21.924 [INFO][5193] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:46:22.079494 containerd[1563]: 2025-01-13 20:46:21.928 [INFO][5193] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:46:22.079494 containerd[1563]: 2025-01-13 20:46:21.929 [INFO][5193] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:22.079494 containerd[1563]: 2025-01-13 20:46:21.931 [INFO][5193] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:22.079494 containerd[1563]: 2025-01-13 20:46:21.931 [INFO][5193] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d" host="localhost" Jan 13 20:46:22.079494 containerd[1563]: 2025-01-13 20:46:21.932 [INFO][5193] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d Jan 13 20:46:22.079494 containerd[1563]: 2025-01-13 20:46:21.951 [INFO][5193] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d" host="localhost" Jan 13 20:46:22.079494 containerd[1563]: 2025-01-13 20:46:21.976 [INFO][5193] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d" host="localhost" Jan 13 20:46:22.079494 containerd[1563]: 2025-01-13 20:46:21.976 [INFO][5193] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d" host="localhost" Jan 13 20:46:22.079494 containerd[1563]: 2025-01-13 20:46:21.976 [INFO][5193] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
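(Editorial aside: the WorkloadEndpoint names in this log follow the shape <node>-k8s-<pod>-<iface> with every dash inside the pod name doubled, e.g. localhost-k8s-csi--node--driver--2wkvn-eth0. The sketch below reproduces that rule, but the rule itself is inferred purely from the examples here, not from Calico documentation, so treat it as an assumption.)

```go
// Rebuild WorkloadEndpoint names of the shape seen in this log:
//   <node>-k8s-<pod with "-" doubled>-<iface>
// The doubling rule is inferred from examples like
// "localhost-k8s-csi--node--driver--2wkvn-eth0"; treat it as an assumption.
package main

import (
	"fmt"
	"strings"
)

func endpointName(node, pod, iface string) string {
	return fmt.Sprintf("%s-k8s-%s-%s", node, strings.ReplaceAll(pod, "-", "--"), iface)
}

func main() {
	fmt.Println(endpointName("localhost", "csi-node-driver-2wkvn", "eth0"))
	// localhost-k8s-csi--node--driver--2wkvn-eth0
	fmt.Println(endpointName("localhost", "calico-apiserver-5664ddbb6d-jj75b", "eth0"))
	// localhost-k8s-calico--apiserver--5664ddbb6d--jj75b-eth0
}
```

Both printed names match endpoint names appearing in this log verbatim.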
Jan 13 20:46:22.079494 containerd[1563]: 2025-01-13 20:46:21.976 [INFO][5193] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d" HandleID="k8s-pod-network.3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d" Workload="localhost-k8s-csi--node--driver--2wkvn-eth0" Jan 13 20:46:22.080042 containerd[1563]: 2025-01-13 20:46:21.981 [INFO][5156] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d" Namespace="calico-system" Pod="csi-node-driver-2wkvn" WorkloadEndpoint="localhost-k8s-csi--node--driver--2wkvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2wkvn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"31e31ef7-4073-4fa6-8a57-6102fb32a16b", ResourceVersion:"607", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-2wkvn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali594bc42b9a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:22.080042 containerd[1563]: 2025-01-13 20:46:21.981 [INFO][5156] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d" Namespace="calico-system" Pod="csi-node-driver-2wkvn" WorkloadEndpoint="localhost-k8s-csi--node--driver--2wkvn-eth0" Jan 13 20:46:22.080042 containerd[1563]: 2025-01-13 20:46:21.981 [INFO][5156] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali594bc42b9a0 ContainerID="3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d" Namespace="calico-system" Pod="csi-node-driver-2wkvn" WorkloadEndpoint="localhost-k8s-csi--node--driver--2wkvn-eth0" Jan 13 20:46:22.080042 containerd[1563]: 2025-01-13 20:46:21.985 [INFO][5156] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d" Namespace="calico-system" Pod="csi-node-driver-2wkvn" WorkloadEndpoint="localhost-k8s-csi--node--driver--2wkvn-eth0" Jan 13 20:46:22.080042 containerd[1563]: 2025-01-13 20:46:21.985 [INFO][5156] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d" Namespace="calico-system" Pod="csi-node-driver-2wkvn" WorkloadEndpoint="localhost-k8s-csi--node--driver--2wkvn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2wkvn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"31e31ef7-4073-4fa6-8a57-6102fb32a16b", ResourceVersion:"607", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d", Pod:"csi-node-driver-2wkvn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali594bc42b9a0", MAC:"a6:0a:e8:38:a1:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:22.080042 containerd[1563]: 2025-01-13 20:46:22.076 [INFO][5156] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d" Namespace="calico-system" Pod="csi-node-driver-2wkvn" WorkloadEndpoint="localhost-k8s-csi--node--driver--2wkvn-eth0" Jan 13 20:46:22.141554 containerd[1563]: time="2025-01-13T20:46:22.141505670Z" level=info msg="CreateContainer within sandbox \"b5fa38612e394a27cb8a59c8780b56f3a917ce9755a66008bb933ab3a82b6ab0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5ce9b818a98b9d1740903b6415f9c7ad8d1293c8b57f0e1ad8e3422800fb15c2\"" Jan 13 20:46:22.142569 containerd[1563]: time="2025-01-13T20:46:22.142131560Z" level=info msg="StartContainer for \"5ce9b818a98b9d1740903b6415f9c7ad8d1293c8b57f0e1ad8e3422800fb15c2\"" Jan 13 20:46:22.260695 containerd[1563]: time="2025-01-13T20:46:22.260603444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:22.260695 containerd[1563]: time="2025-01-13T20:46:22.260656962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:22.260836 containerd[1563]: time="2025-01-13T20:46:22.260706741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:22.260836 containerd[1563]: time="2025-01-13T20:46:22.260795019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:22.279876 systemd-networkd[1243]: cali992bef884b6: Link UP Jan 13 20:46:22.282069 systemd-networkd[1243]: cali992bef884b6: Gained carrier Jan 13 20:46:22.295260 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:46:22.310135 containerd[1563]: time="2025-01-13T20:46:22.310095257Z" level=info msg="StartContainer for \"5ce9b818a98b9d1740903b6415f9c7ad8d1293c8b57f0e1ad8e3422800fb15c2\" returns successfully" Jan 13 20:46:22.312235 containerd[1563]: time="2025-01-13T20:46:22.312184340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2wkvn,Uid:31e31ef7-4073-4fa6-8a57-6102fb32a16b,Namespace:calico-system,Attempt:4,} returns sandbox id \"3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d\"" Jan 13 20:46:22.323074 containerd[1563]: 2025-01-13 20:46:21.477 [INFO][5115] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--qqtwl-eth0 coredns-76f75df574- kube-system fe5ec2be-450d-4cfd-bd06-ed7ea343b06b 765 0 2025-01-13 20:45:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-qqtwl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali992bef884b6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="90230a11225f9dece1526eef7e3382d2d1ac81d8785ab564f641393052c05ede" Namespace="kube-system" Pod="coredns-76f75df574-qqtwl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qqtwl-" Jan 13 20:46:22.323074 containerd[1563]: 2025-01-13 20:46:21.478 [INFO][5115] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="90230a11225f9dece1526eef7e3382d2d1ac81d8785ab564f641393052c05ede" Namespace="kube-system" Pod="coredns-76f75df574-qqtwl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qqtwl-eth0" Jan 13 20:46:22.323074 containerd[1563]: 2025-01-13 20:46:21.659 [INFO][5190] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90230a11225f9dece1526eef7e3382d2d1ac81d8785ab564f641393052c05ede" HandleID="k8s-pod-network.90230a11225f9dece1526eef7e3382d2d1ac81d8785ab564f641393052c05ede" Workload="localhost-k8s-coredns--76f75df574--qqtwl-eth0" Jan 13 20:46:22.323074 containerd[1563]: 2025-01-13 20:46:21.728 [INFO][5190] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="90230a11225f9dece1526eef7e3382d2d1ac81d8785ab564f641393052c05ede" HandleID="k8s-pod-network.90230a11225f9dece1526eef7e3382d2d1ac81d8785ab564f641393052c05ede" Workload="localhost-k8s-coredns--76f75df574--qqtwl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000630e10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-qqtwl", "timestamp":"2025-01-13 20:46:21.659841212 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:46:22.323074 containerd[1563]: 2025-01-13 20:46:21.728 [INFO][5190] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 13 20:46:22.323074 containerd[1563]: 2025-01-13 20:46:21.976 [INFO][5190] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:46:22.323074 containerd[1563]: 2025-01-13 20:46:21.976 [INFO][5190] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:46:22.323074 containerd[1563]: 2025-01-13 20:46:21.978 [INFO][5190] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.90230a11225f9dece1526eef7e3382d2d1ac81d8785ab564f641393052c05ede" host="localhost" Jan 13 20:46:22.323074 containerd[1563]: 2025-01-13 20:46:21.987 [INFO][5190] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:46:22.323074 containerd[1563]: 2025-01-13 20:46:22.093 [INFO][5190] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:46:22.323074 containerd[1563]: 2025-01-13 20:46:22.097 [INFO][5190] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:22.323074 containerd[1563]: 2025-01-13 20:46:22.101 [INFO][5190] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:22.323074 containerd[1563]: 2025-01-13 20:46:22.101 [INFO][5190] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.90230a11225f9dece1526eef7e3382d2d1ac81d8785ab564f641393052c05ede" host="localhost" Jan 13 20:46:22.323074 containerd[1563]: 2025-01-13 20:46:22.102 [INFO][5190] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.90230a11225f9dece1526eef7e3382d2d1ac81d8785ab564f641393052c05ede Jan 13 20:46:22.323074 containerd[1563]: 2025-01-13 20:46:22.123 [INFO][5190] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.90230a11225f9dece1526eef7e3382d2d1ac81d8785ab564f641393052c05ede" host="localhost" Jan 13 20:46:22.323074 containerd[1563]: 2025-01-13 20:46:22.267 [INFO][5190] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.90230a11225f9dece1526eef7e3382d2d1ac81d8785ab564f641393052c05ede" host="localhost" Jan 13 20:46:22.323074 containerd[1563]: 2025-01-13 20:46:22.267 [INFO][5190] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.90230a11225f9dece1526eef7e3382d2d1ac81d8785ab564f641393052c05ede" host="localhost" Jan 13 20:46:22.323074 containerd[1563]: 2025-01-13 20:46:22.267 [INFO][5190] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
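The IPAM sequence above (acquire the host-wide lock, confirm the block affinity for 192.168.88.128/26, scan the block, claim the next free address) can be modeled with a short stdlib sketch. This is a simplified illustration, not Calico's actual datastore-backed implementation in ipam/ipam.go; the host-wide lock in the real code serializes exactly this scan-and-claim step.

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree models one step of block allocation: scan a /26 block in
// order and return the first address not already assigned. Calico does
// this under the host-wide IPAM lock logged above.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	// Addresses already claimed on this node per the log, .132 being
	// the csi-node-driver pod's address.
	used := map[netip.Addr]bool{}
	for _, s := range []string{"192.168.88.128", "192.168.88.129",
		"192.168.88.130", "192.168.88.131", "192.168.88.132"} {
		used[netip.MustParseAddr(s)] = true
	}
	if a, ok := nextFree(block, used); ok {
		fmt.Println("claimed", a) // 192.168.88.133, matching coredns-76f75df574-qqtwl
	}
}
```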
Jan 13 20:46:22.323074 containerd[1563]: 2025-01-13 20:46:22.267 [INFO][5190] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="90230a11225f9dece1526eef7e3382d2d1ac81d8785ab564f641393052c05ede" HandleID="k8s-pod-network.90230a11225f9dece1526eef7e3382d2d1ac81d8785ab564f641393052c05ede" Workload="localhost-k8s-coredns--76f75df574--qqtwl-eth0" Jan 13 20:46:22.323827 containerd[1563]: 2025-01-13 20:46:22.275 [INFO][5115] cni-plugin/k8s.go 386: Populated endpoint ContainerID="90230a11225f9dece1526eef7e3382d2d1ac81d8785ab564f641393052c05ede" Namespace="kube-system" Pod="coredns-76f75df574-qqtwl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qqtwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--qqtwl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fe5ec2be-450d-4cfd-bd06-ed7ea343b06b", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 45, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-qqtwl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali992bef884b6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:22.323827 containerd[1563]: 2025-01-13 20:46:22.275 [INFO][5115] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="90230a11225f9dece1526eef7e3382d2d1ac81d8785ab564f641393052c05ede" Namespace="kube-system" Pod="coredns-76f75df574-qqtwl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qqtwl-eth0" Jan 13 20:46:22.323827 containerd[1563]: 2025-01-13 20:46:22.275 [INFO][5115] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali992bef884b6 ContainerID="90230a11225f9dece1526eef7e3382d2d1ac81d8785ab564f641393052c05ede" Namespace="kube-system" Pod="coredns-76f75df574-qqtwl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qqtwl-eth0" Jan 13 20:46:22.323827 containerd[1563]: 2025-01-13 20:46:22.279 [INFO][5115] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90230a11225f9dece1526eef7e3382d2d1ac81d8785ab564f641393052c05ede" Namespace="kube-system" Pod="coredns-76f75df574-qqtwl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qqtwl-eth0" Jan 13 20:46:22.323827 containerd[1563]: 2025-01-13 20:46:22.280 
[INFO][5115] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="90230a11225f9dece1526eef7e3382d2d1ac81d8785ab564f641393052c05ede" Namespace="kube-system" Pod="coredns-76f75df574-qqtwl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qqtwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--qqtwl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fe5ec2be-450d-4cfd-bd06-ed7ea343b06b", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 45, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90230a11225f9dece1526eef7e3382d2d1ac81d8785ab564f641393052c05ede", Pod:"coredns-76f75df574-qqtwl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali992bef884b6", MAC:"6e:3c:bc:2e:ef:99", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:22.323827 containerd[1563]: 2025-01-13 20:46:22.319 [INFO][5115] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="90230a11225f9dece1526eef7e3382d2d1ac81d8785ab564f641393052c05ede" Namespace="kube-system" Pod="coredns-76f75df574-qqtwl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--qqtwl-eth0" Jan 13 20:46:22.457819 systemd-networkd[1243]: cali593714311e9: Link UP Jan 13 20:46:22.458429 systemd-networkd[1243]: cali593714311e9: Gained carrier Jan 13 20:46:22.488640 containerd[1563]: 2025-01-13 20:46:21.504 [INFO][5145] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--dfc47ddd6--rhff2-eth0 calico-kube-controllers-dfc47ddd6- calico-system 25c9c48c-a9ce-4e21-b742-444ac830dfed 770 0 2025-01-13 20:45:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:dfc47ddd6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-dfc47ddd6-rhff2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali593714311e9 [] []}} ContainerID="358d2d171bbccb2f652a85a75d5379ff685f27be346870f0afc4d3f9be2094ad" Namespace="calico-system" Pod="calico-kube-controllers-dfc47ddd6-rhff2" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dfc47ddd6--rhff2-" Jan 13 20:46:22.488640 containerd[1563]: 2025-01-13 20:46:21.504 [INFO][5145] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="358d2d171bbccb2f652a85a75d5379ff685f27be346870f0afc4d3f9be2094ad" Namespace="calico-system" Pod="calico-kube-controllers-dfc47ddd6-rhff2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dfc47ddd6--rhff2-eth0" Jan 13 20:46:22.488640 containerd[1563]: 2025-01-13 20:46:21.650 [INFO][5192] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="358d2d171bbccb2f652a85a75d5379ff685f27be346870f0afc4d3f9be2094ad" HandleID="k8s-pod-network.358d2d171bbccb2f652a85a75d5379ff685f27be346870f0afc4d3f9be2094ad" Workload="localhost-k8s-calico--kube--controllers--dfc47ddd6--rhff2-eth0" Jan 13 20:46:22.488640 containerd[1563]: 2025-01-13 20:46:21.730 [INFO][5192] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="358d2d171bbccb2f652a85a75d5379ff685f27be346870f0afc4d3f9be2094ad" HandleID="k8s-pod-network.358d2d171bbccb2f652a85a75d5379ff685f27be346870f0afc4d3f9be2094ad" Workload="localhost-k8s-calico--kube--controllers--dfc47ddd6--rhff2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dcd70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-dfc47ddd6-rhff2", "timestamp":"2025-01-13 20:46:21.650612674 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:46:22.488640 containerd[1563]: 2025-01-13 20:46:21.730 [INFO][5192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:46:22.488640 containerd[1563]: 2025-01-13 20:46:22.267 [INFO][5192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:46:22.488640 containerd[1563]: 2025-01-13 20:46:22.267 [INFO][5192] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:46:22.488640 containerd[1563]: 2025-01-13 20:46:22.271 [INFO][5192] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.358d2d171bbccb2f652a85a75d5379ff685f27be346870f0afc4d3f9be2094ad" host="localhost" Jan 13 20:46:22.488640 containerd[1563]: 2025-01-13 20:46:22.276 [INFO][5192] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:46:22.488640 containerd[1563]: 2025-01-13 20:46:22.286 [INFO][5192] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:46:22.488640 containerd[1563]: 2025-01-13 20:46:22.291 [INFO][5192] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:22.488640 containerd[1563]: 2025-01-13 20:46:22.321 [INFO][5192] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:46:22.488640 containerd[1563]: 2025-01-13 20:46:22.321 [INFO][5192] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.358d2d171bbccb2f652a85a75d5379ff685f27be346870f0afc4d3f9be2094ad" host="localhost" Jan 13 20:46:22.488640 containerd[1563]: 2025-01-13 20:46:22.341 [INFO][5192] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.358d2d171bbccb2f652a85a75d5379ff685f27be346870f0afc4d3f9be2094ad Jan 13 20:46:22.488640 containerd[1563]: 2025-01-13 20:46:22.442 [INFO][5192] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.358d2d171bbccb2f652a85a75d5379ff685f27be346870f0afc4d3f9be2094ad" host="localhost" Jan 13 20:46:22.488640 containerd[1563]: 2025-01-13 20:46:22.451 [INFO][5192] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.358d2d171bbccb2f652a85a75d5379ff685f27be346870f0afc4d3f9be2094ad" host="localhost" Jan 13 20:46:22.488640 containerd[1563]: 2025-01-13 20:46:22.451 [INFO][5192] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.358d2d171bbccb2f652a85a75d5379ff685f27be346870f0afc4d3f9be2094ad" host="localhost" Jan 13 20:46:22.488640 containerd[1563]: 2025-01-13 20:46:22.451 [INFO][5192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
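The WorkloadEndpoint dumps above are Go `%#v`-style output, so port numbers appear in hex (Port:0x35, Port:0x23c1). A trivial decode confirms these are the expected CoreDNS ports:

```go
package main

import "fmt"

func main() {
	// Port values copied from the coredns WorkloadEndpoint dump above.
	for name, p := range map[string]int{"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1} {
		fmt.Println(name, p) // dns 53, dns-tcp 53, metrics 9153
	}
}
```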
Jan 13 20:46:22.488640 containerd[1563]: 2025-01-13 20:46:22.451 [INFO][5192] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="358d2d171bbccb2f652a85a75d5379ff685f27be346870f0afc4d3f9be2094ad" HandleID="k8s-pod-network.358d2d171bbccb2f652a85a75d5379ff685f27be346870f0afc4d3f9be2094ad" Workload="localhost-k8s-calico--kube--controllers--dfc47ddd6--rhff2-eth0" Jan 13 20:46:22.489957 containerd[1563]: 2025-01-13 20:46:22.454 [INFO][5145] cni-plugin/k8s.go 386: Populated endpoint ContainerID="358d2d171bbccb2f652a85a75d5379ff685f27be346870f0afc4d3f9be2094ad" Namespace="calico-system" Pod="calico-kube-controllers-dfc47ddd6-rhff2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dfc47ddd6--rhff2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dfc47ddd6--rhff2-eth0", GenerateName:"calico-kube-controllers-dfc47ddd6-", Namespace:"calico-system", SelfLink:"", UID:"25c9c48c-a9ce-4e21-b742-444ac830dfed", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dfc47ddd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-dfc47ddd6-rhff2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali593714311e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:22.489957 containerd[1563]: 2025-01-13 20:46:22.454 [INFO][5145] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="358d2d171bbccb2f652a85a75d5379ff685f27be346870f0afc4d3f9be2094ad" Namespace="calico-system" Pod="calico-kube-controllers-dfc47ddd6-rhff2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dfc47ddd6--rhff2-eth0" Jan 13 20:46:22.489957 containerd[1563]: 2025-01-13 20:46:22.454 [INFO][5145] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali593714311e9 ContainerID="358d2d171bbccb2f652a85a75d5379ff685f27be346870f0afc4d3f9be2094ad" Namespace="calico-system" Pod="calico-kube-controllers-dfc47ddd6-rhff2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dfc47ddd6--rhff2-eth0" Jan 13 20:46:22.489957 containerd[1563]: 2025-01-13 20:46:22.458 [INFO][5145] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="358d2d171bbccb2f652a85a75d5379ff685f27be346870f0afc4d3f9be2094ad" Namespace="calico-system" Pod="calico-kube-controllers-dfc47ddd6-rhff2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dfc47ddd6--rhff2-eth0" Jan 13 20:46:22.489957 containerd[1563]: 2025-01-13 20:46:22.458 [INFO][5145] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="358d2d171bbccb2f652a85a75d5379ff685f27be346870f0afc4d3f9be2094ad" Namespace="calico-system" Pod="calico-kube-controllers-dfc47ddd6-rhff2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dfc47ddd6--rhff2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dfc47ddd6--rhff2-eth0", GenerateName:"calico-kube-controllers-dfc47ddd6-", Namespace:"calico-system", SelfLink:"", UID:"25c9c48c-a9ce-4e21-b742-444ac830dfed", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dfc47ddd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"358d2d171bbccb2f652a85a75d5379ff685f27be346870f0afc4d3f9be2094ad", Pod:"calico-kube-controllers-dfc47ddd6-rhff2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali593714311e9", MAC:"82:62:18:49:ff:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:46:22.489957 containerd[1563]: 2025-01-13 20:46:22.485 [INFO][5145] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="358d2d171bbccb2f652a85a75d5379ff685f27be346870f0afc4d3f9be2094ad" Namespace="calico-system" Pod="calico-kube-controllers-dfc47ddd6-rhff2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dfc47ddd6--rhff2-eth0" Jan 13 20:46:22.518081 containerd[1563]: time="2025-01-13T20:46:22.517754478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:22.518081 containerd[1563]: time="2025-01-13T20:46:22.517827725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:22.518081 containerd[1563]: time="2025-01-13T20:46:22.517842324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:22.518081 containerd[1563]: time="2025-01-13T20:46:22.517963988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:22.542157 containerd[1563]: time="2025-01-13T20:46:22.540761946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:46:22.542157 containerd[1563]: time="2025-01-13T20:46:22.541944415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:46:22.542157 containerd[1563]: time="2025-01-13T20:46:22.541967361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:22.543071 containerd[1563]: time="2025-01-13T20:46:22.542775808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:46:22.558114 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:46:22.570007 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:46:22.588484 containerd[1563]: time="2025-01-13T20:46:22.588420911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qqtwl,Uid:fe5ec2be-450d-4cfd-bd06-ed7ea343b06b,Namespace:kube-system,Attempt:5,} returns sandbox id \"90230a11225f9dece1526eef7e3382d2d1ac81d8785ab564f641393052c05ede\"" Jan 13 20:46:22.589274 kubelet[2805]: E0113 20:46:22.589247 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:22.592249 containerd[1563]: time="2025-01-13T20:46:22.592182540Z" level=info msg="CreateContainer within sandbox \"90230a11225f9dece1526eef7e3382d2d1ac81d8785ab564f641393052c05ede\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:46:22.602717 containerd[1563]: time="2025-01-13T20:46:22.602668446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dfc47ddd6-rhff2,Uid:25c9c48c-a9ce-4e21-b742-444ac830dfed,Namespace:calico-system,Attempt:5,} returns sandbox id \"358d2d171bbccb2f652a85a75d5379ff685f27be346870f0afc4d3f9be2094ad\"" Jan 13 20:46:22.702841 kubelet[2805]: E0113 20:46:22.702812 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:22.862183 kubelet[2805]: I0113 20:46:22.861420 2805 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-5dksv" podStartSLOduration=38.861362794 podStartE2EDuration="38.861362794s" podCreationTimestamp="2025-01-13 20:45:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:46:22.861101388 +0000 UTC m=+52.579949336" watchObservedRunningTime="2025-01-13 20:46:22.861362794 +0000 UTC m=+52.580210741" Jan 13 20:46:22.887562 containerd[1563]: time="2025-01-13T20:46:22.887518066Z" level=info msg="CreateContainer within sandbox \"90230a11225f9dece1526eef7e3382d2d1ac81d8785ab564f641393052c05ede\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9beed6652a3502b4740b6b1d5f7a8b8a2a9e1e34ed54ade5f914c16342f8f879\"" Jan 13 20:46:22.888050 containerd[1563]: time="2025-01-13T20:46:22.888028564Z" level=info msg="StartContainer for \"9beed6652a3502b4740b6b1d5f7a8b8a2a9e1e34ed54ade5f914c16342f8f879\"" Jan 13 20:46:22.980467 containerd[1563]: time="2025-01-13T20:46:22.980375316Z" level=info msg="StartContainer for \"9beed6652a3502b4740b6b1d5f7a8b8a2a9e1e34ed54ade5f914c16342f8f879\" returns successfully" Jan 13 20:46:22.987534 systemd-networkd[1243]: cali1c2a9a3727f: Gained IPv6LL Jan 13 20:46:23.371557 
systemd-networkd[1243]: cali1420473c4ea: Gained IPv6LL Jan 13 20:46:23.499536 systemd-networkd[1243]: calid438e25ddc1: Gained IPv6LL Jan 13 20:46:23.716358 kubelet[2805]: E0113 20:46:23.716334 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:23.716792 kubelet[2805]: E0113 20:46:23.716458 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:23.800820 kubelet[2805]: I0113 20:46:23.800770 2805 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-qqtwl" podStartSLOduration=39.800719691 podStartE2EDuration="39.800719691s" podCreationTimestamp="2025-01-13 20:45:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:46:23.800368483 +0000 UTC m=+53.519216440" watchObservedRunningTime="2025-01-13 20:46:23.800719691 +0000 UTC m=+53.519567638" Jan 13 20:46:23.947521 systemd-networkd[1243]: cali594bc42b9a0: Gained IPv6LL Jan 13 20:46:24.192900 systemd[1]: Started sshd@13-10.0.0.149:22-10.0.0.1:53220.service - OpenSSH per-connection server daemon (10.0.0.1:53220). Jan 13 20:46:24.268617 systemd-networkd[1243]: cali593714311e9: Gained IPv6LL Jan 13 20:46:24.331534 systemd-networkd[1243]: cali992bef884b6: Gained IPv6LL Jan 13 20:46:24.346900 sshd[5620]: Accepted publickey for core from 10.0.0.1 port 53220 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:46:24.348914 sshd-session[5620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:46:24.352731 systemd-logind[1542]: New session 14 of user core. Jan 13 20:46:24.361774 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:46:24.475509 sshd[5624]: Connection closed by 10.0.0.1 port 53220 Jan 13 20:46:24.475732 sshd-session[5620]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:24.479556 systemd[1]: sshd@13-10.0.0.149:22-10.0.0.1:53220.service: Deactivated successfully. Jan 13 20:46:24.481986 systemd-logind[1542]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:46:24.482000 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:46:24.483146 systemd-logind[1542]: Removed session 14. 
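The "Gained IPv6LL" messages from systemd-networkd record each cali* veth picking up an IPv6 link-local address. One common derivation is EUI-64 from the interface MAC (networkd can also use stable-privacy tokens depending on configuration); a sketch using the MAC a6:0a:e8:38:a1:c1 from the csi-node-driver endpoint dump above:

```go
package main

import (
	"fmt"
	"net"
	"net/netip"
)

// linkLocal derives the EUI-64 IPv6 link-local address from a MAC:
// flip the universal/local bit of the first octet and splice ff:fe
// into the middle of the address.
func linkLocal(mac net.HardwareAddr) netip.Addr {
	var a [16]byte
	a[0], a[1] = 0xfe, 0x80
	a[8] = mac[0] ^ 0x02
	a[9], a[10], a[11] = mac[1], mac[2], 0xff
	a[12], a[13], a[14], a[15] = 0xfe, mac[3], mac[4], mac[5]
	return netip.AddrFrom16(a)
}

func main() {
	mac, err := net.ParseMAC("a6:0a:e8:38:a1:c1") // cali594bc42b9a0's MAC
	if err != nil {
		panic(err)
	}
	fmt.Println(linkLocal(mac)) // fe80::a40a:e8ff:fe38:a1c1
}
```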
Jan 13 20:46:24.718763 kubelet[2805]: E0113 20:46:24.718441 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:24.718763 kubelet[2805]: E0113 20:46:24.718572 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:25.720527 kubelet[2805]: E0113 20:46:25.720487 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:25.943823 containerd[1563]: time="2025-01-13T20:46:25.943747254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:25.953733 containerd[1563]: time="2025-01-13T20:46:25.953676559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 13 20:46:25.963707 containerd[1563]: time="2025-01-13T20:46:25.963668706Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:25.977533 containerd[1563]: time="2025-01-13T20:46:25.977356406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:25.977959 containerd[1563]: time="2025-01-13T20:46:25.977922095Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 4.199473441s" Jan 13 20:46:25.977959 containerd[1563]: time="2025-01-13T20:46:25.977948803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 20:46:25.978569 containerd[1563]: time="2025-01-13T20:46:25.978541030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 20:46:25.980242 containerd[1563]: time="2025-01-13T20:46:25.980201753Z" level=info msg="CreateContainer within sandbox \"ca5bf9696ac1d11fb5b850724f36dbfd07329bec7e13049454eeb1e02d9d6576\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 20:46:26.450102 containerd[1563]: time="2025-01-13T20:46:26.450045613Z" level=info msg="CreateContainer within sandbox \"ca5bf9696ac1d11fb5b850724f36dbfd07329bec7e13049454eeb1e02d9d6576\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c513ecb1f83878d716531c1b13d344982c21da6f01a799cc857ec061293b6e25\"" Jan 13 20:46:26.450595 containerd[1563]: time="2025-01-13T20:46:26.450570302Z" level=info msg="StartContainer for \"c513ecb1f83878d716531c1b13d344982c21da6f01a799cc857ec061293b6e25\"" Jan 13 20:46:26.613031 containerd[1563]: time="2025-01-13T20:46:26.612988807Z" level=info msg="StartContainer for \"c513ecb1f83878d716531c1b13d344982c21da6f01a799cc857ec061293b6e25\" returns successfully" Jan 13 20:46:26.731264 
kubelet[2805]: E0113 20:46:26.731143 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:46:26.808528 containerd[1563]: time="2025-01-13T20:46:26.808477865Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:26.843119 containerd[1563]: time="2025-01-13T20:46:26.843052751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 13 20:46:26.864940 containerd[1563]: time="2025-01-13T20:46:26.845240006Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 866.666189ms" Jan 13 20:46:26.864940 containerd[1563]: time="2025-01-13T20:46:26.845266504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 13 20:46:26.864940 containerd[1563]: time="2025-01-13T20:46:26.845879020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 20:46:26.864940 containerd[1563]: time="2025-01-13T20:46:26.848828329Z" level=info msg="CreateContainer within sandbox \"346c72ae138cde808979a68db083cbb38325a015a054c9575f6d6bf8609c8220\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 20:46:27.732282 kubelet[2805]: I0113 20:46:27.732235 2805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:46:27.816847 containerd[1563]: time="2025-01-13T20:46:27.816799066Z" level=info msg="CreateContainer within sandbox \"346c72ae138cde808979a68db083cbb38325a015a054c9575f6d6bf8609c8220\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"48e5fe5d93b916171c7698c5293df3c750c9b57a1fcf08aef2aaadab4328ad0a\"" Jan 13 20:46:27.817287 containerd[1563]: time="2025-01-13T20:46:27.817264261Z" level=info msg="StartContainer for \"48e5fe5d93b916171c7698c5293df3c750c9b57a1fcf08aef2aaadab4328ad0a\"" Jan 13 20:46:27.962521 containerd[1563]: time="2025-01-13T20:46:27.962481726Z" level=info msg="StartContainer for \"48e5fe5d93b916171c7698c5293df3c750c9b57a1fcf08aef2aaadab4328ad0a\" returns successfully" Jan 13 20:46:28.834922 kubelet[2805]: I0113 20:46:28.834870 2805 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5664ddbb6d-jj75b" podStartSLOduration=32.63382435 podStartE2EDuration="36.8341931s" podCreationTimestamp="2025-01-13 20:45:52 +0000 UTC" firstStartedPulling="2025-01-13 20:46:21.777949602 +0000 UTC m=+51.496797539" lastFinishedPulling="2025-01-13 20:46:25.978318342 +0000 UTC m=+55.697166289" observedRunningTime="2025-01-13 20:46:26.871985318 +0000 UTC m=+56.590833265" watchObservedRunningTime="2025-01-13 20:46:28.8341931 +0000 UTC m=+58.553041047" Jan 13 20:46:28.836206 kubelet[2805]: I0113 20:46:28.836052 2805 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5664ddbb6d-trmvh" podStartSLOduration=32.048475187 podStartE2EDuration="36.836018497s" podCreationTimestamp="2025-01-13 20:45:52 +0000 UTC" 
firstStartedPulling="2025-01-13 20:46:22.058065817 +0000 UTC m=+51.776913764" lastFinishedPulling="2025-01-13 20:46:26.845609137 +0000 UTC m=+56.564457074" observedRunningTime="2025-01-13 20:46:28.834133692 +0000 UTC m=+58.552981649" watchObservedRunningTime="2025-01-13 20:46:28.836018497 +0000 UTC m=+58.554866444" Jan 13 20:46:29.489606 systemd[1]: Started sshd@14-10.0.0.149:22-10.0.0.1:41900.service - OpenSSH per-connection server daemon (10.0.0.1:41900). Jan 13 20:46:29.560698 sshd[5744]: Accepted publickey for core from 10.0.0.1 port 41900 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg Jan 13 20:46:29.598599 sshd-session[5744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:46:29.602942 systemd-logind[1542]: New session 15 of user core. Jan 13 20:46:29.612707 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:46:29.738497 kubelet[2805]: I0113 20:46:29.738448 2805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:46:29.825625 sshd[5748]: Connection closed by 10.0.0.1 port 41900 Jan 13 20:46:29.825969 sshd-session[5744]: pam_unix(sshd:session): session closed for user core Jan 13 20:46:29.830766 systemd[1]: sshd@14-10.0.0.149:22-10.0.0.1:41900.service: Deactivated successfully. Jan 13 20:46:29.833960 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:46:29.834779 systemd-logind[1542]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:46:29.836046 systemd-logind[1542]: Removed session 15. Jan 13 20:46:30.361279 containerd[1563]: time="2025-01-13T20:46:30.361236434Z" level=info msg="StopPodSandbox for \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\"" Jan 13 20:46:30.361787 containerd[1563]: time="2025-01-13T20:46:30.361367432Z" level=info msg="TearDown network for sandbox \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\" successfully" Jan 13 20:46:30.361787 containerd[1563]: time="2025-01-13T20:46:30.361393869Z" level=info msg="StopPodSandbox for \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\" returns successfully" Jan 13 20:46:30.367135 containerd[1563]: time="2025-01-13T20:46:30.367103762Z" level=info msg="RemovePodSandbox for \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\"" Jan 13 20:46:30.376063 containerd[1563]: time="2025-01-13T20:46:30.376010316Z" level=info msg="Forcibly stopping sandbox \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\"" Jan 13 20:46:30.376220 containerd[1563]: time="2025-01-13T20:46:30.376139020Z" level=info msg="TearDown network for sandbox \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\" successfully" Jan 13 20:46:30.382754 containerd[1563]: time="2025-01-13T20:46:30.382719930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:30.418058 containerd[1563]: time="2025-01-13T20:46:30.417973410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 13 20:46:30.447102 containerd[1563]: time="2025-01-13T20:46:30.447043318Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
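The repeated kubelet "Nameserver limits exceeded" errors above come from the glibc resolver limit of three nameservers: the node's resolv.conf lists more, and kubelet's dns.go applies only the first three (here 1.1.1.1 1.0.0.1 8.8.8.8) and logs the rest as omitted. A minimal sketch of that truncation, assuming a hypothetical fourth entry:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// applyNameserverLimit keeps at most `limit` nameserver entries from a
// resolv.conf body, mirroring the truncation kubelet reports above.
func applyNameserverLimit(resolvConf string, limit int) []string {
	var ns []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			ns = append(ns, fields[1])
		}
	}
	if len(ns) > limit {
		ns = ns[:limit] // extra servers are dropped, as the error says
	}
	return ns
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	fmt.Println(applyNameserverLimit(conf, 3)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```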
Jan 13 20:46:30.447102 containerd[1563]: time="2025-01-13T20:46:30.447110089Z" level=info msg="RemovePodSandbox \"236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2\" returns successfully" Jan 13 20:46:30.453553 containerd[1563]: time="2025-01-13T20:46:30.447704997Z" level=info msg="StopPodSandbox for \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\"" Jan 13 20:46:30.453553 containerd[1563]: time="2025-01-13T20:46:30.447866520Z" level=info msg="TearDown network for sandbox \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\" successfully" Jan 13 20:46:30.453553 containerd[1563]: time="2025-01-13T20:46:30.447880094Z" level=info msg="StopPodSandbox for \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\" returns successfully" Jan 13 20:46:30.453553 containerd[1563]: time="2025-01-13T20:46:30.448106684Z" level=info msg="RemovePodSandbox for \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\"" Jan 13 20:46:30.453553 containerd[1563]: time="2025-01-13T20:46:30.448127663Z" level=info msg="Forcibly stopping sandbox \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\"" Jan 13 20:46:30.453553 containerd[1563]: time="2025-01-13T20:46:30.448198791Z" level=info msg="TearDown network for sandbox \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\" successfully" Jan 13 20:46:30.453781 containerd[1563]: time="2025-01-13T20:46:30.453597802Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:31.185418 containerd[1563]: time="2025-01-13T20:46:31.185334706Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
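The pod_startup_latency_tracker entries above compute podStartSLOduration as observed running time minus podCreationTimestamp. Reproducing the arithmetic for coredns-76f75df574-qqtwl with the timestamps from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-01-13 20:45:44 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-01-13 20:46:23.800719691 +0000 UTC")
	// Matches podStartSLOduration=39.800719691 in the kubelet entry.
	fmt.Println(observed.Sub(created)) // 39.800719691s
}
```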
Jan 13 20:46:31.185418 containerd[1563]: time="2025-01-13T20:46:31.185425921Z" level=info msg="RemovePodSandbox \"a3bb36dddc16b8acfcd28d0a04ad3694bf968ae710f5e3b1964efd311f808246\" returns successfully" Jan 13 20:46:31.186312 containerd[1563]: time="2025-01-13T20:46:31.185869938Z" level=info msg="StopPodSandbox for \"b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60\"" Jan 13 20:46:31.186312 containerd[1563]: time="2025-01-13T20:46:31.185991188Z" level=info msg="TearDown network for sandbox \"b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60\" successfully" Jan 13 20:46:31.186312 containerd[1563]: time="2025-01-13T20:46:31.186003270Z" level=info msg="StopPodSandbox for \"b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60\" returns successfully" Jan 13 20:46:31.186593 containerd[1563]: time="2025-01-13T20:46:31.186568888Z" level=info msg="RemovePodSandbox for \"b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60\"" Jan 13 20:46:31.186648 containerd[1563]: time="2025-01-13T20:46:31.186596547Z" level=info msg="Forcibly stopping sandbox \"b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60\"" Jan 13 20:46:31.186717 containerd[1563]: time="2025-01-13T20:46:31.186675871Z" level=info msg="TearDown network for sandbox \"b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60\" successfully" Jan 13 20:46:31.282715 containerd[1563]: time="2025-01-13T20:46:31.282686165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:46:31.290995 containerd[1563]: time="2025-01-13T20:46:31.290963221Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 4.445056229s" Jan 13 20:46:31.290995 containerd[1563]: time="2025-01-13T20:46:31.290992694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 13 20:46:31.291493 containerd[1563]: time="2025-01-13T20:46:31.291475300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 20:46:31.292675 containerd[1563]: time="2025-01-13T20:46:31.292641198Z" level=info msg="CreateContainer within sandbox \"3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 20:46:31.377490 containerd[1563]: time="2025-01-13T20:46:31.377441620Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
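The image pulls logged above also give rough effective transfer rates: the apiserver image read 42001404 bytes in ~4.2s and the csi image 7902632 bytes in ~4.4s, while the second apiserver "pull" read only 77 bytes in ~0.87s because the content was already in the local store and containerd only resolved the digest. A quick back-of-the-envelope check:

```go
package main

import "fmt"

func main() {
	// bytes read / pull duration, from the containerd entries above.
	fmt.Printf("apiserver: %.1f MiB/s\n", 42001404/4.199473441/(1<<20)) // ~9.5
	fmt.Printf("csi:       %.1f MiB/s\n", 7902632/4.445056229/(1<<20))  // ~1.7
}
```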
Jan 13 20:46:31.377999 containerd[1563]: time="2025-01-13T20:46:31.377523209Z" level=info msg="RemovePodSandbox \"b6f5a8c6cb24524dad6a4412c369076396ec3ed76e9b5e009125e164b8b23f60\" returns successfully" Jan 13 20:46:31.378163 containerd[1563]: time="2025-01-13T20:46:31.378112610Z" level=info msg="StopPodSandbox for \"2ae7eebb744563574d319c34b5b4bc8b67a4011d0ffa583e06e321e5ecfab7e5\"" Jan 13 20:46:31.378327 containerd[1563]: time="2025-01-13T20:46:31.378298267Z" level=info msg="TearDown network for sandbox \"2ae7eebb744563574d319c34b5b4bc8b67a4011d0ffa583e06e321e5ecfab7e5\" successfully" Jan 13 20:46:31.378360 containerd[1563]: time="2025-01-13T20:46:31.378319997Z" level=info msg="StopPodSandbox for \"2ae7eebb744563574d319c34b5b4bc8b67a4011d0ffa583e06e321e5ecfab7e5\" returns successfully" Jan 13 20:46:31.378729 containerd[1563]: time="2025-01-13T20:46:31.378687163Z" level=info msg="RemovePodSandbox for \"2ae7eebb744563574d319c34b5b4bc8b67a4011d0ffa583e06e321e5ecfab7e5\"" Jan 13 20:46:31.378729 containerd[1563]: time="2025-01-13T20:46:31.378722067Z" level=info msg="Forcibly stopping sandbox \"2ae7eebb744563574d319c34b5b4bc8b67a4011d0ffa583e06e321e5ecfab7e5\"" Jan 13 20:46:31.378867 containerd[1563]: time="2025-01-13T20:46:31.378814544Z" level=info msg="TearDown network for sandbox \"2ae7eebb744563574d319c34b5b4bc8b67a4011d0ffa583e06e321e5ecfab7e5\" successfully" Jan 13 20:46:31.670811 containerd[1563]: time="2025-01-13T20:46:31.670496689Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2ae7eebb744563574d319c34b5b4bc8b67a4011d0ffa583e06e321e5ecfab7e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:46:31.670811 containerd[1563]: time="2025-01-13T20:46:31.670601500Z" level=info msg="RemovePodSandbox \"2ae7eebb744563574d319c34b5b4bc8b67a4011d0ffa583e06e321e5ecfab7e5\" returns successfully" Jan 13 20:46:31.671525 containerd[1563]: time="2025-01-13T20:46:31.671490915Z" level=info msg="StopPodSandbox for \"2ac965fda89b040d63fde4b741c9e49bdc584b17ba34d488593e501101e37a31\"" Jan 13 20:46:31.671682 containerd[1563]: time="2025-01-13T20:46:31.671642621Z" level=info msg="TearDown network for sandbox \"2ac965fda89b040d63fde4b741c9e49bdc584b17ba34d488593e501101e37a31\" successfully" Jan 13 20:46:31.671682 containerd[1563]: time="2025-01-13T20:46:31.671665543Z" level=info msg="StopPodSandbox for \"2ac965fda89b040d63fde4b741c9e49bdc584b17ba34d488593e501101e37a31\" returns successfully" Jan 13 20:46:31.672083 containerd[1563]: time="2025-01-13T20:46:31.672039933Z" level=info msg="RemovePodSandbox for \"2ac965fda89b040d63fde4b741c9e49bdc584b17ba34d488593e501101e37a31\"" Jan 13 20:46:31.672083 containerd[1563]: time="2025-01-13T20:46:31.672081328Z" level=info msg="Forcibly stopping sandbox \"2ac965fda89b040d63fde4b741c9e49bdc584b17ba34d488593e501101e37a31\"" Jan 13 20:46:31.672310 containerd[1563]: time="2025-01-13T20:46:31.672183294Z" level=info msg="TearDown network for sandbox \"2ac965fda89b040d63fde4b741c9e49bdc584b17ba34d488593e501101e37a31\" successfully" Jan 13 20:46:31.923531 containerd[1563]: time="2025-01-13T20:46:31.923315463Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2ac965fda89b040d63fde4b741c9e49bdc584b17ba34d488593e501101e37a31\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:31.923531 containerd[1563]: time="2025-01-13T20:46:31.923471707Z" level=info msg="RemovePodSandbox \"2ac965fda89b040d63fde4b741c9e49bdc584b17ba34d488593e501101e37a31\" returns successfully" Jan 13 20:46:31.924115 containerd[1563]: time="2025-01-13T20:46:31.924052511Z" level=info msg="StopPodSandbox for \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\"" Jan 13 20:46:31.924216 containerd[1563]: time="2025-01-13T20:46:31.924188318Z" level=info msg="TearDown network for sandbox \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\" successfully" Jan 13 20:46:31.924216 containerd[1563]: time="2025-01-13T20:46:31.924202334Z" level=info msg="StopPodSandbox for \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\" returns successfully" Jan 13 20:46:31.924725 containerd[1563]: time="2025-01-13T20:46:31.924673370Z" level=info msg="RemovePodSandbox for \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\"" Jan 13 20:46:31.924725 containerd[1563]: time="2025-01-13T20:46:31.924718421Z" level=info msg="Forcibly stopping sandbox \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\"" Jan 13 20:46:31.924879 containerd[1563]: time="2025-01-13T20:46:31.924825917Z" level=info msg="TearDown network for sandbox \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\" successfully" Jan 13 20:46:32.099501 containerd[1563]: time="2025-01-13T20:46:32.099446206Z" level=info msg="CreateContainer within sandbox \"3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d42efbd66968abb6e64103deab54a308539e3444d1bcd00d8947f8bd1b54be31\"" Jan 13 20:46:32.100138 containerd[1563]: time="2025-01-13T20:46:32.100086613Z" level=info msg="StartContainer for \"d42efbd66968abb6e64103deab54a308539e3444d1bcd00d8947f8bd1b54be31\"" Jan 13 20:46:32.303201 containerd[1563]: time="2025-01-13T20:46:32.303119231Z" level=info msg="StartContainer for \"d42efbd66968abb6e64103deab54a308539e3444d1bcd00d8947f8bd1b54be31\" returns successfully" Jan 13 20:46:32.582081 containerd[1563]: time="2025-01-13T20:46:32.581934467Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
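The long run of StopPodSandbox, "TearDown network", and RemovePodSandbox entries is sandbox garbage collection clearing out the stale sandboxes left by the earlier retried CNI attempts; the "Failed to get podSandbox status ... not found" warnings are benign, since the metadata is already gone by the time the removal event is emitted. A sketch of that tolerant-removal pattern, using hypothetical stubs rather than containerd's actual CRI API:

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("sandbox not found")

// lookupStatus stands in for the status query that fails in the
// warnings above: the sandbox metadata has already been deleted.
func lookupStatus(id string) error { return errNotFound }

// removeSandbox models the Stop -> TearDown -> Remove sequence from the
// log; a missing status is logged and removal still succeeds.
func removeSandbox(id string) {
	fmt.Printf("StopPodSandbox %q\n", id)
	fmt.Printf("TearDown network for sandbox %q\n", id)
	if err := lookupStatus(id); errors.Is(err, errNotFound) {
		fmt.Println("sending event with nil podSandboxStatus") // the warning path
	}
	fmt.Printf("RemovePodSandbox %q returns successfully\n", id)
}

func main() {
	removeSandbox("236013722548fbfdc27ae93b513cb85dd60e5bfa3ac0da771ddc4b0e8531d9c2")
}
```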
Jan 13 20:46:32.582081 containerd[1563]: time="2025-01-13T20:46:32.582024982Z" level=info msg="RemovePodSandbox \"08a20de7586964962f5432391c706d7667225019cb3ccd18222142eeba12627b\" returns successfully" Jan 13 20:46:32.582622 containerd[1563]: time="2025-01-13T20:46:32.582548926Z" level=info msg="StopPodSandbox for \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\"" Jan 13 20:46:32.582705 containerd[1563]: time="2025-01-13T20:46:32.582673413Z" level=info msg="TearDown network for sandbox \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\" successfully" Jan 13 20:46:32.582705 containerd[1563]: time="2025-01-13T20:46:32.582692447Z" level=info msg="StopPodSandbox for \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\" returns successfully" Jan 13 20:46:32.582956 containerd[1563]: time="2025-01-13T20:46:32.582930772Z" level=info msg="RemovePodSandbox for \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\"" Jan 13 20:46:32.583002 containerd[1563]: time="2025-01-13T20:46:32.582957831Z" level=info msg="Forcibly stopping sandbox \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\"" Jan 13 20:46:32.583076 containerd[1563]: time="2025-01-13T20:46:32.583035061Z" level=info msg="TearDown network for sandbox \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\" successfully" Jan 13 20:46:32.671198 containerd[1563]: time="2025-01-13T20:46:32.671104466Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:46:32.671396 containerd[1563]: time="2025-01-13T20:46:32.671227009Z" level=info msg="RemovePodSandbox \"136ffdf34f98acf25a77f638496b3a81839bcb3bfa40dbf8ffa5446f43a331bf\" returns successfully" Jan 13 20:46:32.671931 containerd[1563]: time="2025-01-13T20:46:32.671886590Z" level=info msg="StopPodSandbox for \"77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4\"" Jan 13 20:46:32.672101 containerd[1563]: time="2025-01-13T20:46:32.672065076Z" level=info msg="TearDown network for sandbox \"77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4\" successfully" Jan 13 20:46:32.672101 containerd[1563]: time="2025-01-13T20:46:32.672089489Z" level=info msg="StopPodSandbox for \"77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4\" returns successfully" Jan 13 20:46:32.672563 containerd[1563]: time="2025-01-13T20:46:32.672517750Z" level=info msg="RemovePodSandbox for \"77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4\"" Jan 13 20:46:32.672563 containerd[1563]: time="2025-01-13T20:46:32.672560067Z" level=info msg="Forcibly stopping sandbox \"77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4\"" Jan 13 20:46:32.672800 containerd[1563]: time="2025-01-13T20:46:32.672659557Z" level=info msg="TearDown network for sandbox \"77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4\" successfully" Jan 13 20:46:32.778112 containerd[1563]: time="2025-01-13T20:46:32.778055699Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:32.778112 containerd[1563]: time="2025-01-13T20:46:32.778125455Z" level=info msg="RemovePodSandbox \"77e186fd2a50e052c3ab4b32348aaf23024527803e380cc2aea506f5fcca4bf4\" returns successfully" Jan 13 20:46:32.778578 containerd[1563]: time="2025-01-13T20:46:32.778539759Z" level=info msg="StopPodSandbox for \"0477bd1d4063ed14d9101c0f0488c742a54bd08b8c80bac5c66e223c49e15bff\"" Jan 13 20:46:32.778692 containerd[1563]: time="2025-01-13T20:46:32.778660951Z" level=info msg="TearDown network for sandbox \"0477bd1d4063ed14d9101c0f0488c742a54bd08b8c80bac5c66e223c49e15bff\" successfully" Jan 13 20:46:32.778692 containerd[1563]: time="2025-01-13T20:46:32.778682470Z" level=info msg="StopPodSandbox for \"0477bd1d4063ed14d9101c0f0488c742a54bd08b8c80bac5c66e223c49e15bff\" returns successfully" Jan 13 20:46:32.778983 containerd[1563]: time="2025-01-13T20:46:32.778949847Z" level=info msg="RemovePodSandbox for \"0477bd1d4063ed14d9101c0f0488c742a54bd08b8c80bac5c66e223c49e15bff\"" Jan 13 20:46:32.778983 containerd[1563]: time="2025-01-13T20:46:32.778979692Z" level=info msg="Forcibly stopping sandbox \"0477bd1d4063ed14d9101c0f0488c742a54bd08b8c80bac5c66e223c49e15bff\"" Jan 13 20:46:32.779129 containerd[1563]: time="2025-01-13T20:46:32.779066980Z" level=info msg="TearDown network for sandbox \"0477bd1d4063ed14d9101c0f0488c742a54bd08b8c80bac5c66e223c49e15bff\" successfully" Jan 13 20:46:32.862901 containerd[1563]: time="2025-01-13T20:46:32.862726112Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0477bd1d4063ed14d9101c0f0488c742a54bd08b8c80bac5c66e223c49e15bff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:46:32.862901 containerd[1563]: time="2025-01-13T20:46:32.862811017Z" level=info msg="RemovePodSandbox \"0477bd1d4063ed14d9101c0f0488c742a54bd08b8c80bac5c66e223c49e15bff\" returns successfully" Jan 13 20:46:32.863360 containerd[1563]: time="2025-01-13T20:46:32.863331485Z" level=info msg="StopPodSandbox for \"cb9c6220f5b41b6479c33d58013e6191f647ac92d9e9c239add284e0ccbb2954\"" Jan 13 20:46:32.863556 containerd[1563]: time="2025-01-13T20:46:32.863459879Z" level=info msg="TearDown network for sandbox \"cb9c6220f5b41b6479c33d58013e6191f647ac92d9e9c239add284e0ccbb2954\" successfully" Jan 13 20:46:32.863556 containerd[1563]: time="2025-01-13T20:46:32.863470738Z" level=info msg="StopPodSandbox for \"cb9c6220f5b41b6479c33d58013e6191f647ac92d9e9c239add284e0ccbb2954\" returns successfully" Jan 13 20:46:32.864434 containerd[1563]: time="2025-01-13T20:46:32.864371579Z" level=info msg="RemovePodSandbox for \"cb9c6220f5b41b6479c33d58013e6191f647ac92d9e9c239add284e0ccbb2954\"" Jan 13 20:46:32.864500 containerd[1563]: time="2025-01-13T20:46:32.864445504Z" level=info msg="Forcibly stopping sandbox \"cb9c6220f5b41b6479c33d58013e6191f647ac92d9e9c239add284e0ccbb2954\"" Jan 13 20:46:32.864634 containerd[1563]: time="2025-01-13T20:46:32.864569420Z" level=info msg="TearDown network for sandbox \"cb9c6220f5b41b6479c33d58013e6191f647ac92d9e9c239add284e0ccbb2954\" successfully" Jan 13 20:46:33.123570 containerd[1563]: time="2025-01-13T20:46:33.123176132Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cb9c6220f5b41b6479c33d58013e6191f647ac92d9e9c239add284e0ccbb2954\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.123570 containerd[1563]: time="2025-01-13T20:46:33.123296541Z" level=info msg="RemovePodSandbox \"cb9c6220f5b41b6479c33d58013e6191f647ac92d9e9c239add284e0ccbb2954\" returns successfully" Jan 13 20:46:33.123883 containerd[1563]: time="2025-01-13T20:46:33.123819747Z" level=info msg="StopPodSandbox for \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\"" Jan 13 20:46:33.124051 containerd[1563]: time="2025-01-13T20:46:33.123983195Z" level=info msg="TearDown network for sandbox \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\" successfully" Jan 13 20:46:33.124051 containerd[1563]: time="2025-01-13T20:46:33.123997662Z" level=info msg="StopPodSandbox for \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\" returns successfully" Jan 13 20:46:33.124367 containerd[1563]: time="2025-01-13T20:46:33.124336511Z" level=info msg="RemovePodSandbox for \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\"" Jan 13 20:46:33.124429 containerd[1563]: time="2025-01-13T20:46:33.124370833Z" level=info msg="Forcibly stopping sandbox \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\"" Jan 13 20:46:33.124540 containerd[1563]: time="2025-01-13T20:46:33.124492375Z" level=info msg="TearDown network for sandbox \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\" successfully" Jan 13 20:46:33.538638 containerd[1563]: time="2025-01-13T20:46:33.538579574Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:46:33.538817 containerd[1563]: time="2025-01-13T20:46:33.538672373Z" level=info msg="RemovePodSandbox \"03062f7247ea2f66e083e6185a7fa4d0c2ed488227e37ec62cdf55668ebed0b6\" returns successfully" Jan 13 20:46:33.539217 containerd[1563]: time="2025-01-13T20:46:33.539181934Z" level=info msg="StopPodSandbox for \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\"" Jan 13 20:46:33.539433 containerd[1563]: time="2025-01-13T20:46:33.539314275Z" level=info msg="TearDown network for sandbox \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\" successfully" Jan 13 20:46:33.539433 containerd[1563]: time="2025-01-13T20:46:33.539328831Z" level=info msg="StopPodSandbox for \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\" returns successfully" Jan 13 20:46:33.539615 containerd[1563]: time="2025-01-13T20:46:33.539592933Z" level=info msg="RemovePodSandbox for \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\"" Jan 13 20:46:33.542395 containerd[1563]: time="2025-01-13T20:46:33.539941961Z" level=info msg="Forcibly stopping sandbox \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\"" Jan 13 20:46:33.542395 containerd[1563]: time="2025-01-13T20:46:33.540088399Z" level=info msg="TearDown network for sandbox \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\" successfully" Jan 13 20:46:33.627970 containerd[1563]: time="2025-01-13T20:46:33.627906044Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:46:33.628488 containerd[1563]: time="2025-01-13T20:46:33.627995888Z" level=info msg="RemovePodSandbox \"62d29f199bed8d3c7a7143b34ff9e4ee23c2802fb81c04cef849a0a78ff6748e\" returns successfully"
Jan 13 20:46:33.628541 containerd[1563]: time="2025-01-13T20:46:33.628507743Z" level=info msg="StopPodSandbox for \"4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63\""
Jan 13 20:46:33.628636 containerd[1563]: time="2025-01-13T20:46:33.628617884Z" level=info msg="TearDown network for sandbox \"4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63\" successfully"
Jan 13 20:46:33.628674 containerd[1563]: time="2025-01-13T20:46:33.628635787Z" level=info msg="StopPodSandbox for \"4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63\" returns successfully"
Jan 13 20:46:33.628885 containerd[1563]: time="2025-01-13T20:46:33.628860718Z" level=info msg="RemovePodSandbox for \"4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63\""
Jan 13 20:46:33.628957 containerd[1563]: time="2025-01-13T20:46:33.628886905Z" level=info msg="Forcibly stopping sandbox \"4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63\""
Jan 13 20:46:33.629029 containerd[1563]: time="2025-01-13T20:46:33.628975297Z" level=info msg="TearDown network for sandbox \"4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63\" successfully"
Jan 13 20:46:34.165777 containerd[1563]: time="2025-01-13T20:46:34.165718997Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:46:34.165950 containerd[1563]: time="2025-01-13T20:46:34.165801618Z" level=info msg="RemovePodSandbox \"4a24692500e30c27e6b5c241a77cece0e57236dd888a9e59c750b11a9e003a63\" returns successfully"
Jan 13 20:46:34.166531 containerd[1563]: time="2025-01-13T20:46:34.166319236Z" level=info msg="StopPodSandbox for \"cf6044a73a4a11e7cba11e9f900528dfdc07c46bd3452f873ebaa75aef7b0f39\""
Jan 13 20:46:34.166531 containerd[1563]: time="2025-01-13T20:46:34.166464663Z" level=info msg="TearDown network for sandbox \"cf6044a73a4a11e7cba11e9f900528dfdc07c46bd3452f873ebaa75aef7b0f39\" successfully"
Jan 13 20:46:34.166531 containerd[1563]: time="2025-01-13T20:46:34.166475542Z" level=info msg="StopPodSandbox for \"cf6044a73a4a11e7cba11e9f900528dfdc07c46bd3452f873ebaa75aef7b0f39\" returns successfully"
Jan 13 20:46:34.166997 containerd[1563]: time="2025-01-13T20:46:34.166960500Z" level=info msg="RemovePodSandbox for \"cf6044a73a4a11e7cba11e9f900528dfdc07c46bd3452f873ebaa75aef7b0f39\""
Jan 13 20:46:34.166997 containerd[1563]: time="2025-01-13T20:46:34.166990094Z" level=info msg="Forcibly stopping sandbox \"cf6044a73a4a11e7cba11e9f900528dfdc07c46bd3452f873ebaa75aef7b0f39\""
Jan 13 20:46:34.167111 containerd[1563]: time="2025-01-13T20:46:34.167086780Z" level=info msg="TearDown network for sandbox \"cf6044a73a4a11e7cba11e9f900528dfdc07c46bd3452f873ebaa75aef7b0f39\" successfully"
Jan 13 20:46:34.400162 containerd[1563]: time="2025-01-13T20:46:34.400079462Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf6044a73a4a11e7cba11e9f900528dfdc07c46bd3452f873ebaa75aef7b0f39\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
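
[Editor's note] The repeating block above (and the blocks that follow) is the kubelet's periodic sandbox garbage collection: for each exited pod sandbox it issues StopPodSandbox and then RemovePodSandbox over the CRI, and containerd logs the network teardown plus a "Forcibly stopping sandbox" pass on the remove path. The "not found" warnings are benign: by the time containerd publishes the container event, the sandbox it refers to has already been deleted. Below is a minimal sketch of the two CRI calls involved, assuming a gRPC connection to containerd's default CRI socket; the client setup and the truncated sandbox ID are illustrative, not taken from this log.

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // containerd's default CRI endpoint; adjust for other runtimes.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        id := "4a24692500e3..." // a sandbox ID as logged (truncated here)

        // StopPodSandbox tears down the sandbox network; containerd logs
        // this as "TearDown network for sandbox ... successfully".
        if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
            log.Fatal(err)
        }
        // RemovePodSandbox deletes the sandbox; once it is gone, the event
        // publisher's status lookup fails with "not found", as warned above.
        if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id}); err != nil {
            log.Fatal(err)
        }
    }
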
Jan 13 20:46:34.400162 containerd[1563]: time="2025-01-13T20:46:34.400147858Z" level=info msg="RemovePodSandbox \"cf6044a73a4a11e7cba11e9f900528dfdc07c46bd3452f873ebaa75aef7b0f39\" returns successfully"
Jan 13 20:46:34.400506 containerd[1563]: time="2025-01-13T20:46:34.400474526Z" level=info msg="StopPodSandbox for \"fcccad5796a3f95d91d76d619b057d5870548397362400a48acccd8e92c373e8\""
Jan 13 20:46:34.400607 containerd[1563]: time="2025-01-13T20:46:34.400587393Z" level=info msg="TearDown network for sandbox \"fcccad5796a3f95d91d76d619b057d5870548397362400a48acccd8e92c373e8\" successfully"
Jan 13 20:46:34.400607 containerd[1563]: time="2025-01-13T20:46:34.400604494Z" level=info msg="StopPodSandbox for \"fcccad5796a3f95d91d76d619b057d5870548397362400a48acccd8e92c373e8\" returns successfully"
Jan 13 20:46:34.400895 containerd[1563]: time="2025-01-13T20:46:34.400872435Z" level=info msg="RemovePodSandbox for \"fcccad5796a3f95d91d76d619b057d5870548397362400a48acccd8e92c373e8\""
Jan 13 20:46:34.400948 containerd[1563]: time="2025-01-13T20:46:34.400901838Z" level=info msg="Forcibly stopping sandbox \"fcccad5796a3f95d91d76d619b057d5870548397362400a48acccd8e92c373e8\""
Jan 13 20:46:34.401053 containerd[1563]: time="2025-01-13T20:46:34.400988927Z" level=info msg="TearDown network for sandbox \"fcccad5796a3f95d91d76d619b057d5870548397362400a48acccd8e92c373e8\" successfully"
Jan 13 20:46:34.656419 containerd[1563]: time="2025-01-13T20:46:34.656315340Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fcccad5796a3f95d91d76d619b057d5870548397362400a48acccd8e92c373e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:46:34.656886 containerd[1563]: time="2025-01-13T20:46:34.656448283Z" level=info msg="RemovePodSandbox \"fcccad5796a3f95d91d76d619b057d5870548397362400a48acccd8e92c373e8\" returns successfully"
Jan 13 20:46:34.656968 containerd[1563]: time="2025-01-13T20:46:34.656938190Z" level=info msg="StopPodSandbox for \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\""
Jan 13 20:46:34.657112 containerd[1563]: time="2025-01-13T20:46:34.657085609Z" level=info msg="TearDown network for sandbox \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\" successfully"
Jan 13 20:46:34.657112 containerd[1563]: time="2025-01-13T20:46:34.657104824Z" level=info msg="StopPodSandbox for \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\" returns successfully"
Jan 13 20:46:34.657557 containerd[1563]: time="2025-01-13T20:46:34.657523921Z" level=info msg="RemovePodSandbox for \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\""
Jan 13 20:46:34.657557 containerd[1563]: time="2025-01-13T20:46:34.657547665Z" level=info msg="Forcibly stopping sandbox \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\""
Jan 13 20:46:34.657720 containerd[1563]: time="2025-01-13T20:46:34.657622382Z" level=info msg="TearDown network for sandbox \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\" successfully"
Jan 13 20:46:34.844722 systemd[1]: Started sshd@15-10.0.0.149:22-10.0.0.1:58014.service - OpenSSH per-connection server daemon (10.0.0.1:58014).
Jan 13 20:46:34.926263 containerd[1563]: time="2025-01-13T20:46:34.926168390Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:46:34.926443 containerd[1563]: time="2025-01-13T20:46:34.926273884Z" level=info msg="RemovePodSandbox \"76b22bc80bdc643ada0654ec5205f3dad503daa46c30555200ad28c37a273dbd\" returns successfully"
Jan 13 20:46:34.926928 containerd[1563]: time="2025-01-13T20:46:34.926869745Z" level=info msg="StopPodSandbox for \"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\""
Jan 13 20:46:34.927131 containerd[1563]: time="2025-01-13T20:46:34.927021222Z" level=info msg="TearDown network for sandbox \"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\" successfully"
Jan 13 20:46:34.927131 containerd[1563]: time="2025-01-13T20:46:34.927038032Z" level=info msg="StopPodSandbox for \"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\" returns successfully"
Jan 13 20:46:34.927293 containerd[1563]: time="2025-01-13T20:46:34.927267002Z" level=info msg="RemovePodSandbox for \"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\""
Jan 13 20:46:34.927341 containerd[1563]: time="2025-01-13T20:46:34.927297358Z" level=info msg="Forcibly stopping sandbox \"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\""
Jan 13 20:46:34.927454 containerd[1563]: time="2025-01-13T20:46:34.927406648Z" level=info msg="TearDown network for sandbox \"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\" successfully"
Jan 13 20:46:35.026819 sshd[5801]: Accepted publickey for core from 10.0.0.1 port 58014 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:46:35.029152 sshd-session[5801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:46:35.034041 systemd-logind[1542]: New session 16 of user core.
Jan 13 20:46:35.045780 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 20:46:35.232870 sshd[5804]: Connection closed by 10.0.0.1 port 58014
Jan 13 20:46:35.233143 sshd-session[5801]: pam_unix(sshd:session): session closed for user core
Jan 13 20:46:35.236970 systemd[1]: sshd@15-10.0.0.149:22-10.0.0.1:58014.service: Deactivated successfully.
Jan 13 20:46:35.239660 systemd-logind[1542]: Session 16 logged out. Waiting for processes to exit.
Jan 13 20:46:35.239713 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 20:46:35.240815 systemd-logind[1542]: Removed session 16.
Jan 13 20:46:35.359565 containerd[1563]: time="2025-01-13T20:46:35.359403172Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:46:35.359565 containerd[1563]: time="2025-01-13T20:46:35.359535034Z" level=info msg="RemovePodSandbox \"607ab5142e9abfc453c9a1dcdc4f90d6fea8baf986d9b263f9f9ca7751d4759c\" returns successfully"
Jan 13 20:46:35.360180 containerd[1563]: time="2025-01-13T20:46:35.360072410Z" level=info msg="StopPodSandbox for \"e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084\""
Jan 13 20:46:35.360249 containerd[1563]: time="2025-01-13T20:46:35.360205735Z" level=info msg="TearDown network for sandbox \"e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084\" successfully"
Jan 13 20:46:35.360249 containerd[1563]: time="2025-01-13T20:46:35.360218007Z" level=info msg="StopPodSandbox for \"e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084\" returns successfully"
Jan 13 20:46:35.360739 containerd[1563]: time="2025-01-13T20:46:35.360670718Z" level=info msg="RemovePodSandbox for \"e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084\""
Jan 13 20:46:35.360739 containerd[1563]: time="2025-01-13T20:46:35.360696696Z" level=info msg="Forcibly stopping sandbox \"e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084\""
Jan 13 20:46:35.360861 containerd[1563]: time="2025-01-13T20:46:35.360772655Z" level=info msg="TearDown network for sandbox \"e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084\" successfully"
Jan 13 20:46:37.030634 containerd[1563]: time="2025-01-13T20:46:37.030571235Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:46:37.031269 containerd[1563]: time="2025-01-13T20:46:37.030652926Z" level=info msg="RemovePodSandbox \"e6860579c3e1480747dd556bf890f9b786a5ab42f0b64a9c49b0dd70d8fc7084\" returns successfully"
Jan 13 20:46:37.031269 containerd[1563]: time="2025-01-13T20:46:37.031157135Z" level=info msg="StopPodSandbox for \"6810cabbf0169a1078e4d75283af2f4af9ef6e567522f6f5ffdd0a5c639a44e0\""
Jan 13 20:46:37.031349 containerd[1563]: time="2025-01-13T20:46:37.031268731Z" level=info msg="TearDown network for sandbox \"6810cabbf0169a1078e4d75283af2f4af9ef6e567522f6f5ffdd0a5c639a44e0\" successfully"
Jan 13 20:46:37.031349 containerd[1563]: time="2025-01-13T20:46:37.031281904Z" level=info msg="StopPodSandbox for \"6810cabbf0169a1078e4d75283af2f4af9ef6e567522f6f5ffdd0a5c639a44e0\" returns successfully"
Jan 13 20:46:37.032622 containerd[1563]: time="2025-01-13T20:46:37.031632411Z" level=info msg="RemovePodSandbox for \"6810cabbf0169a1078e4d75283af2f4af9ef6e567522f6f5ffdd0a5c639a44e0\""
Jan 13 20:46:37.032622 containerd[1563]: time="2025-01-13T20:46:37.031660933Z" level=info msg="Forcibly stopping sandbox \"6810cabbf0169a1078e4d75283af2f4af9ef6e567522f6f5ffdd0a5c639a44e0\""
Jan 13 20:46:37.032622 containerd[1563]: time="2025-01-13T20:46:37.031747442Z" level=info msg="TearDown network for sandbox \"6810cabbf0169a1078e4d75283af2f4af9ef6e567522f6f5ffdd0a5c639a44e0\" successfully"
Jan 13 20:46:37.362147 containerd[1563]: time="2025-01-13T20:46:37.362032882Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6810cabbf0169a1078e4d75283af2f4af9ef6e567522f6f5ffdd0a5c639a44e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:46:37.362507 containerd[1563]: time="2025-01-13T20:46:37.362368391Z" level=info msg="RemovePodSandbox \"6810cabbf0169a1078e4d75283af2f4af9ef6e567522f6f5ffdd0a5c639a44e0\" returns successfully"
Jan 13 20:46:37.362963 containerd[1563]: time="2025-01-13T20:46:37.362935146Z" level=info msg="StopPodSandbox for \"b435372828617d3628cfb6e316b6a4749d81b223ff185af1e969759091cae797\""
Jan 13 20:46:37.363085 containerd[1563]: time="2025-01-13T20:46:37.363036112Z" level=info msg="TearDown network for sandbox \"b435372828617d3628cfb6e316b6a4749d81b223ff185af1e969759091cae797\" successfully"
Jan 13 20:46:37.363085 containerd[1563]: time="2025-01-13T20:46:37.363046631Z" level=info msg="StopPodSandbox for \"b435372828617d3628cfb6e316b6a4749d81b223ff185af1e969759091cae797\" returns successfully"
Jan 13 20:46:37.365207 containerd[1563]: time="2025-01-13T20:46:37.363357884Z" level=info msg="RemovePodSandbox for \"b435372828617d3628cfb6e316b6a4749d81b223ff185af1e969759091cae797\""
Jan 13 20:46:37.365207 containerd[1563]: time="2025-01-13T20:46:37.363428274Z" level=info msg="Forcibly stopping sandbox \"b435372828617d3628cfb6e316b6a4749d81b223ff185af1e969759091cae797\""
Jan 13 20:46:37.365207 containerd[1563]: time="2025-01-13T20:46:37.363515876Z" level=info msg="TearDown network for sandbox \"b435372828617d3628cfb6e316b6a4749d81b223ff185af1e969759091cae797\" successfully"
Jan 13 20:46:37.639171 containerd[1563]: time="2025-01-13T20:46:37.639003596Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b435372828617d3628cfb6e316b6a4749d81b223ff185af1e969759091cae797\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:46:37.639171 containerd[1563]: time="2025-01-13T20:46:37.639089795Z" level=info msg="RemovePodSandbox \"b435372828617d3628cfb6e316b6a4749d81b223ff185af1e969759091cae797\" returns successfully"
Jan 13 20:46:37.639967 containerd[1563]: time="2025-01-13T20:46:37.639941906Z" level=info msg="StopPodSandbox for \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\""
Jan 13 20:46:37.640069 containerd[1563]: time="2025-01-13T20:46:37.640048522Z" level=info msg="TearDown network for sandbox \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\" successfully"
Jan 13 20:46:37.640069 containerd[1563]: time="2025-01-13T20:46:37.640066305Z" level=info msg="StopPodSandbox for \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\" returns successfully"
Jan 13 20:46:37.640467 containerd[1563]: time="2025-01-13T20:46:37.640424355Z" level=info msg="RemovePodSandbox for \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\""
Jan 13 20:46:37.640467 containerd[1563]: time="2025-01-13T20:46:37.640446155Z" level=info msg="Forcibly stopping sandbox \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\""
Jan 13 20:46:37.640553 containerd[1563]: time="2025-01-13T20:46:37.640509902Z" level=info msg="TearDown network for sandbox \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\" successfully"
Jan 13 20:46:37.821469 containerd[1563]: time="2025-01-13T20:46:37.821420193Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:46:37.821642 containerd[1563]: time="2025-01-13T20:46:37.821481987Z" level=info msg="RemovePodSandbox \"8c94e9afd9a5eb7bc548b40c5f6403033ad5abcfd0d0f0bac92ea9bd665d1e2d\" returns successfully"
Jan 13 20:46:37.821943 containerd[1563]: time="2025-01-13T20:46:37.821910066Z" level=info msg="StopPodSandbox for \"4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6\""
Jan 13 20:46:37.822089 containerd[1563]: time="2025-01-13T20:46:37.822064070Z" level=info msg="TearDown network for sandbox \"4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6\" successfully"
Jan 13 20:46:37.822089 containerd[1563]: time="2025-01-13T20:46:37.822082004Z" level=info msg="StopPodSandbox for \"4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6\" returns successfully"
Jan 13 20:46:37.822453 containerd[1563]: time="2025-01-13T20:46:37.822419055Z" level=info msg="RemovePodSandbox for \"4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6\""
Jan 13 20:46:37.822498 containerd[1563]: time="2025-01-13T20:46:37.822452667Z" level=info msg="Forcibly stopping sandbox \"4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6\""
Jan 13 20:46:37.822603 containerd[1563]: time="2025-01-13T20:46:37.822537553Z" level=info msg="TearDown network for sandbox \"4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6\" successfully"
Jan 13 20:46:37.865973 containerd[1563]: time="2025-01-13T20:46:37.865899034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:37.942590 containerd[1563]: time="2025-01-13T20:46:37.942514016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192"
Jan 13 20:46:37.981567 containerd[1563]: time="2025-01-13T20:46:37.981508037Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:46:37.981662 containerd[1563]: time="2025-01-13T20:46:37.981599365Z" level=info msg="RemovePodSandbox \"4cd0fec2bfc0ff46ce2162bcf9c413802267d5d70740ad1ded456b4f6fafbcd6\" returns successfully"
Jan 13 20:46:37.982107 containerd[1563]: time="2025-01-13T20:46:37.982079810Z" level=info msg="StopPodSandbox for \"b3acd73dbbd1ad8b5ae9837060550cd750d587c9bd141b8524d10501474768db\""
Jan 13 20:46:37.982261 containerd[1563]: time="2025-01-13T20:46:37.982197968Z" level=info msg="TearDown network for sandbox \"b3acd73dbbd1ad8b5ae9837060550cd750d587c9bd141b8524d10501474768db\" successfully"
Jan 13 20:46:37.982261 containerd[1563]: time="2025-01-13T20:46:37.982252499Z" level=info msg="StopPodSandbox for \"b3acd73dbbd1ad8b5ae9837060550cd750d587c9bd141b8524d10501474768db\" returns successfully"
Jan 13 20:46:37.982477 containerd[1563]: time="2025-01-13T20:46:37.982457968Z" level=info msg="RemovePodSandbox for \"b3acd73dbbd1ad8b5ae9837060550cd750d587c9bd141b8524d10501474768db\""
Jan 13 20:46:37.982515 containerd[1563]: time="2025-01-13T20:46:37.982481210Z" level=info msg="Forcibly stopping sandbox \"b3acd73dbbd1ad8b5ae9837060550cd750d587c9bd141b8524d10501474768db\""
Jan 13 20:46:37.982570 containerd[1563]: time="2025-01-13T20:46:37.982542152Z" level=info msg="TearDown network for sandbox \"b3acd73dbbd1ad8b5ae9837060550cd750d587c9bd141b8524d10501474768db\" successfully"
Jan 13 20:46:38.014784 containerd[1563]: time="2025-01-13T20:46:38.014741906Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:38.087520 containerd[1563]: time="2025-01-13T20:46:38.087457710Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b3acd73dbbd1ad8b5ae9837060550cd750d587c9bd141b8524d10501474768db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:46:38.087947 containerd[1563]: time="2025-01-13T20:46:38.087559629Z" level=info msg="RemovePodSandbox \"b3acd73dbbd1ad8b5ae9837060550cd750d587c9bd141b8524d10501474768db\" returns successfully"
Jan 13 20:46:38.088073 containerd[1563]: time="2025-01-13T20:46:38.088032663Z" level=info msg="StopPodSandbox for \"9f7d35e2c8f4dc06a81e4f8126194da5fc1d9231f303b8a85e8d9d25ecaafa3d\""
Jan 13 20:46:38.088263 containerd[1563]: time="2025-01-13T20:46:38.088236028Z" level=info msg="TearDown network for sandbox \"9f7d35e2c8f4dc06a81e4f8126194da5fc1d9231f303b8a85e8d9d25ecaafa3d\" successfully"
Jan 13 20:46:38.088263 containerd[1563]: time="2025-01-13T20:46:38.088253100Z" level=info msg="StopPodSandbox for \"9f7d35e2c8f4dc06a81e4f8126194da5fc1d9231f303b8a85e8d9d25ecaafa3d\" returns successfully"
Jan 13 20:46:38.088670 containerd[1563]: time="2025-01-13T20:46:38.088633242Z" level=info msg="RemovePodSandbox for \"9f7d35e2c8f4dc06a81e4f8126194da5fc1d9231f303b8a85e8d9d25ecaafa3d\""
Jan 13 20:46:38.088670 containerd[1563]: time="2025-01-13T20:46:38.088658308Z" level=info msg="Forcibly stopping sandbox \"9f7d35e2c8f4dc06a81e4f8126194da5fc1d9231f303b8a85e8d9d25ecaafa3d\""
Jan 13 20:46:38.088785 containerd[1563]: time="2025-01-13T20:46:38.088731683Z" level=info msg="TearDown network for sandbox \"9f7d35e2c8f4dc06a81e4f8126194da5fc1d9231f303b8a85e8d9d25ecaafa3d\" successfully"
Jan 13 20:46:38.157511 containerd[1563]: time="2025-01-13T20:46:38.157460203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:38.158330 containerd[1563]: time="2025-01-13T20:46:38.158295205Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 6.866663851s"
Jan 13 20:46:38.158330 containerd[1563]: time="2025-01-13T20:46:38.158322636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\""
Jan 13 20:46:38.158863 containerd[1563]: time="2025-01-13T20:46:38.158827889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Jan 13 20:46:38.166156 containerd[1563]: time="2025-01-13T20:46:38.166116602Z" level=info msg="CreateContainer within sandbox \"358d2d171bbccb2f652a85a75d5379ff685f27be346870f0afc4d3f9be2094ad\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jan 13 20:46:38.398309 containerd[1563]: time="2025-01-13T20:46:38.398175741Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9f7d35e2c8f4dc06a81e4f8126194da5fc1d9231f303b8a85e8d9d25ecaafa3d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
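
[Editor's note] Interleaved with the sandbox cleanup above, the kubelet finishes pulling ghcr.io/flatcar/calico/kube-controllers:v3.29.1: the "stop pulling" line reports 34,141,192 compressed bytes fetched, and the "Pulled image" line reports an unpacked size of 35,634,244 bytes over a wall-clock duration of 6.866663851 s, i.e. roughly 4.7 MiB/s on the wire. A trivial check on the logged numbers:

    package main

    import "fmt"

    func main() {
        const bytesRead = 34141192.0 // compressed bytes, from the "stop pulling" line
        const seconds = 6.866663851  // duration, from the "Pulled image" line
        fmt.Printf("%.2f MiB/s\n", bytesRead/seconds/(1<<20)) // ~4.74 MiB/s
    }
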
Jan 13 20:46:38.398309 containerd[1563]: time="2025-01-13T20:46:38.398248505Z" level=info msg="RemovePodSandbox \"9f7d35e2c8f4dc06a81e4f8126194da5fc1d9231f303b8a85e8d9d25ecaafa3d\" returns successfully"
Jan 13 20:46:38.398634 containerd[1563]: time="2025-01-13T20:46:38.398608450Z" level=info msg="StopPodSandbox for \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\""
Jan 13 20:46:38.398780 containerd[1563]: time="2025-01-13T20:46:38.398711320Z" level=info msg="TearDown network for sandbox \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\" successfully"
Jan 13 20:46:38.398780 containerd[1563]: time="2025-01-13T20:46:38.398771121Z" level=info msg="StopPodSandbox for \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\" returns successfully"
Jan 13 20:46:38.399430 containerd[1563]: time="2025-01-13T20:46:38.399283878Z" level=info msg="RemovePodSandbox for \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\""
Jan 13 20:46:38.399430 containerd[1563]: time="2025-01-13T20:46:38.399428675Z" level=info msg="Forcibly stopping sandbox \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\""
Jan 13 20:46:38.399604 containerd[1563]: time="2025-01-13T20:46:38.399520184Z" level=info msg="TearDown network for sandbox \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\" successfully"
Jan 13 20:46:38.627035 containerd[1563]: time="2025-01-13T20:46:38.626949781Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:46:38.627164 containerd[1563]: time="2025-01-13T20:46:38.627041120Z" level=info msg="RemovePodSandbox \"b8ca89599c952e54f00b2cce8d4c554fc011a1124f5becd5cf5769a91e04bcd6\" returns successfully"
Jan 13 20:46:38.627605 containerd[1563]: time="2025-01-13T20:46:38.627575027Z" level=info msg="StopPodSandbox for \"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\""
Jan 13 20:46:38.627715 containerd[1563]: time="2025-01-13T20:46:38.627698394Z" level=info msg="TearDown network for sandbox \"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\" successfully"
Jan 13 20:46:38.627715 containerd[1563]: time="2025-01-13T20:46:38.627710717Z" level=info msg="StopPodSandbox for \"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\" returns successfully"
Jan 13 20:46:38.628079 containerd[1563]: time="2025-01-13T20:46:38.628056115Z" level=info msg="RemovePodSandbox for \"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\""
Jan 13 20:46:38.628118 containerd[1563]: time="2025-01-13T20:46:38.628078827Z" level=info msg="Forcibly stopping sandbox \"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\""
Jan 13 20:46:38.628181 containerd[1563]: time="2025-01-13T20:46:38.628141112Z" level=info msg="TearDown network for sandbox \"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\" successfully"
Jan 13 20:46:39.128332 containerd[1563]: time="2025-01-13T20:46:39.128274725Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:46:39.128803 containerd[1563]: time="2025-01-13T20:46:39.128358310Z" level=info msg="RemovePodSandbox \"bc7a9c92177bd23110d93e97f0ed4bff52259afe9ab0caa9558665f9588be8a5\" returns successfully"
Jan 13 20:46:39.128909 containerd[1563]: time="2025-01-13T20:46:39.128875037Z" level=info msg="StopPodSandbox for \"625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e\""
Jan 13 20:46:39.129076 containerd[1563]: time="2025-01-13T20:46:39.128987104Z" level=info msg="TearDown network for sandbox \"625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e\" successfully"
Jan 13 20:46:39.129076 containerd[1563]: time="2025-01-13T20:46:39.128999748Z" level=info msg="StopPodSandbox for \"625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e\" returns successfully"
Jan 13 20:46:39.129324 containerd[1563]: time="2025-01-13T20:46:39.129298760Z" level=info msg="RemovePodSandbox for \"625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e\""
Jan 13 20:46:39.129324 containerd[1563]: time="2025-01-13T20:46:39.129322084Z" level=info msg="Forcibly stopping sandbox \"625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e\""
Jan 13 20:46:39.129483 containerd[1563]: time="2025-01-13T20:46:39.129432669Z" level=info msg="TearDown network for sandbox \"625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e\" successfully"
Jan 13 20:46:39.363689 containerd[1563]: time="2025-01-13T20:46:39.363643554Z" level=info msg="CreateContainer within sandbox \"358d2d171bbccb2f652a85a75d5379ff685f27be346870f0afc4d3f9be2094ad\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d9221ed7e1e0e63be47d9d965ec95dfbb0fc21762d05fa1f6ce2f87b3cdf3ffd\""
Jan 13 20:46:39.364233 containerd[1563]: time="2025-01-13T20:46:39.364167665Z" level=info msg="StartContainer for \"d9221ed7e1e0e63be47d9d965ec95dfbb0fc21762d05fa1f6ce2f87b3cdf3ffd\""
Jan 13 20:46:39.764812 containerd[1563]: time="2025-01-13T20:46:39.764757172Z" level=info msg="StartContainer for \"d9221ed7e1e0e63be47d9d965ec95dfbb0fc21762d05fa1f6ce2f87b3cdf3ffd\" returns successfully"
Jan 13 20:46:40.248698 systemd[1]: Started sshd@16-10.0.0.149:22-10.0.0.1:58028.service - OpenSSH per-connection server daemon (10.0.0.1:58028).
Jan 13 20:46:40.382598 kubelet[2805]: E0113 20:46:40.382566 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:40.408203 sshd[5883]: Accepted publickey for core from 10.0.0.1 port 58028 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:46:40.410057 sshd-session[5883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:46:40.414197 systemd-logind[1542]: New session 17 of user core.
Jan 13 20:46:40.424653 systemd[1]: Started session-17.scope - Session 17 of User core.
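
[Editor's note] The CreateContainer/StartContainer pair above completes the lifecycle for the calico-kube-controllers container requested earlier: containerd returns the new container ID ("d9221ed7..."), which the kubelet then passes to StartContainer. A sketch of that pair against the same hypothetical CRI client as in the earlier example (the stubbed config deliberately omits fields a real request carries):

    package sketch

    import (
        "context"
        "log"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // createAndStart mirrors the CreateContainer/StartContainer pair in the
    // log; rt is a CRI RuntimeServiceClient as in the previous sketch.
    func createAndStart(ctx context.Context, rt runtimeapi.RuntimeServiceClient) {
        resp, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: "358d2d171bbc...", // sandbox ID as logged (truncated here)
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "calico-kube-controllers", Attempt: 0},
                Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/kube-controllers:v3.29.1"},
            },
            // A real request also carries the full PodSandboxConfig plus
            // mounts and environment; omitted in this sketch.
        })
        if err != nil {
            log.Fatal(err)
        }
        // containerd logs the returned ID ("returns container id ..."),
        // which StartContainer then consumes.
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: resp.ContainerId}); err != nil {
            log.Fatal(err)
        }
    }
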
Jan 13 20:46:40.431452 kubelet[2805]: I0113 20:46:40.429135 2805 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-dfc47ddd6-rhff2" podStartSLOduration=32.874271052 podStartE2EDuration="48.429085377s" podCreationTimestamp="2025-01-13 20:45:52 +0000 UTC" firstStartedPulling="2025-01-13 20:46:22.603835344 +0000 UTC m=+52.322683291" lastFinishedPulling="2025-01-13 20:46:38.158649669 +0000 UTC m=+67.877497616" observedRunningTime="2025-01-13 20:46:40.276866967 +0000 UTC m=+69.995714924" watchObservedRunningTime="2025-01-13 20:46:40.429085377 +0000 UTC m=+70.147933334"
Jan 13 20:46:40.759083 containerd[1563]: time="2025-01-13T20:46:40.759019646Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:46:40.818489 containerd[1563]: time="2025-01-13T20:46:40.759102870Z" level=info msg="RemovePodSandbox \"625131f8040e74c410d7a0f8e56d3386f29ad176e26a1f81970aea8b0109518e\" returns successfully"
Jan 13 20:46:40.818489 containerd[1563]: time="2025-01-13T20:46:40.759610843Z" level=info msg="StopPodSandbox for \"3e41efa58f020ae5f8d789bef1a4010a9b389b6ec8fba88939ba3ae7dd54a479\""
Jan 13 20:46:40.818489 containerd[1563]: time="2025-01-13T20:46:40.759731596Z" level=info msg="TearDown network for sandbox \"3e41efa58f020ae5f8d789bef1a4010a9b389b6ec8fba88939ba3ae7dd54a479\" successfully"
Jan 13 20:46:40.818489 containerd[1563]: time="2025-01-13T20:46:40.759744741Z" level=info msg="StopPodSandbox for \"3e41efa58f020ae5f8d789bef1a4010a9b389b6ec8fba88939ba3ae7dd54a479\" returns successfully"
Jan 13 20:46:40.818489 containerd[1563]: time="2025-01-13T20:46:40.759980548Z" level=info msg="RemovePodSandbox for \"3e41efa58f020ae5f8d789bef1a4010a9b389b6ec8fba88939ba3ae7dd54a479\""
Jan 13 20:46:40.818489 containerd[1563]: time="2025-01-13T20:46:40.760002730Z" level=info msg="Forcibly stopping sandbox \"3e41efa58f020ae5f8d789bef1a4010a9b389b6ec8fba88939ba3ae7dd54a479\""
Jan 13 20:46:40.818489 containerd[1563]: time="2025-01-13T20:46:40.760098397Z" level=info msg="TearDown network for sandbox \"3e41efa58f020ae5f8d789bef1a4010a9b389b6ec8fba88939ba3ae7dd54a479\" successfully"
Jan 13 20:46:40.875359 sshd[5886]: Connection closed by 10.0.0.1 port 58028
Jan 13 20:46:40.875727 sshd-session[5883]: pam_unix(sshd:session): session closed for user core
Jan 13 20:46:40.879130 systemd[1]: sshd@16-10.0.0.149:22-10.0.0.1:58028.service: Deactivated successfully.
Jan 13 20:46:40.881291 systemd-logind[1542]: Session 17 logged out. Waiting for processes to exit.
Jan 13 20:46:40.881393 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 20:46:40.882364 systemd-logind[1542]: Removed session 17.
Jan 13 20:46:41.418362 containerd[1563]: time="2025-01-13T20:46:41.418289494Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e41efa58f020ae5f8d789bef1a4010a9b389b6ec8fba88939ba3ae7dd54a479\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
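
[Editor's note] The pod_startup_latency_tracker numbers above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (20:46:40.429085377 − 20:45:52 = 48.429085377 s), and podStartSLOduration is that figure minus the image-pull window lastFinishedPulling − firstStartedPulling (≈ 15.554814 s), giving 32.874271 s; in other words, the SLO metric excludes time spent pulling images. A quick check with the timestamps copied from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-01-13 20:45:52 +0000 UTC")
        running := parse("2025-01-13 20:46:40.429085377 +0000 UTC") // watchObservedRunningTime
        pullStart := parse("2025-01-13 20:46:22.603835344 +0000 UTC")
        pullEnd := parse("2025-01-13 20:46:38.158649669 +0000 UTC")

        e2e := running.Sub(created)         // podStartE2EDuration
        slo := e2e - pullEnd.Sub(pullStart) // E2E minus the image-pull window
        fmt.Println(e2e, slo)               // 48.429085377s 32.874271052s
    }
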
Jan 13 20:46:41.418562 containerd[1563]: time="2025-01-13T20:46:41.418417913Z" level=info msg="RemovePodSandbox \"3e41efa58f020ae5f8d789bef1a4010a9b389b6ec8fba88939ba3ae7dd54a479\" returns successfully"
Jan 13 20:46:41.419036 containerd[1563]: time="2025-01-13T20:46:41.418997460Z" level=info msg="StopPodSandbox for \"caf9370ad8c7c3ee841dc5c6936d37d70447da60f45df554c0fcf85e7e9795b4\""
Jan 13 20:46:41.419176 containerd[1563]: time="2025-01-13T20:46:41.419139173Z" level=info msg="TearDown network for sandbox \"caf9370ad8c7c3ee841dc5c6936d37d70447da60f45df554c0fcf85e7e9795b4\" successfully"
Jan 13 20:46:41.419176 containerd[1563]: time="2025-01-13T20:46:41.419154772Z" level=info msg="StopPodSandbox for \"caf9370ad8c7c3ee841dc5c6936d37d70447da60f45df554c0fcf85e7e9795b4\" returns successfully"
Jan 13 20:46:41.419603 containerd[1563]: time="2025-01-13T20:46:41.419562920Z" level=info msg="RemovePodSandbox for \"caf9370ad8c7c3ee841dc5c6936d37d70447da60f45df554c0fcf85e7e9795b4\""
Jan 13 20:46:41.419677 containerd[1563]: time="2025-01-13T20:46:41.419612192Z" level=info msg="Forcibly stopping sandbox \"caf9370ad8c7c3ee841dc5c6936d37d70447da60f45df554c0fcf85e7e9795b4\""
Jan 13 20:46:41.419764 containerd[1563]: time="2025-01-13T20:46:41.419717888Z" level=info msg="TearDown network for sandbox \"caf9370ad8c7c3ee841dc5c6936d37d70447da60f45df554c0fcf85e7e9795b4\" successfully"
Jan 13 20:46:42.180668 containerd[1563]: time="2025-01-13T20:46:42.180578629Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"caf9370ad8c7c3ee841dc5c6936d37d70447da60f45df554c0fcf85e7e9795b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:46:42.180668 containerd[1563]: time="2025-01-13T20:46:42.180672873Z" level=info msg="RemovePodSandbox \"caf9370ad8c7c3ee841dc5c6936d37d70447da60f45df554c0fcf85e7e9795b4\" returns successfully"
Jan 13 20:46:45.893610 systemd[1]: Started sshd@17-10.0.0.149:22-10.0.0.1:51678.service - OpenSSH per-connection server daemon (10.0.0.1:51678).
Jan 13 20:46:45.943517 sshd[5907]: Accepted publickey for core from 10.0.0.1 port 51678 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:46:45.945600 sshd-session[5907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:46:45.950013 systemd-logind[1542]: New session 18 of user core.
Jan 13 20:46:45.957680 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 20:46:46.172529 sshd[5910]: Connection closed by 10.0.0.1 port 51678
Jan 13 20:46:46.172987 sshd-session[5907]: pam_unix(sshd:session): session closed for user core
Jan 13 20:46:46.179052 systemd[1]: sshd@17-10.0.0.149:22-10.0.0.1:51678.service: Deactivated successfully.
Jan 13 20:46:46.181438 systemd-logind[1542]: Session 18 logged out. Waiting for processes to exit.
Jan 13 20:46:46.181460 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 20:46:46.182426 systemd-logind[1542]: Removed session 18.
Jan 13 20:46:46.382041 kubelet[2805]: E0113 20:46:46.382004 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:46.463729 kubelet[2805]: E0113 20:46:46.463698 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:46:47.312828 containerd[1563]: time="2025-01-13T20:46:47.312776647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:47.396803 containerd[1563]: time="2025-01-13T20:46:47.396683235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Jan 13 20:46:47.514477 containerd[1563]: time="2025-01-13T20:46:47.514356488Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:47.646168 containerd[1563]: time="2025-01-13T20:46:47.646016023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:46:47.646891 containerd[1563]: time="2025-01-13T20:46:47.646866069Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 9.488007753s"
Jan 13 20:46:47.646930 containerd[1563]: time="2025-01-13T20:46:47.646894712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Jan 13 20:46:47.648818 containerd[1563]: time="2025-01-13T20:46:47.648779733Z" level=info msg="CreateContainer within sandbox \"3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jan 13 20:46:48.237564 containerd[1563]: time="2025-01-13T20:46:48.237340776Z" level=info msg="CreateContainer within sandbox \"3f45257b6310eea17656a1247fe49b1f6f87b37f681813aea7b085b901821d9d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"55c14bc18e7e6625ec34984f2b2d1ab4127a1ab7f3bb16592a60fb8489434322\""
Jan 13 20:46:48.239440 containerd[1563]: time="2025-01-13T20:46:48.238308255Z" level=info msg="StartContainer for \"55c14bc18e7e6625ec34984f2b2d1ab4127a1ab7f3bb16592a60fb8489434322\""
Jan 13 20:46:48.445617 containerd[1563]: time="2025-01-13T20:46:48.445552842Z" level=info msg="StartContainer for \"55c14bc18e7e6625ec34984f2b2d1ab4127a1ab7f3bb16592a60fb8489434322\" returns successfully"
Jan 13 20:46:48.495158 kubelet[2805]: I0113 20:46:48.495055 2805 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jan 13 20:46:48.498826 kubelet[2805]: I0113 20:46:48.498799 2805 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jan 13 20:46:49.004963 kubelet[2805]: I0113 20:46:49.004914 2805 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-2wkvn" podStartSLOduration=31.670820382 podStartE2EDuration="57.004860479s" podCreationTimestamp="2025-01-13 20:45:52 +0000 UTC" firstStartedPulling="2025-01-13 20:46:22.313199532 +0000 UTC m=+52.032047480" lastFinishedPulling="2025-01-13 20:46:47.64723962 +0000 UTC m=+77.366087577" observedRunningTime="2025-01-13 20:46:49.004526981 +0000 UTC m=+78.723374948" watchObservedRunningTime="2025-01-13 20:46:49.004860479 +0000 UTC m=+78.723708426"
Jan 13 20:46:50.117874 kubelet[2805]: I0113 20:46:50.117831 2805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 20:46:51.188611 systemd[1]: Started sshd@18-10.0.0.149:22-10.0.0.1:51682.service - OpenSSH per-connection server daemon (10.0.0.1:51682).
Jan 13 20:46:51.224169 sshd[5986]: Accepted publickey for core from 10.0.0.1 port 51682 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:46:51.225956 sshd-session[5986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:46:51.230136 systemd-logind[1542]: New session 19 of user core.
Jan 13 20:46:51.240629 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 20:46:51.396715 sshd[5989]: Connection closed by 10.0.0.1 port 51682
Jan 13 20:46:51.397054 sshd-session[5986]: pam_unix(sshd:session): session closed for user core
Jan 13 20:46:51.403167 systemd[1]: sshd@18-10.0.0.149:22-10.0.0.1:51682.service: Deactivated successfully.
Jan 13 20:46:51.405806 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 20:46:51.406300 systemd-logind[1542]: Session 19 logged out. Waiting for processes to exit.
Jan 13 20:46:51.407700 systemd-logind[1542]: Removed session 19.
Jan 13 20:46:51.841936 kubelet[2805]: I0113 20:46:51.841749 2805 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 20:46:56.413589 systemd[1]: Started sshd@19-10.0.0.149:22-10.0.0.1:42090.service - OpenSSH per-connection server daemon (10.0.0.1:42090).
Jan 13 20:46:56.444958 sshd[6004]: Accepted publickey for core from 10.0.0.1 port 42090 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:46:56.446498 sshd-session[6004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:46:56.450262 systemd-logind[1542]: New session 20 of user core.
Jan 13 20:46:56.457636 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 20:46:56.577942 sshd[6007]: Connection closed by 10.0.0.1 port 42090
Jan 13 20:46:56.578403 sshd-session[6004]: pam_unix(sshd:session): session closed for user core
Jan 13 20:46:56.584751 systemd[1]: Started sshd@20-10.0.0.149:22-10.0.0.1:42096.service - OpenSSH per-connection server daemon (10.0.0.1:42096).
Jan 13 20:46:56.585747 systemd[1]: sshd@19-10.0.0.149:22-10.0.0.1:42090.service: Deactivated successfully.
Jan 13 20:46:56.592904 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 20:46:56.596404 systemd-logind[1542]: Session 20 logged out. Waiting for processes to exit.
Jan 13 20:46:56.597570 systemd-logind[1542]: Removed session 20.
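
[Editor's note] The recurring dns.go:153 "Nameserver limits exceeded" errors (20:46:40, 20:46:46, and later) indicate that the node's resolv.conf lists more nameservers than the conventional resolver limit of three, so the kubelet keeps only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) when composing pod DNS configuration; the error repeats because it is logged on each pod sync. An illustrative trim under that assumed limit (a sketch of the behavior, not the kubelet's actual code):

    package main

    import "fmt"

    // maxNameservers is the classic resolv.conf limit of three that the
    // kubelet enforces when building pod DNS config (assumption stated in
    // the note above).
    const maxNameservers = 3

    func trimNameservers(ns []string) []string {
        if len(ns) <= maxNameservers {
            return ns
        }
        fmt.Printf("Nameserver limits exceeded, keeping %v\n", ns[:maxNameservers])
        return ns[:maxNameservers]
    }

    func main() {
        // Hypothetical node resolv.conf with a fourth server; the log shows
        // 1.1.1.1 1.0.0.1 8.8.8.8 surviving the trim.
        fmt.Println(trimNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}))
    }
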
Jan 13 20:46:56.628324 sshd[6016]: Accepted publickey for core from 10.0.0.1 port 42096 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:46:56.630110 sshd-session[6016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:46:56.634858 systemd-logind[1542]: New session 21 of user core.
Jan 13 20:46:56.643949 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 13 20:46:56.839282 sshd[6022]: Connection closed by 10.0.0.1 port 42096
Jan 13 20:46:56.839764 sshd-session[6016]: pam_unix(sshd:session): session closed for user core
Jan 13 20:46:56.847651 systemd[1]: Started sshd@21-10.0.0.149:22-10.0.0.1:42100.service - OpenSSH per-connection server daemon (10.0.0.1:42100).
Jan 13 20:46:56.848138 systemd[1]: sshd@20-10.0.0.149:22-10.0.0.1:42096.service: Deactivated successfully.
Jan 13 20:46:56.851238 systemd-logind[1542]: Session 21 logged out. Waiting for processes to exit.
Jan 13 20:46:56.852178 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 20:46:56.853546 systemd-logind[1542]: Removed session 21.
Jan 13 20:46:56.879511 sshd[6029]: Accepted publickey for core from 10.0.0.1 port 42100 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:46:56.880906 sshd-session[6029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:46:56.884993 systemd-logind[1542]: New session 22 of user core.
Jan 13 20:46:56.892738 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 13 20:46:58.872789 sshd[6035]: Connection closed by 10.0.0.1 port 42100
Jan 13 20:46:58.873726 sshd-session[6029]: pam_unix(sshd:session): session closed for user core
Jan 13 20:46:58.881641 systemd[1]: Started sshd@22-10.0.0.149:22-10.0.0.1:42102.service - OpenSSH per-connection server daemon (10.0.0.1:42102).
Jan 13 20:46:58.882133 systemd[1]: sshd@21-10.0.0.149:22-10.0.0.1:42100.service: Deactivated successfully.
Jan 13 20:46:58.884797 systemd-logind[1542]: Session 22 logged out. Waiting for processes to exit.
Jan 13 20:46:58.886270 systemd[1]: session-22.scope: Deactivated successfully.
Jan 13 20:46:58.887347 systemd-logind[1542]: Removed session 22.
Jan 13 20:46:58.923135 sshd[6050]: Accepted publickey for core from 10.0.0.1 port 42102 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:46:58.925061 sshd-session[6050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:46:58.929649 systemd-logind[1542]: New session 23 of user core.
Jan 13 20:46:58.939948 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 13 20:46:59.225230 sshd[6056]: Connection closed by 10.0.0.1 port 42102
Jan 13 20:46:59.226518 sshd-session[6050]: pam_unix(sshd:session): session closed for user core
Jan 13 20:46:59.240729 systemd[1]: Started sshd@23-10.0.0.149:22-10.0.0.1:42112.service - OpenSSH per-connection server daemon (10.0.0.1:42112).
Jan 13 20:46:59.241219 systemd[1]: sshd@22-10.0.0.149:22-10.0.0.1:42102.service: Deactivated successfully.
Jan 13 20:46:59.244304 systemd-logind[1542]: Session 23 logged out. Waiting for processes to exit.
Jan 13 20:46:59.245192 systemd[1]: session-23.scope: Deactivated successfully.
Jan 13 20:46:59.246434 systemd-logind[1542]: Removed session 23.
Jan 13 20:46:59.273661 sshd[6063]: Accepted publickey for core from 10.0.0.1 port 42112 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:46:59.275220 sshd-session[6063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:46:59.280141 systemd-logind[1542]: New session 24 of user core.
Jan 13 20:46:59.285667 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 13 20:46:59.413015 sshd[6069]: Connection closed by 10.0.0.1 port 42112
Jan 13 20:46:59.413374 sshd-session[6063]: pam_unix(sshd:session): session closed for user core
Jan 13 20:46:59.417589 systemd[1]: sshd@23-10.0.0.149:22-10.0.0.1:42112.service: Deactivated successfully.
Jan 13 20:46:59.420152 systemd[1]: session-24.scope: Deactivated successfully.
Jan 13 20:46:59.420159 systemd-logind[1542]: Session 24 logged out. Waiting for processes to exit.
Jan 13 20:46:59.421512 systemd-logind[1542]: Removed session 24.
Jan 13 20:47:01.382820 kubelet[2805]: E0113 20:47:01.382772 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:47:02.382758 kubelet[2805]: E0113 20:47:02.382715 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:47:04.429680 systemd[1]: Started sshd@24-10.0.0.149:22-10.0.0.1:42114.service - OpenSSH per-connection server daemon (10.0.0.1:42114).
Jan 13 20:47:04.521505 sshd[6089]: Accepted publickey for core from 10.0.0.1 port 42114 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:47:04.523105 sshd-session[6089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:04.527406 systemd-logind[1542]: New session 25 of user core.
Jan 13 20:47:04.533626 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 13 20:47:04.644539 sshd[6092]: Connection closed by 10.0.0.1 port 42114
Jan 13 20:47:04.644929 sshd-session[6089]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:04.650476 systemd[1]: sshd@24-10.0.0.149:22-10.0.0.1:42114.service: Deactivated successfully.
Jan 13 20:47:04.653219 systemd-logind[1542]: Session 25 logged out. Waiting for processes to exit.
Jan 13 20:47:04.653328 systemd[1]: session-25.scope: Deactivated successfully.
Jan 13 20:47:04.654260 systemd-logind[1542]: Removed session 25.
Jan 13 20:47:07.598046 systemd[1]: run-containerd-runc-k8s.io-d9221ed7e1e0e63be47d9d965ec95dfbb0fc21762d05fa1f6ce2f87b3cdf3ffd-runc.OXzEC7.mount: Deactivated successfully.
Jan 13 20:47:09.655634 systemd[1]: Started sshd@25-10.0.0.149:22-10.0.0.1:41274.service - OpenSSH per-connection server daemon (10.0.0.1:41274).
Jan 13 20:47:09.686827 sshd[6124]: Accepted publickey for core from 10.0.0.1 port 41274 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:47:09.688185 sshd-session[6124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:09.692240 systemd-logind[1542]: New session 26 of user core.
Jan 13 20:47:09.699644 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 13 20:47:09.823085 sshd[6127]: Connection closed by 10.0.0.1 port 41274
Jan 13 20:47:09.852816 sshd-session[6124]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:09.857098 systemd[1]: sshd@25-10.0.0.149:22-10.0.0.1:41274.service: Deactivated successfully.
Jan 13 20:47:09.859226 systemd-logind[1542]: Session 26 logged out. Waiting for processes to exit.
Jan 13 20:47:09.859347 systemd[1]: session-26.scope: Deactivated successfully.
Jan 13 20:47:09.860618 systemd-logind[1542]: Removed session 26.
Jan 13 20:47:14.838776 systemd[1]: Started sshd@26-10.0.0.149:22-10.0.0.1:59118.service - OpenSSH per-connection server daemon (10.0.0.1:59118).
Jan 13 20:47:14.952766 sshd[6142]: Accepted publickey for core from 10.0.0.1 port 59118 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:47:14.954196 sshd-session[6142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:14.958041 systemd-logind[1542]: New session 27 of user core.
Jan 13 20:47:14.967757 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 13 20:47:15.078002 sshd[6145]: Connection closed by 10.0.0.1 port 59118
Jan 13 20:47:15.078359 sshd-session[6142]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:15.082365 systemd[1]: sshd@26-10.0.0.149:22-10.0.0.1:59118.service: Deactivated successfully.
Jan 13 20:47:15.084826 systemd-logind[1542]: Session 27 logged out. Waiting for processes to exit.
Jan 13 20:47:15.084922 systemd[1]: session-27.scope: Deactivated successfully.
Jan 13 20:47:15.085802 systemd-logind[1542]: Removed session 27.
Jan 13 20:47:20.086793 systemd[1]: Started sshd@27-10.0.0.149:22-10.0.0.1:59132.service - OpenSSH per-connection server daemon (10.0.0.1:59132).
Jan 13 20:47:20.119483 sshd[6203]: Accepted publickey for core from 10.0.0.1 port 59132 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:47:20.121302 sshd-session[6203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:20.125862 systemd-logind[1542]: New session 28 of user core.
Jan 13 20:47:20.132715 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 13 20:47:20.246178 sshd[6207]: Connection closed by 10.0.0.1 port 59132
Jan 13 20:47:20.246570 sshd-session[6203]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:20.250350 systemd[1]: sshd@27-10.0.0.149:22-10.0.0.1:59132.service: Deactivated successfully.
Jan 13 20:47:20.253122 systemd[1]: session-28.scope: Deactivated successfully.
Jan 13 20:47:20.253215 systemd-logind[1542]: Session 28 logged out. Waiting for processes to exit.
Jan 13 20:47:20.254878 systemd-logind[1542]: Removed session 28.
Jan 13 20:47:20.382581 kubelet[2805]: E0113 20:47:20.382452 2805 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:47:25.261756 systemd[1]: Started sshd@28-10.0.0.149:22-10.0.0.1:40320.service - OpenSSH per-connection server daemon (10.0.0.1:40320).
Jan 13 20:47:25.294756 sshd[6219]: Accepted publickey for core from 10.0.0.1 port 40320 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:47:25.296546 sshd-session[6219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:25.301061 systemd-logind[1542]: New session 29 of user core.
Jan 13 20:47:25.308879 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 13 20:47:25.420959 sshd[6223]: Connection closed by 10.0.0.1 port 40320
Jan 13 20:47:25.421373 sshd-session[6219]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:25.425674 systemd[1]: sshd@28-10.0.0.149:22-10.0.0.1:40320.service: Deactivated successfully.
Jan 13 20:47:25.428092 systemd-logind[1542]: Session 29 logged out. Waiting for processes to exit.
Jan 13 20:47:25.428178 systemd[1]: session-29.scope: Deactivated successfully.
Jan 13 20:47:25.429430 systemd-logind[1542]: Removed session 29.
Jan 13 20:47:30.430788 systemd[1]: Started sshd@29-10.0.0.149:22-10.0.0.1:40328.service - OpenSSH per-connection server daemon (10.0.0.1:40328).
Jan 13 20:47:30.467742 sshd[6237]: Accepted publickey for core from 10.0.0.1 port 40328 ssh2: RSA SHA256:uJ7Cm0ZiB1cKFsV9zv9H+G33T+grLCcYOUFbEs15LGg
Jan 13 20:47:30.469513 sshd-session[6237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:47:30.475116 systemd-logind[1542]: New session 30 of user core.
Jan 13 20:47:30.480879 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 13 20:47:30.593661 sshd[6241]: Connection closed by 10.0.0.1 port 40328
Jan 13 20:47:30.593985 sshd-session[6237]: pam_unix(sshd:session): session closed for user core
Jan 13 20:47:30.597822 systemd[1]: sshd@29-10.0.0.149:22-10.0.0.1:40328.service: Deactivated successfully.
Jan 13 20:47:30.600528 systemd-logind[1542]: Session 30 logged out. Waiting for processes to exit.
Jan 13 20:47:30.600716 systemd[1]: session-30.scope: Deactivated successfully.
Jan 13 20:47:30.602081 systemd-logind[1542]: Removed session 30.